diff --git a/.jcheck/conf b/.jcheck/conf
index 17552ee7361ab..617f3d95ed8e6 100644
--- a/.jcheck/conf
+++ b/.jcheck/conf
@@ -5,6 +5,7 @@ version=23
[checks]
error=author,committer,reviewers,merge,issues,executable,symlink,message,hg-tag,whitespace,problemlists
+warning=issuestitle
[repository]
tags=(?:jdk-(?:[1-9]([0-9]*)(?:\.(?:0|[1-9][0-9]*)){0,4})(?:\+(?:(?:[0-9]+))|(?:-ga)))|(?:jdk[4-9](?:u\d{1,3})?-(?:(?:b\d{2,3})|(?:ga)))|(?:hs\d\d(?:\.\d{1,2})?-b\d\d)
diff --git a/doc/testing.html b/doc/testing.html
index 251475808498b..c56dfa3118833 100644
--- a/doc/testing.html
+++ b/doc/testing.html
@@ -177,10 +177,11 @@
tier3
: This test group includes more stressful
tests, the tests for corner cases not covered by previous tiers, plus
the tests that require GUIs. As such, this suite should either be run
@@ -367,7 +368,7 @@
Test suite control
just pass unnoticed.
To separate multiple keyword=value pairs, use ;
(semicolon). Since the shell normally eats ;, the
-recommended usage is to write the assignment inside qoutes, e.g.
+recommended usage is to write the assignment inside quotes, e.g.
JTREG="...;..."
. This will also make sure spaces are
preserved, as in
JTREG="JAVA_OPTIONS=-XshowSettings -Xlog:gc+ref=debug"
.
@@ -396,10 +397,8 @@ JAVA_OPTIONS
Applies to JTReg, GTest and Micro.
VM_OPTIONS
Applies to JTReg, GTest and Micro.
-AOT_MODULES
-Applies to JTReg and GTest.
JCOV
-This keywords applies globally to the test runner system. If set to
+This keyword applies globally to the test runner system. If set to
true, it enables JCov coverage reporting for all tests run.
To be useful, the JDK under test must be run with a JDK built with JCov
instrumentation
@@ -499,11 +498,6 @@
VM_OPTIONS
LAUNCHER_OPTIONS
Additional Java options that are sent to the java launcher that
starts the JTReg harness.
-AOT_MODULES
-Generate AOT modules before testing for the specified module, or set
-of modules. If multiple modules are specified, they should be separated
-by space (or, to help avoid quoting issues, the special value
-%20).
RETRY_COUNT
Retry failed tests up to a set number of times, until they pass. This
allows to pass the tests with intermittent failures. Defaults to 0.
@@ -526,11 +520,6 @@ OPTIONS
Additional options to the Gtest test framework.
Use GTEST="OPTIONS=--help"
to see all available Gtest
options.
-AOT_MODULES
-Generate AOT modules before testing for the specified module, or set
-of modules. If multiple modules are specified, they should be separated
-by space (or, to help avoid quoting issues, the special value
-%20).
Microbenchmark keywords
FORK
Override the number of benchmark forks to spawn. Same as specifying
@@ -574,7 +563,7 @@
Non-US locale
If your locale is non-US, some tests are likely to fail. To work
around this you can set the locale to US. On Unix platforms simply
setting LANG="en_US" in the environment before running
-tests should work. On Windows or MacOS, setting
+tests should work. On Windows or macOS, setting
JTREG="VM_OPTIONS=-Duser.language=en -Duser.country=US"
helps for most, but not all test cases.
For example:
@@ -609,7 +598,7 @@ macOS
Shortcuts; select or deselect desired shortcut.
For example,
test/jdk/javax/swing/TooltipManager/JMenuItemToolTipKeyBindingsTest/JMenuItemToolTipKeyBindingsTest.java
-fails on MacOS because it uses CTRL + F1 key sequence to
+fails on macOS because it uses CTRL + F1 key sequence to
show or hide tooltip message but the key combination is reserved by the
operating system. To run the test correctly the default global key
shortcut should be disabled using the steps described above, and then
diff --git a/doc/testing.md b/doc/testing.md
index 63a869bfc1c0d..351745c9293c5 100644
--- a/doc/testing.md
+++ b/doc/testing.md
@@ -102,12 +102,12 @@ TEST="tier1"`, but the latter is more tab-completion friendly. For more complex
test runs, the `test TEST="x"` solution needs to be used.
The test specifications given in `TEST` is parsed into fully qualified test
-descriptors, which clearly and unambigously show which tests will be run. As an
-example, `:tier1` will expand to `jtreg:$(TOPDIR)/test/hotspot/jtreg:tier1
-jtreg:$(TOPDIR)/test/jdk:tier1 jtreg:$(TOPDIR)/test/langtools:tier1
-jtreg:$(TOPDIR)/test/nashorn:tier1 jtreg:$(TOPDIR)/test/jaxp:tier1`. You can
-always submit a list of fully qualified test descriptors in the `TEST` variable
-if you want to shortcut the parser.
+descriptors, which clearly and unambiguously show which tests will be run. As
+an example, `:tier1` will expand to include all subcomponent test directories
+that define `tier1`, for example: `jtreg:$(TOPDIR)/test/hotspot/jtreg:tier1
+jtreg:$(TOPDIR)/test/jdk:tier1 jtreg:$(TOPDIR)/test/langtools:tier1 ...`. You
+can always submit a list of fully qualified test descriptors in the `TEST`
+variable if you want to shortcut the parser.
### Common Test Groups
@@ -151,7 +151,7 @@ A brief description of the tiered test groups:
- `tier2`: This test group covers even more ground. These contain, among other
things, tests that either run for too long to be at `tier1`, or may require
special configuration, or tests that are less stable, or cover the broader
- range of non-core JVM and JDK features/components(for example, XML).
+ range of non-core JVM and JDK features/components (for example, XML).
- `tier3`: This test group includes more stressful tests, the tests for corner
cases not covered by previous tiers, plus the tests that require GUIs. As
@@ -294,7 +294,7 @@ would just pass unnoticed.
To separate multiple keyword=value pairs, use `;` (semicolon). Since the shell
normally eats `;`, the recommended usage is to write the assignment inside
-qoutes, e.g. `JTREG="...;..."`. This will also make sure spaces are preserved,
+quotes, e.g. `JTREG="...;..."`. This will also make sure spaces are preserved,
as in `JTREG="JAVA_OPTIONS=-XshowSettings -Xlog:gc+ref=debug"`.
(Other ways are possible, e.g. using backslash:
@@ -334,13 +334,9 @@ Applies to JTReg, GTest and Micro.
Applies to JTReg, GTest and Micro.
-#### AOT_MODULES
-
-Applies to JTReg and GTest.
-
#### JCOV
-This keywords applies globally to the test runner system. If set to `true`, it
+This keyword applies globally to the test runner system. If set to `true`, it
enables JCov coverage reporting for all tests run. To be useful, the JDK under
test must be run with a JDK built with JCov instrumentation (`configure
--with-jcov=`, `make jcov-image`).
@@ -480,12 +476,6 @@ your test classes, use `JAVA_OPTIONS`.
Additional Java options that are sent to the java launcher that starts the
JTReg harness.
-#### AOT_MODULES
-
-Generate AOT modules before testing for the specified module, or set of
-modules. If multiple modules are specified, they should be separated by space
-(or, to help avoid quoting issues, the special value `%20`).
-
#### RETRY_COUNT
Retry failed tests up to a set number of times, until they pass. This allows to
@@ -517,12 +507,6 @@ Additional options to the Gtest test framework.
Use `GTEST="OPTIONS=--help"` to see all available Gtest options.
-#### AOT_MODULES
-
-Generate AOT modules before testing for the specified module, or set of
-modules. If multiple modules are specified, they should be separated by space
-(or, to help avoid quoting issues, the special value `%20`).
-
### Microbenchmark keywords
#### FORK
@@ -587,7 +571,7 @@ $ make test TEST="jtreg:test/hotspot/jtreg/containers/docker" \
If your locale is non-US, some tests are likely to fail. To work around this
you can set the locale to US. On Unix platforms simply setting `LANG="en_US"`
-in the environment before running tests should work. On Windows or MacOS,
+in the environment before running tests should work. On Windows or macOS,
setting `JTREG="VM_OPTIONS=-Duser.language=en -Duser.country=US"` helps for
most, but not all test cases.
@@ -635,7 +619,7 @@ select or deselect desired shortcut.
For example,
test/jdk/javax/swing/TooltipManager/JMenuItemToolTipKeyBindingsTest/JMenuItemToolTipKeyBindingsTest.java
-fails on MacOS because it uses `CTRL + F1` key sequence to show or hide tooltip
+fails on macOS because it uses `CTRL + F1` key sequence to show or hide tooltip
message but the key combination is reserved by the operating system. To run the
test correctly the default global key shortcut should be disabled using the
steps described above, and then deselect "Turn keyboard access on or off"
diff --git a/make/CompileDemos.gmk b/make/CompileDemos.gmk
index 599ce9842e4b0..6c751552e5042 100644
--- a/make/CompileDemos.gmk
+++ b/make/CompileDemos.gmk
@@ -183,7 +183,7 @@ $(eval $(call SetupBuildDemo, SwingSet2, \
))
$(eval $(call SetupBuildDemo, Font2DTest, \
- DISABLED_WARNINGS := rawtypes deprecation unchecked serial cast this-escape, \
+ DISABLED_WARNINGS := rawtypes deprecation unchecked serial cast this-escape dangling-doc-comments, \
DEMO_SUBDIR := jfc, \
))
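The `dangling-doc-comments` warning disabled above (and in several module makefiles later in this patch) is the javac lint category for doc comments that are not attached to any declaration. A minimal reproducer, with a hypothetical file name and contents not taken from this patch:

```java
// DanglingDemo.java -- compile with: javac -Xlint:dangling-doc-comments DanglingDemo.java
/**
 * Dangling: this doc comment precedes the import section, not a declaration.
 */
import java.util.List;

public class DanglingDemo {
    /** Dangling: immediately followed by another doc comment... */
    /** ...so only this second comment attaches to the field below. */
    static List<String> names = List.of();
}
```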
diff --git a/make/Docs.gmk b/make/Docs.gmk
index 2977f6f66f9e1..192843de9cdec 100644
--- a/make/Docs.gmk
+++ b/make/Docs.gmk
@@ -681,28 +681,25 @@ ifeq ($(ENABLE_PANDOC), true)
$(TOPDIR)/make/jdk/src/classes/build/tools/pandocfilter)
$(foreach m, $(ALL_MODULES), \
- $(eval MAN_$m := $(call FindModuleManDirs, $m)) \
- $(foreach d, $(MAN_$m), \
- $(foreach f, $(call ApplySpecFilter, $(filter %.md, $(call FindFiles, $d))), \
- $(eval $m_$f_NAME := MAN_TO_HTML_$m_$(strip $(call RelativePath, $f, $(TOPDIR)))) \
- $(eval $(call SetupProcessMarkdown, $($m_$f_NAME), \
- SRC := $d, \
- FILES := $f, \
- DEST := $(DOCS_OUTPUTDIR)/specs/man, \
- FILTER := $(PANDOC_HTML_MANPAGE_FILTER), \
- CSS := $(GLOBAL_SPECS_DEFAULT_CSS_FILE), \
- REPLACEMENTS := \
- @@COPYRIGHT_YEAR@@ => $(COPYRIGHT_YEAR) ; \
- @@VERSION_SHORT@@ => $(VERSION_SHORT) ; \
- @@VERSION_SPECIFICATION@@ => $(VERSION_SPECIFICATION), \
- OPTIONS := --toc -V include-before='$(SPECS_TOP)' -V include-after='$(SPECS_BOTTOM_1)', \
- POST_PROCESS := $(TOOL_FIXUPPANDOC) --insert-nav --nav-right-info '$(HEADER_RIGHT_SIDE_INFO)' \
- --nav-subdirs 1 --nav-link-guides, \
- EXTRA_DEPS := $(PANDOC_HTML_MANPAGE_FILTER) \
- $(PANDOC_HTML_MANPAGE_FILTER_SOURCE), \
- )) \
- $(eval JDK_SPECS_TARGETS += $($($m_$f_NAME))) \
- ) \
+ $(eval MAN_$m := $(call ApplySpecFilter, $(filter %.md, $(call FindFiles, \
+ $(call FindModuleManDirs, $m))))) \
+ $(if $(MAN_$m), \
+ $(eval $(call SetupProcessMarkdown, MAN_TO_HTML_$m, \
+ FILES := $(MAN_$m), \
+ DEST := $(DOCS_OUTPUTDIR)/specs/man, \
+ FILTER := $(PANDOC_HTML_MANPAGE_FILTER), \
+ CSS := $(GLOBAL_SPECS_DEFAULT_CSS_FILE), \
+ REPLACEMENTS := \
+ @@COPYRIGHT_YEAR@@ => $(COPYRIGHT_YEAR) ; \
+ @@VERSION_SHORT@@ => $(VERSION_SHORT) ; \
+ @@VERSION_SPECIFICATION@@ => $(VERSION_SPECIFICATION), \
+ OPTIONS := --toc -V include-before='$(SPECS_TOP)' -V include-after='$(SPECS_BOTTOM_1)', \
+ POST_PROCESS := $(TOOL_FIXUPPANDOC) --insert-nav --nav-right-info '$(HEADER_RIGHT_SIDE_INFO)' \
+ --nav-subdirs 1 --nav-link-guides, \
+ EXTRA_DEPS := $(PANDOC_HTML_MANPAGE_FILTER) \
+ $(PANDOC_HTML_MANPAGE_FILTER_SOURCE), \
+ )) \
+ $(eval JDK_SPECS_TARGETS += $(MAN_TO_HTML_$m)) \
) \
)
diff --git a/make/InitSupport.gmk b/make/InitSupport.gmk
index c59e685bf4f3f..b6a0a8d5d6fd5 100644
--- a/make/InitSupport.gmk
+++ b/make/InitSupport.gmk
@@ -501,9 +501,12 @@ else # $(HAS_SPEC)=true
# Failure logs are only supported for "parallel" main targets, not the
# (trivial) sequential make targets (such as clean and reconfigure),
# since the failure-logs directory creation will conflict with clean.
+ # We also make sure the javatmp directory exists, which is needed if a java
+ # process (like javac) is using java.io.tmpdir.
define PrepareFailureLogs
$(RM) -r $(MAKESUPPORT_OUTPUTDIR)/failure-logs 2> /dev/null && \
$(MKDIR) -p $(MAKESUPPORT_OUTPUTDIR)/failure-logs
+ $(MKDIR) -p $(JAVA_TMP_DIR)
$(RM) $(MAKESUPPORT_OUTPUTDIR)/exit-with-error 2> /dev/null
endef
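The new `$(MKDIR) -p $(JAVA_TMP_DIR)` line matters because `java.io.tmpdir` is not created on demand: temp-file APIs fail outright if the directory is missing. A small sketch of the failure mode, using a hypothetical path:

```java
import java.io.File;
import java.io.IOException;

public class TmpDirCheck {
    public static void main(String[] args) throws IOException {
        // Simulate a build where the javatmp directory was never created
        // (the path here is hypothetical).
        System.setProperty("java.io.tmpdir", "/nonexistent/javatmp");
        // Throws IOException rather than creating the directory on demand,
        // which is why PrepareFailureLogs now pre-creates JAVA_TMP_DIR.
        File.createTempFile("build", ".tmp");
    }
}
```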
diff --git a/make/TestImage.gmk b/make/TestImage.gmk
index b719d6824da30..f6d680fb69073 100644
--- a/make/TestImage.gmk
+++ b/make/TestImage.gmk
@@ -28,6 +28,9 @@ default: all
include $(SPEC)
include MakeBase.gmk
+# Hook to include the corresponding custom file, if present.
+$(eval $(call IncludeCustomExtension, TestImage-pre.gmk))
+
############################################################################
BUILD_INFO_PROPERTIES := $(TEST_IMAGE_DIR)/build-info.properties
diff --git a/make/autoconf/flags-cflags.m4 b/make/autoconf/flags-cflags.m4
index c96b289084cca..97a3ec14ecd5e 100644
--- a/make/autoconf/flags-cflags.m4
+++ b/make/autoconf/flags-cflags.m4
@@ -459,7 +459,7 @@ AC_DEFUN([FLAGS_SETUP_CFLAGS_HELPER],
CFLAGS_OS_DEF_JVM="-D_ALLBSD_SOURCE -D_DARWIN_C_SOURCE -D_XOPEN_SOURCE"
CFLAGS_OS_DEF_JDK="-D_ALLBSD_SOURCE -D_DARWIN_UNLIMITED_SELECT"
elif test "x$OPENJDK_TARGET_OS" = xaix; then
- CFLAGS_OS_DEF_JVM="-DAIX -Dalloca'(size)'=__builtin_alloca'(size)' -D_LARGE_FILES"
+ CFLAGS_OS_DEF_JVM="-DAIX -D_LARGE_FILES"
CFLAGS_OS_DEF_JDK="-D_LARGE_FILES"
elif test "x$OPENJDK_TARGET_OS" = xbsd; then
CFLAGS_OS_DEF_JDK="-D_ALLBSD_SOURCE"
diff --git a/make/autoconf/flags-ldflags.m4 b/make/autoconf/flags-ldflags.m4
index acb815515a24d..a2d0d4e606afb 100644
--- a/make/autoconf/flags-ldflags.m4
+++ b/make/autoconf/flags-ldflags.m4
@@ -71,7 +71,7 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
LDFLAGS_CXX_PARTIAL_LINKING="$MACHINE_FLAG -r"
if test "x$OPENJDK_TARGET_OS" = xlinux; then
- BASIC_LDFLAGS="-Wl,--exclude-libs,ALL"
+ BASIC_LDFLAGS="-fuse-ld=lld -Wl,--exclude-libs,ALL"
fi
if test "x$OPENJDK_TARGET_OS" = xaix; then
BASIC_LDFLAGS="-Wl,-b64 -Wl,-brtl -Wl,-bnorwexec -Wl,-bnolibpath -Wl,-bnoexpall \
@@ -118,7 +118,6 @@ AC_DEFUN([FLAGS_SETUP_LDFLAGS_HELPER],
# Setup LDFLAGS for linking executables
if test "x$TOOLCHAIN_TYPE" = xgcc; then
- EXECUTABLE_LDFLAGS="$EXECUTABLE_LDFLAGS -Wl,--allow-shlib-undefined"
# Enabling pie on 32 bit builds prevents the JVM from allocating a continuous
# java heap.
if test "x$OPENJDK_TARGET_CPU_BITS" != "x32"; then
diff --git a/make/autoconf/jdk-options.m4 b/make/autoconf/jdk-options.m4
index 8c91a2ccd33eb..23f331e79b7f9 100644
--- a/make/autoconf/jdk-options.m4
+++ b/make/autoconf/jdk-options.m4
@@ -190,6 +190,17 @@ AC_DEFUN_ONCE([JDKOPT_SETUP_JDK_OPTIONS],
fi
AC_SUBST(INCLUDE_SA)
+ # Setup default CDS alignment. On platforms where one build may run on machines with different
+  # page sizes, the JVM chooses a compatible alignment to fit all possible page sizes. This slightly
+ # increases archive size.
+ # The only platform having this problem at the moment is Linux on aarch64, which may encounter
+  # three different page sizes: 4K, 64K, and if run on Apple M1 hardware, 16K.
+ COMPATIBLE_CDS_ALIGNMENT_DEFAULT=false
+ if test "x$OPENJDK_TARGET_OS" = "xlinux" && test "x$OPENJDK_TARGET_CPU" = "xaarch64"; then
+ COMPATIBLE_CDS_ALIGNMENT_DEFAULT=true
+ fi
+ AC_SUBST(COMPATIBLE_CDS_ALIGNMENT_DEFAULT)
+
# Compress jars
COMPRESS_JARS=false
@@ -673,7 +684,7 @@ AC_DEFUN([JDKOPT_ENABLE_DISABLE_CDS_ARCHIVE],
#
AC_DEFUN([JDKOPT_ENABLE_DISABLE_COMPATIBLE_CDS_ALIGNMENT],
[
- UTIL_ARG_ENABLE(NAME: compatible-cds-alignment, DEFAULT: false,
+ UTIL_ARG_ENABLE(NAME: compatible-cds-alignment, DEFAULT: $COMPATIBLE_CDS_ALIGNMENT_DEFAULT,
RESULT: ENABLE_COMPATIBLE_CDS_ALIGNMENT,
DESC: [enable use alternative compatible cds core region alignment],
DEFAULT_DESC: [disabled],
diff --git a/make/autoconf/spec.gmk.template b/make/autoconf/spec.gmk.template
index 616999f27368d..7ef1aee861ab8 100644
--- a/make/autoconf/spec.gmk.template
+++ b/make/autoconf/spec.gmk.template
@@ -353,6 +353,8 @@ BUNDLES_OUTPUTDIR = $(OUTPUTDIR)/bundles
TESTMAKE_OUTPUTDIR = $(OUTPUTDIR)/test-make
MAKESUPPORT_OUTPUTDIR = $(OUTPUTDIR)/make-support
+JAVA_TMP_DIR = $(SUPPORT_OUTPUTDIR)/javatmp
+
# This does not get overridden in a bootcycle build
CONFIGURESUPPORT_OUTPUTDIR := @CONFIGURESUPPORT_OUTPUTDIR@
BUILDJDK_OUTPUTDIR = $(OUTPUTDIR)/buildjdk
@@ -634,7 +636,8 @@ STATIC_BUILD := @STATIC_BUILD@
STRIPFLAGS := @STRIPFLAGS@
-JAVA_FLAGS := @JAVA_FLAGS@
+JAVA_FLAGS_TMPDIR := -Djava.io.tmpdir=$(JAVA_TMP_DIR)
+JAVA_FLAGS := @JAVA_FLAGS@ $(JAVA_FLAGS_TMPDIR)
JAVA_FLAGS_BIG := @JAVA_FLAGS_BIG@
JAVA_FLAGS_SMALL := @JAVA_FLAGS_SMALL@
BUILD_JAVA_FLAGS_SMALL := @BUILD_JAVA_FLAGS_SMALL@
@@ -675,7 +678,7 @@ DOCS_REFERENCE_JAVADOC := @DOCS_REFERENCE_JAVADOC@
SOURCE_REVISION_TRACKER := $(SUPPORT_OUTPUTDIR)/src-rev/source-revision-tracker
# Interim langtools modules and arguments
-INTERIM_LANGTOOLS_BASE_MODULES := java.compiler jdk.compiler jdk.javadoc
+INTERIM_LANGTOOLS_BASE_MODULES := java.compiler jdk.compiler jdk.internal.md jdk.javadoc
INTERIM_LANGTOOLS_MODULES := $(addsuffix .interim, $(INTERIM_LANGTOOLS_BASE_MODULES))
INTERIM_LANGTOOLS_ADD_EXPORTS := \
--add-exports java.base/sun.reflect.annotation=jdk.compiler.interim \
diff --git a/make/common/JavaCompilation.gmk b/make/common/JavaCompilation.gmk
index 259c1834da330..1503c86aff1df 100644
--- a/make/common/JavaCompilation.gmk
+++ b/make/common/JavaCompilation.gmk
@@ -197,7 +197,7 @@ define SetupJavaCompilationBody
ifeq ($$($1_COMPILER), bootjdk)
# Javac server is not available when using the bootjdk compiler.
- $1_JAVAC_CMD := $$(JAVAC)
+ $1_JAVAC_CMD := $$(JAVAC) -J$$(JAVA_FLAGS_TMPDIR)
ifeq ($$($1_SMALL_JAVA), true)
$1_FLAGS += $$(addprefix -J, $$(JAVA_FLAGS_SMALL))
@@ -211,7 +211,7 @@ define SetupJavaCompilationBody
$1_TARGET_RELEASE := $$(TARGET_RELEASE_BOOTJDK)
endif
else ifeq ($$($1_COMPILER), buildjdk)
- $1_JAVAC_CMD := $$(BUILD_JAVAC)
+ $1_JAVAC_CMD := $$(BUILD_JAVAC) -J$$(JAVA_FLAGS_TMPDIR)
ifeq ($$($1_TARGET_RELEASE), )
# If unspecified, default to the new jdk we're building
diff --git a/make/common/ProcessMarkdown.gmk b/make/common/ProcessMarkdown.gmk
index e78c026a89c37..125892504f91a 100644
--- a/make/common/ProcessMarkdown.gmk
+++ b/make/common/ProcessMarkdown.gmk
@@ -38,48 +38,53 @@ define ProcessMarkdown
$1_$2_INPUT_FILE := $3/$2
$1_$2_MARKER := $$(subst /,_,$1_$2)
- ifneq ($$($1_REPLACEMENTS), )
- $1_$2_PANDOC_INPUT := $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER)_pre.tmp
-
- $$(eval $$(call SetupTextFileProcessing, $1_$2_PREPROCESSED, \
- SOURCE_FILES := $$($1_$2_INPUT_FILE), \
- OUTPUT_FILE := $$($1_$2_PANDOC_INPUT), \
- REPLACEMENTS := $$($1_REPLACEMENTS), \
- ))
- else
- $1_$2_PANDOC_INPUT := $$($1_$2_INPUT_FILE)
- endif
-
- ifneq ($$($1_POST_PROCESS), )
- $1_$2_PANDOC_OUTPUT := $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER)_post.tmp
- else
- $1_$2_PANDOC_OUTPUT := $$($1_$2_OUTPUT_FILE)
- endif
+ # Only continue if this target file hasn't been processed already. This lets
+ # the first found source file override any other with the same name.
+ ifeq ($$($1_$2_OUTPUT_FILE_PROCESSED), )
+ $1_$2_OUTPUT_FILE_PROCESSED := true
+
+ ifneq ($$($1_REPLACEMENTS), )
+ $1_$2_PANDOC_INPUT := $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER)_pre.tmp
+
+ $$(eval $$(call SetupTextFileProcessing, $1_$2_PREPROCESSED, \
+ SOURCE_FILES := $$($1_$2_INPUT_FILE), \
+ OUTPUT_FILE := $$($1_$2_PANDOC_INPUT), \
+ REPLACEMENTS := $$($1_REPLACEMENTS), \
+ ))
+ else
+ $1_$2_PANDOC_INPUT := $$($1_$2_INPUT_FILE)
+ endif
- ifneq ($$($1_CSS), )
- ifneq ($$(findstring https:/, $$($1_CSS)), )
- $1_$2_CSS_OPTION := --css '$$($1_CSS)'
+ ifneq ($$($1_POST_PROCESS), )
+ $1_$2_PANDOC_OUTPUT := $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER)_post.tmp
else
- $1_$2_CSS := $$(strip $$(call RelativePath, $$($1_CSS), $$($1_$2_TARGET_DIR)))
- $1_$2_CSS_OPTION := --css '$$($1_$2_CSS)'
+ $1_$2_PANDOC_OUTPUT := $$($1_$2_OUTPUT_FILE)
endif
- endif
- # This does not need to be included in VARDEPS since it's from the actual
- # source file. Only run the shell if the recipe gets executed below.
- $1_$2_OPTIONS_FROM_SRC = \
- $$(shell $$(GREP) _pandoc-options_: $3/$2 | $$(CUT) -d : -f 2-)
+ ifneq ($$($1_CSS), )
+ ifneq ($$(findstring https:/, $$($1_CSS)), )
+ $1_$2_CSS_OPTION := --css '$$($1_CSS)'
+ else
+ $1_$2_CSS := $$(strip $$(call RelativePath, $$($1_CSS), $$($1_$2_TARGET_DIR)))
+ $1_$2_CSS_OPTION := --css '$$($1_$2_CSS)'
+ endif
+ endif
- ifneq ($$($1_FILTER), )
- $1_$2_OPTIONS := --filter $$($1_FILTER)
- endif
+ # This does not need to be included in VARDEPS since it's from the actual
+ # source file. Only run the shell if the recipe gets executed below.
+ $1_$2_OPTIONS_FROM_SRC = \
+ $$(shell $$(GREP) _pandoc-options_: $3/$2 | $$(CUT) -d : -f 2-)
+
+ ifneq ($$($1_FILTER), )
+ $1_$2_OPTIONS := --filter $$($1_FILTER)
+ endif
- $1_$2_VARDEPS := $$($1_OPTIONS) $$($1_$2_OPTIONS) $$($1_CSS) \
- $$($1_REPLACEMENTS) $$($1_POST_PROCESS)
- $1_$2_VARDEPS_FILE := $$(call DependOnVariable, $1_$2_VARDEPS, \
- $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER).vardeps)
+ $1_$2_VARDEPS := $$($1_OPTIONS) $$($1_$2_OPTIONS) $$($1_CSS) \
+ $$($1_REPLACEMENTS) $$($1_POST_PROCESS)
+ $1_$2_VARDEPS_FILE := $$(call DependOnVariable, $1_$2_VARDEPS, \
+ $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER).vardeps)
- $$($1_$2_PANDOC_OUTPUT): $$($1_$2_PANDOC_INPUT) $$($1_$2_VARDEPS_FILE) $$($1_EXTRA_DEPS)
+ $$($1_$2_PANDOC_OUTPUT): $$($1_$2_PANDOC_INPUT) $$($1_$2_VARDEPS_FILE) $$($1_EXTRA_DEPS)
$$(call LogInfo, Converting $2 to $$($1_FORMAT))
$$(call MakeDir, $$(SUPPORT_OUTPUTDIR)/markdown $$(dir $$($1_$2_PANDOC_OUTPUT)))
$$(call ExecuteWithLog, $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER), \
@@ -96,17 +101,18 @@ define ProcessMarkdown
fi
endif
- # If we have no post processing, PANDOC_OUTPUT is set to OUTPUT_FILE. Otherwise
- # PANDOC_OUTPUT is a temporary file, and we must now create the real OUTPUT_FILE.
- ifneq ($$($1_POST_PROCESS), )
- $$($1_$2_OUTPUT_FILE): $$($1_$2_PANDOC_OUTPUT)
+ # If we have no post processing, PANDOC_OUTPUT is set to OUTPUT_FILE. Otherwise
+ # PANDOC_OUTPUT is a temporary file, and we must now create the real OUTPUT_FILE.
+ ifneq ($$($1_POST_PROCESS), )
+ $$($1_$2_OUTPUT_FILE): $$($1_$2_PANDOC_OUTPUT)
$$(call LogInfo, Post-processing markdown file $2)
$$(call MakeDir, $$(SUPPORT_OUTPUTDIR)/markdown $$($1_$2_TARGET_DIR))
$$(call ExecuteWithLog, $$(SUPPORT_OUTPUTDIR)/markdown/$$($1_$2_MARKER)_post, \
( $$($1_POST_PROCESS) $$($1_$2_PANDOC_OUTPUT) > $$($1_$2_OUTPUT_FILE) ) )
- endif
+ endif
- $1 += $$($1_$2_OUTPUT_FILE)
+ $1 += $$($1_$2_OUTPUT_FILE)
+ endif
endef
################################################################################
diff --git a/make/conf/module-loader-map.conf b/make/conf/module-loader-map.conf
index 5b72bf78aa013..e904031186dee 100644
--- a/make/conf/module-loader-map.conf
+++ b/make/conf/module-loader-map.conf
@@ -120,6 +120,7 @@ NATIVE_ACCESS_MODULES= \
jdk.dynalink \
jdk.httpserver \
jdk.incubator.vector \
+ jdk.internal.le \
jdk.internal.vm.ci \
jdk.jfr \
jdk.jsobject \
diff --git a/make/data/docs-resources/resources/jdk-default.css b/make/data/docs-resources/resources/jdk-default.css
index 371f3ba00b95d..29f4f43764f0b 100644
--- a/make/data/docs-resources/resources/jdk-default.css
+++ b/make/data/docs-resources/resources/jdk-default.css
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -23,56 +23,62 @@
* questions.
*/
+@import url('../api/resource-files/fonts/dejavu.css');
+
body {
- margin: 2em 2em;
font-family: DejaVu Sans, Bitstream Vera Sans, Luxi Sans, Verdana, Arial, Helvetica, sans-serif;
font-size: 10pt;
+ margin: 0;
+ padding: 0;
+ height: 100%;
+ width: 100%;
+ min-width: 100%;
line-height: 1.4;
}
pre, code, tt {
- font-family: DejaVu Sans Mono, Bitstream Vera Sans Mono, Luxi Mono,
- Courier New, monospace;
+ font-family: DejaVu Sans Mono, Bitstream Vera Sans Mono, Luxi Mono, Courier New, monospace;
+}
+
+main, footer {
+ margin: 16px 27px;
+}
+
+/* Adjust horizontal margin for content elements outside of main element */
+:is(body, header) > :is(nav, h1, h2, h3, h4, h5, h6, p, .sub-title) {
+ margin-left: 27px;
+ margin-right: 27px;
}
blockquote {
- margin: 1.5ex 0em 1.5ex 2em;
+ margin: 1.5ex 0 1.5ex 2em;
}
p {
- padding: 0pt;
- margin: 1ex 0em;
+ margin-top: 1ex;
+ margin-bottom: 1ex;
}
-p:first-child, pre:first-child { margin-top: 0pt; }
-
-h1 {
- font-weight: bold;
- padding: 0pt;
- margin: 2ex .5ex 1ex 0pt;
+dd > p:first-child, pre:first-child {
+ margin-top: 0;
}
h1:first-child, h2:first-child {
- margin-top: 0ex;
+ margin-top: 0;
}
-h2 {
- font-weight: bold;
- padding: 0pt;
- margin: 2ex 0pt 1ex 0pt;
+h1, h2 {
+ margin-top: 2ex;
+ margin-bottom: 1ex;
}
-h3 {
- font-weight: bold;
- padding: 0pt;
- margin: 1.5ex 0pt 1ex 0pt;
+h3, h4, h5 {
+ margin-top: 1.5ex;
+ margin-bottom: 1ex;
}
h4, h5 {
font-size: 100%;
- font-weight: bold;
- padding: 0pt;
- margin: 1.5ex 0pt 1ex 0pt;
}
.subtitle {
@@ -100,7 +106,7 @@ a[href]:hover {
}
a img {
- border-width: 0px;
+ border-width: 0;
}
img {
@@ -154,9 +160,9 @@ table.centered td {
.draft-header {
text-align: center;
- font-size: 80%;
+ font-size: 11.2px;
padding: 6px;
- margin: -2.5em -2.5em 2.5em -2.5em;
+ line-height: initial;
}
.legal-footer {
@@ -183,29 +189,31 @@ nav#TOC ul ul li::before {
content: " \2022 "
}
-header#title-block-header {
- margin-top:-2em;
+/* Rules below replicate sizing of navigation bar in API docs */
+header#title-block-header div.navbar {
+ padding: 0 20px 0 26px;
+ margin-bottom: 30px;
+ background-color: #4D7A97;
+ color: #FFFFFF;
+ height: 44px;
+ overflow: hidden;
+ font-size: 0.857em;
+ line-height: initial;
+ display: flex;
+ flex-direction: row-reverse;
+ align-items: center;
}
-header#title-block-header div.navbar {
- margin: 0 -2.5em 2.5em -2.5em;
- padding: 0 2.5em;
- background-color:#4D7A97;
- color:#FFFFFF;
- float:left;
- width:100%;
- clear:right;
- min-height:2.8em;
- padding-top:10px;
- overflow:hidden;
- font-size:12px;
+header#title-block-header div.navbar nav {
+ flex: 1 1 auto;
+ font-size: 12px;
+ white-space: nowrap;
}
header#title-block-header div.navbar div {
- float:right;
- font-size:11px;
- height:2.9em;
- margin: auto 0;
+ flex: 0 0 auto;
+ font-size: 10.978px;
+ white-space: nowrap;
}
header#title-block-header ul {
diff --git a/make/data/hotspot-symbols/version-script-clang.txt b/make/data/hotspot-symbols/version-script-clang.txt
new file mode 100644
index 0000000000000..351d64fdd1d57
--- /dev/null
+++ b/make/data/hotspot-symbols/version-script-clang.txt
@@ -0,0 +1,8 @@
+SUNWprivate_1.1 {
+ global:
+ *;
+
+ local:
+ _fini;
+ _init;
+};
diff --git a/make/data/hotspot-symbols/version-script.txt b/make/data/hotspot-symbols/version-script-gcc.txt
similarity index 100%
rename from make/data/hotspot-symbols/version-script.txt
rename to make/data/hotspot-symbols/version-script-gcc.txt
diff --git a/make/hotspot/lib/CompileJvm.gmk b/make/hotspot/lib/CompileJvm.gmk
index c2ff006402529..dff7a159ae5fe 100644
--- a/make/hotspot/lib/CompileJvm.gmk
+++ b/make/hotspot/lib/CompileJvm.gmk
@@ -47,6 +47,12 @@ JVM_LDFLAGS += \
JVM_ASFLAGS += $(EXTRA_ASFLAGS)
+JVM_ASFLAGS += \
+ -I$(TOPDIR)/src/hotspot/os_cpu/$(HOTSPOT_TARGET_OS)_$(HOTSPOT_TARGET_CPU_ARCH) \
+ -I$(TOPDIR)/src/hotspot/os/$(HOTSPOT_TARGET_OS) \
+ -I$(TOPDIR)/src/hotspot/os/$(HOTSPOT_TARGET_OS_TYPE) \
+ #
+
JVM_LIBS += \
$(JVM_LIBS_FEATURES) \
#
@@ -148,7 +154,7 @@ ifeq ($(call isTargetOs, windows), true)
endif
ifeq ($(call isTargetOs, linux), true)
- HOTSPOT_VERSION_SCRIPT := $(TOPDIR)/make/data/hotspot-symbols/version-script.txt
+ HOTSPOT_VERSION_SCRIPT := $(TOPDIR)/make/data/hotspot-symbols/version-script-$(TOOLCHAIN_TYPE).txt
JVM_LDFLAGS += -Wl,-version-script=$(HOTSPOT_VERSION_SCRIPT)
endif
diff --git a/make/ide/visualstudio/hotspot/src/classes/build/tools/projectcreator/FileTreeCreator.java b/make/ide/visualstudio/hotspot/src/classes/build/tools/projectcreator/FileTreeCreator.java
index 385903e676ff7..f8fd111587cd9 100644
--- a/make/ide/visualstudio/hotspot/src/classes/build/tools/projectcreator/FileTreeCreator.java
+++ b/make/ide/visualstudio/hotspot/src/classes/build/tools/projectcreator/FileTreeCreator.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, 2016, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -54,7 +54,7 @@ public FileTreeCreator(Path startDir, Vector allConfigs, WinGammaPl
attributes.push(new DirAttributes());
}
- public class DirAttributes {
+ public static class DirAttributes {
private HashSet ignores;
private HashSet disablePch;
diff --git a/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java b/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java
index 921eaeee764da..1b930ca752777 100644
--- a/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java
+++ b/make/jdk/src/classes/build/tools/classlist/HelloClasslist.java
@@ -151,6 +151,17 @@ public static void main(String ... args) throws Throwable {
LOGGER.log(Level.FINE, "New Date: " + newDate + " - old: " + oldDate);
+ // Pull SwitchBootstraps and associated classes into the classlist
+ record A(int a) { }
+ record B(int b) { }
+ Object o = new A(4711);
+ int value = switch (o) {
+ case A a -> a.a;
+ case B b -> b.b;
+ default -> 17;
+ };
+ LOGGER.log(Level.FINE, "Value: " + value);
+
// The Striped64$Cell is loaded rarely only when there's a contention among
// multiple threads performing LongAdder.increment(). This results in
// an inconsistency in the classlist between builds (see JDK-8295951).
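Why a pattern switch pulls in `SwitchBootstraps`: switching over reference-type patterns compiles to an `invokedynamic` linked by `java.lang.runtime.SwitchBootstraps.typeSwitch`, so merely executing one touches that class and its helpers, which is what HelloClasslist wants recorded. A standalone sketch of the same effect (not part of the patch):

```java
public class SwitchBootstrapsProbe {
    public static void main(String[] args) throws Exception {
        Object o = Integer.valueOf(42);
        // This pattern switch links through SwitchBootstraps.typeSwitch.
        String s = switch (o) {
            case Integer i -> "int " + i;
            case String str -> "str " + str;
            default -> "other";
        };
        // Class.forName just makes the dependency visible; the switch above
        // has already triggered loading of the bootstrap class.
        System.out.println(s + " / " + Class.forName("java.lang.runtime.SwitchBootstraps"));
    }
}
```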
diff --git a/make/jdk/src/classes/build/tools/cldrconverter/CLDRConverter.java b/make/jdk/src/classes/build/tools/cldrconverter/CLDRConverter.java
index 8d39ef4c1d55b..15251358a01a1 100644
--- a/make/jdk/src/classes/build/tools/cldrconverter/CLDRConverter.java
+++ b/make/jdk/src/classes/build/tools/cldrconverter/CLDRConverter.java
@@ -87,6 +87,7 @@ public class CLDRConverter {
static final String ZONE_NAME_PREFIX = "timezone.displayname.";
static final String METAZONE_ID_PREFIX = "metazone.id.";
static final String PARENT_LOCALE_PREFIX = "parentLocale.";
+ static final String LIKELY_SCRIPT_PREFIX = "likelyScript.";
static final String META_EMPTY_ZONE_NAME = "EMPTY_ZONE";
static final String[] EMPTY_ZONE = {"", "", "", "", "", ""};
static final String META_ETCUTC_ZONE_NAME = "ETC_UTC";
@@ -114,9 +115,13 @@ public class CLDRConverter {
// "parentLocales" map
private static final Map<String, SortedSet<String>> parentLocalesMap = new HashMap<>();
+ static boolean nonlikelyScript;
private static final ResourceBundle.Control defCon =
ResourceBundle.Control.getControl(ResourceBundle.Control.FORMAT_DEFAULT);
+ // "likelyScript" map
+    private static final Map<String, SortedSet<String>> likelyScriptMap = new HashMap<>();
+
private static Set AVAILABLE_TZIDS;
static int copyrightYear;
static String jdkHeaderTemplate;
@@ -175,7 +180,7 @@ String getKeyword() {
private static boolean verbose;
private CLDRConverter() {
- // no instantiation
+ // no instantiation
}
@SuppressWarnings("AssignmentToForLoopParameter")
@@ -475,8 +480,8 @@ private static void parseSupplemental() throws Exception {
parseLDMLFile(new File(SPPL_SOURCE_FILE), handlerSuppl);
Map parentData = handlerSuppl.getData("root");
parentData.keySet().stream()
- .filter(key -> key.startsWith(PARENT_LOCALE_PREFIX))
- .forEach(key -> {
+ .filter(key -> key.startsWith(PARENT_LOCALE_PREFIX))
+ .forEach(key -> {
parentLocalesMap.put(key, new TreeSet(
Arrays.asList(((String)parentData.get(key)).split(" "))));
});
@@ -492,6 +497,16 @@ private static void parseSupplemental() throws Exception {
// Parse likelySubtags
handlerLikelySubtags = new LikelySubtagsParseHandler();
parseLDMLFile(new File(LIKELYSUBTAGS_SOURCE_FILE), handlerLikelySubtags);
+ handlerLikelySubtags.getData().forEach((from, to) -> {
+ if (!from.contains("-")) { // look for language-only tag
+ var script = to.split("-")[1];
+ var key = LIKELY_SCRIPT_PREFIX + script;
+ var prev = likelyScriptMap.putIfAbsent(key, new TreeSet(Set.of(from)));
+ if (prev != null) {
+ prev.add(from);
+ }
+ }
+ });
// Parse supplementalMetadata
// Currently interested in deprecated time zone ids and language aliases.
@@ -561,6 +576,7 @@ private static void convertBundles(List bundles) throws Exception {
// for now.
if (isBaseModule) {
metaInfo.putAll(parentLocalesMap);
+ metaInfo.putAll(likelyScriptMap);
}
for (Bundle bundle : bundles) {
@@ -1135,7 +1151,7 @@ private static List applyParentLocales(String baseName, List can
// check irregular parents
for (int i = 0; i < candidates.size(); i++) {
Locale l = candidates.get(i);
- Locale p = childToParentLocaleMap.get(l);
+ Locale p = getParentLocale(l);
if (!l.equals(Locale.ROOT) &&
Objects.nonNull(p) &&
!candidates.get(i+1).equals(p)) {
@@ -1152,6 +1168,27 @@ private static List applyParentLocales(String baseName, List can
return candidates;
}
+ private static Locale getParentLocale(Locale child) {
+ Locale parent = childToParentLocaleMap.get(child);
+
+ // check non-likely script for root
+ if (nonlikelyScript && parent == null && child.getCountry().isEmpty()) {
+ var lang = " " + child.getLanguage() + " ";
+ var script = child.getScript();
+
+ if (!script.isEmpty()) {
+ parent = likelyScriptMap.entrySet().stream()
+ .filter(e -> e.getValue().contains(lang))
+ .findAny()
+ .map(Map.Entry::getKey)
+ .map(likely -> likely.equals(script) ? null : Locale.ROOT)
+ .orElse(null);
+ }
+ }
+
+ return parent;
+ }
+
private static void generateZoneName() throws Exception {
Files.createDirectories(Paths.get(DESTINATION_DIR, "java", "time", "format"));
Files.write(Paths.get(DESTINATION_DIR, "java", "time", "format", "ZoneName.java"),
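The `getParentLocale` rule above can be hard to follow; the following is a self-contained sketch of its intent, with invented map contents (the real data is derived from CLDR's likelySubtags) and the lookup direction simplified:

```java
import java.util.Locale;
import java.util.Map;

public class NonLikelyScriptSketch {
    // Invented stand-in for the generated likelyScript data: for each script,
    // the space-delimited languages for which it is the likely script.
    static final Map<String, String> LIKELY_SCRIPT = Map.of(
            "Latn", " en fr ",
            "Hans", " zh ");

    // Mirrors the intent of CLDRConverter.getParentLocale: a country-less
    // locale whose script is not the likely one for its language parents to
    // ROOT instead of falling through to the language-only bundle.
    static Locale parentOf(Locale child) {
        String script = child.getScript();
        if (script.isEmpty() || !child.getCountry().isEmpty()) {
            return null; // outside this special rule
        }
        String lang = " " + child.getLanguage() + " ";
        String likely = LIKELY_SCRIPT.getOrDefault(script, "");
        return likely.contains(lang) ? null : Locale.ROOT;
    }

    public static void main(String[] args) {
        System.out.println(parentOf(Locale.forLanguageTag("zh-Hans"))); // null: likely script
        System.out.println(parentOf(Locale.forLanguageTag("zh-Hant"))); // ROOT (prints empty)
    }
}
```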
diff --git a/make/jdk/src/classes/build/tools/cldrconverter/ResourceBundleGenerator.java b/make/jdk/src/classes/build/tools/cldrconverter/ResourceBundleGenerator.java
index 49c74544be66e..3953f38f653e6 100644
--- a/make/jdk/src/classes/build/tools/cldrconverter/ResourceBundleGenerator.java
+++ b/make/jdk/src/classes/build/tools/cldrconverter/ResourceBundleGenerator.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2012, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -38,6 +38,7 @@
import java.util.Objects;
import java.util.Set;
import java.util.SortedSet;
+import java.util.stream.Collectors;
class ResourceBundleGenerator implements BundleGenerator {
// preferred timezones - keeping compatibility with JDK1.1 3 letter abbreviations
@@ -306,81 +307,77 @@ public void generateMetaInfo(Map<String, SortedSet<String>> metaInfo) throws IOE
import sun.util.locale.provider.LocaleProviderAdapter;
public class %s implements LocaleDataMetaInfo {
- private static final Map resourceNameToLocales = HashMap.newHashMap(%d);
- %s
- static {
- """, CLDRConverter.isBaseModule ? "cldr" : "resources.cldr.provider",
- className, metaInfo.keySet().stream().filter(k -> k.equals("AvailableLocales")).count(),
- CLDRConverter.isBaseModule ?
- """
+ """,
+ CLDRConverter.isBaseModule ? "cldr" : "resources.cldr.provider",
+ className);
+
+ if (CLDRConverter.isBaseModule) {
+ out.printf("""
private static final Map parentLocalesMap = HashMap.newHashMap(%d);
private static final Map languageAliasMap = HashMap.newHashMap(%d);
+ static final boolean nonlikelyScript = %s; // package access from CLDRLocaleProviderAdapter
+
+ static {
""".formatted(
metaInfo.keySet().stream().filter(k -> k.startsWith(CLDRConverter.PARENT_LOCALE_PREFIX)).count(),
- CLDRConverter.handlerSupplMeta.getLanguageAliasData().size()) :
- "");
-
- for (String key : metaInfo.keySet()) {
- if (key.startsWith(CLDRConverter.PARENT_LOCALE_PREFIX)) {
- String parentTag = key.substring(CLDRConverter.PARENT_LOCALE_PREFIX.length());
- if ("root".equals(parentTag)) {
- out.printf(" parentLocalesMap.put(Locale.ROOT,\n");
- } else {
- out.printf(" parentLocalesMap.put(Locale.forLanguageTag(\"%s\"),\n",
- parentTag);
- }
- String[] children = toLocaleList(metaInfo.get(key), true).split(" ");
- Arrays.sort(children);
- out.printf(" new String[] {\n" +
- " ");
- int count = 0;
- for (int i = 0; i < children.length; i++) {
- String child = children[i];
- out.printf("\"%s\", ", child);
- count += child.length() + 4;
- if (i != children.length - 1 && count > 64) {
- out.printf("\n ");
- count = 0;
+ CLDRConverter.handlerSupplMeta.getLanguageAliasData().size(),
+ Boolean.valueOf(CLDRConverter.nonlikelyScript)));
+
+ for (String key : metaInfo.keySet()) {
+ if (key.startsWith(CLDRConverter.PARENT_LOCALE_PREFIX)) {
+ String parentTag = key.substring(CLDRConverter.PARENT_LOCALE_PREFIX.length());
+ if ("root".equals(parentTag)) {
+ out.printf(" parentLocalesMap.put(Locale.ROOT,\n");
+ } else {
+ out.printf(" parentLocalesMap.put(Locale.forLanguageTag(\"%s\"),\n",
+ parentTag);
}
- }
- out.printf("\n });\n");
- } else {
- if ("AvailableLocales".equals(key)) {
- out.printf(" resourceNameToLocales.put(\"%s\",\n", key);
- out.printf(" \"%s\");\n", toLocaleList(applyLanguageAliases(metaInfo.get(key)), false));
+ generateStringArray(metaInfo.get(key), out);
}
}
- }
- // for languageAliasMap
- if (CLDRConverter.isBaseModule) {
+ out.println();
+
+ // for languageAliasMap
CLDRConverter.handlerSupplMeta.getLanguageAliasData().forEach((key, value) -> {
out.printf(" languageAliasMap.put(\"%s\", \"%s\");\n", key, value);
});
- }
+ out.printf(" }\n\n");
- out.printf(" }\n\n");
+ // end of static initializer block.
- // end of static initializer block.
-
- // Canonical TZ names for delayed initialization
- if (CLDRConverter.isBaseModule) {
+ // Delayed initialization section
out.printf("""
- private static class TZCanonicalIDMapHolder {
- static final Map tzCanonicalIDMap = HashMap.newHashMap(%d);
+ private static class CLDRMapHolder {
+ private static final Map tzCanonicalIDMap = HashMap.newHashMap(%d);
+ private static final Map likelyScriptMap = HashMap.newHashMap(%d);
+
static {
- """, CLDRConverter.handlerTimeZone.getData().size());
+ """, CLDRConverter.handlerTimeZone.getData().size(),
+ metaInfo.keySet().stream().filter(k -> k.startsWith(CLDRConverter.LIKELY_SCRIPT_PREFIX)).count());
CLDRConverter.handlerTimeZone.getData().entrySet().stream()
.forEach(e -> {
String[] ids = ((String)e.getValue()).split("\\s");
out.printf(" tzCanonicalIDMap.put(\"%s\", \"%s\");\n", e.getKey(),
- ids[0]);
+ ids[0]);
for (int i = 1; i < ids.length; i++) {
out.printf(" tzCanonicalIDMap.put(\"%s\", \"%s\");\n", ids[i],
ids[0]);
}
});
- out.printf(" }\n }\n\n");
+ out.println();
+
+ // for likelyScript map
+ for (String key : metaInfo.keySet()) {
+ if (key.startsWith(CLDRConverter.LIKELY_SCRIPT_PREFIX)) {
+                // ensure spaces at the beginning/end for delimiting purposes
+ out.printf(" likelyScriptMap.put(\"%s\", \"%s\");\n",
+ key.substring(CLDRConverter.LIKELY_SCRIPT_PREFIX.length()),
+ " " + metaInfo.get(key).stream().collect(Collectors.joining(" ")) + " ");
+ }
+ }
+ out.printf(" }\n }\n");
}
+ out.println();
out.printf("""
@Override
@@ -390,12 +387,13 @@ public LocaleProviderAdapter.Type getType() {
@Override
public String availableLanguageTags(String category) {
- return resourceNameToLocales.getOrDefault(category, "");
+ return " %s";
}
- %s
- }
""",
- CLDRConverter.isBaseModule ? """
+ toLocaleList(applyLanguageAliases(metaInfo.get("AvailableLocales")), false));
+
+ if(CLDRConverter.isBaseModule) {
+ out.printf("""
@Override
public Map getLanguageAliasMap() {
@@ -404,16 +402,41 @@ public Map getLanguageAliasMap() {
@Override
public Map tzCanonicalIDs() {
- return TZCanonicalIDMapHolder.tzCanonicalIDMap;
+ return CLDRMapHolder.tzCanonicalIDMap;
}
public Map parentLocales() {
return parentLocalesMap;
}
- """ : "");
+
+ // package access from CLDRLocaleProviderAdapter
+ Map likelyScriptMap() {
+ return CLDRMapHolder.likelyScriptMap;
+ }
+ """);
+ }
+ out.printf("}\n");
}
}
+ private static void generateStringArray(SortedSet set, PrintWriter out) throws IOException {
+ String[] children = toLocaleList(set, true).split(" ");
+ Arrays.sort(children);
+ out.printf(" new String[] {\n" +
+ " ");
+ int count = 0;
+ for (int i = 0; i < children.length; i++) {
+ String child = children[i];
+ out.printf("\"%s\", ", child);
+ count += child.length() + 4;
+ if (i != children.length - 1 && count > 64) {
+ out.printf("\n ");
+ count = 0;
+ }
+ }
+ out.printf("\n });\n");
+ }
+
private static final Locale.Builder LOCALE_BUILDER = new Locale.Builder();
private static boolean isBaseLocale(String localeID) {
localeID = localeID.replaceAll("-", "_");
@@ -433,7 +456,9 @@ private static String toLocaleList(SortedSet set, boolean all) {
if (!all && CLDRConverter.isBaseModule ^ isBaseLocale(id)) {
continue;
}
- sb.append(' ');
+ if (sb.length() > 0) {
+ sb.append(' ');
+ }
sb.append(id);
}
}
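To visualize what the reworked `generateMetaInfo` emits for the base module, here is an approximate, hand-written rendition of the generated class; the shape follows the printf templates above, but all data values are invented for illustration:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Approximate shape only; the real class is generated into the cldr package
// and implements LocaleDataMetaInfo.
public class CLDRBaseLocaleDataMetaInfoSketch {
    private static final Map<Locale, String[]> parentLocalesMap = new HashMap<>();
    private static final Map<String, String> languageAliasMap = new HashMap<>();
    static final boolean nonlikelyScript = true; // emitted from the parser flag

    static {
        parentLocalesMap.put(Locale.ROOT, new String[] { "az-Arab", "az-Cyrl" });
        languageAliasMap.put("iw", "he");
    }

    // Both lazily-initialized maps now live in one holder class instead of
    // the old TZCanonicalIDMapHolder.
    private static class CLDRMapHolder {
        private static final Map<String, String> tzCanonicalIDMap = new HashMap<>();
        private static final Map<String, String> likelyScriptMap = new HashMap<>();
        static {
            tzCanonicalIDMap.put("US/Pacific", "America/Los_Angeles");
            // leading/trailing spaces keep " lang " containment checks simple
            likelyScriptMap.put("Latn", " en fr de ");
        }
    }
}
```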
diff --git a/make/jdk/src/classes/build/tools/cldrconverter/SupplementalDataParseHandler.java b/make/jdk/src/classes/build/tools/cldrconverter/SupplementalDataParseHandler.java
index f4d20160ffb3b..9ba4d64519949 100644
--- a/make/jdk/src/classes/build/tools/cldrconverter/SupplementalDataParseHandler.java
+++ b/make/jdk/src/classes/build/tools/cldrconverter/SupplementalDataParseHandler.java
@@ -57,29 +57,22 @@ class SupplementalDataParseHandler extends AbstractLDMLHandler {
// the weekData is listed using country code.
//
// weekData are generated per each country
- private final Map firstDayMap;
- private final Map minDaysMap;
+ private static final Map firstDayMap = new HashMap<>();
+ private static final Map minDaysMap = new HashMap<>();
// Parent locales. These information will only be
// generated towards the base meta info, with the format of
//
// parentLocale.=(" ")+
- private final Map parentLocalesMap;
+ private static final Map parentLocalesMap = new HashMap<>();
// Input Skeleton map for "preferred" and "allowed"
// Map<"preferred"/"allowed", Map<"skeleton", SortedSet<"regions">>>
-    private final Map<String, Map<String, SortedSet<String>>> inputSkeletonMap;
+    private static final Map<String, Map<String, SortedSet<String>>> inputSkeletonMap = new HashMap<>();
// "component" specific to this parent locale chain
private String currentParentLocaleComponent;
- SupplementalDataParseHandler() {
- firstDayMap = new HashMap<>();
- minDaysMap = new HashMap<>();
- parentLocalesMap = new HashMap<>();
- inputSkeletonMap = new HashMap<>();
- }
-
/**
* It returns Map that contains the firstDay and minDays information for
* the country. The Map is created in JRE format after obtaining the data
@@ -158,9 +151,15 @@ public void startElement(String uri, String localName, String qName, Attributes
// Ignore component for now, otherwise "zh-Hant" falling back to "zh" would happen
// https://github.com/unicode-org/cldr/pull/2664
if (currentParentLocaleComponent == null) {
+ var parent = attributes.getValue("parent").replaceAll("_", "-");
+
parentLocalesMap.put(
- attributes.getValue("parent").replaceAll("_", "-"),
+ parent,
attributes.getValue("locales").replaceAll("_", "-"));
+
+ if ("root".equals(parent)) {
+ CLDRConverter.nonlikelyScript = "nonlikelyScript".equals(attributes.getValue("localeRules"));
+ }
}
}
break;
diff --git a/make/jdk/src/classes/build/tools/generatecurrencydata/GenerateCurrencyData.java b/make/jdk/src/classes/build/tools/generatecurrencydata/GenerateCurrencyData.java
index 9655e08016c5a..673e982948891 100644
--- a/make/jdk/src/classes/build/tools/generatecurrencydata/GenerateCurrencyData.java
+++ b/make/jdk/src/classes/build/tools/generatecurrencydata/GenerateCurrencyData.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -312,9 +312,6 @@ private static int makeSpecialCaseEntry(String currencyInfo) throws Exception {
checkCurrencyCode(newCurrency);
String timeString = currencyInfo.substring(4, length - 4);
long time = format.parse(timeString).getTime();
- if (Math.abs(time - System.currentTimeMillis()) > ((long) 10) * 365 * 24 * 60 * 60 * 1000) {
- throw new RuntimeException("time is more than 10 years from present: " + time);
- }
specialCaseCutOverTimes[specialCaseCount] = time;
specialCaseOldCurrencies[specialCaseCount] = oldCurrency;
specialCaseOldCurrenciesDefaultFractionDigits[specialCaseCount] = getDefaultFractionDigits(oldCurrency);
diff --git a/make/jdk/src/classes/build/tools/intpoly/FieldGen.java b/make/jdk/src/classes/build/tools/intpoly/FieldGen.java
index a65150ac421b5..234f5cfce0d26 100644
--- a/make/jdk/src/classes/build/tools/intpoly/FieldGen.java
+++ b/make/jdk/src/classes/build/tools/intpoly/FieldGen.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2018, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2018, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -778,7 +778,7 @@ private String generate(FieldParams params) throws IOException {
result.appendLine("}");
result.appendLine("@Override");
- result.appendLine("protected void mult(long[] a, long[] b, long[] r) {");
+ result.appendLine("protected int mult(long[] a, long[] b, long[] r) {");
result.incrIndent();
for (int i = 0; i < 2 * params.getNumLimbs() - 1; i++) {
result.appendIndent();
@@ -804,6 +804,9 @@ private String generate(FieldParams params) throws IOException {
}
}
result.append(");\n");
+ result.appendIndent();
+ result.append("return 0;");
+ result.appendLine();
result.decrIndent();
result.appendLine("}");
@@ -833,7 +836,7 @@ private String generate(FieldParams params) throws IOException {
// }
// }
result.appendLine("@Override");
- result.appendLine("protected void square(long[] a, long[] r) {");
+ result.appendLine("protected int square(long[] a, long[] r) {");
result.incrIndent();
for (int i = 0; i < 2 * params.getNumLimbs() - 1; i++) {
result.appendIndent();
@@ -874,6 +877,9 @@ private String generate(FieldParams params) throws IOException {
}
}
result.append(");\n");
+ result.appendIndent();
+ result.append("return 0;");
+ result.appendLine();
result.decrIndent();
result.appendLine("}");
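The `void` to `int` change above alters the contract of the generated `mult`/`square` overrides, and the extra `return 0;` emission makes every generated body end by returning zero. A compilable sketch of the generated shape, with the limb arithmetic reduced to a placeholder and a simplified stand-in for the superclass (the real one is `sun.security.util.math.intpoly.IntegerPolynomial`):

```java
abstract class FieldSketch {
    protected abstract int mult(long[] a, long[] b, long[] r);
    protected abstract int square(long[] a, long[] r);
}

class GeneratedFieldSketch extends FieldSketch {
    @Override
    protected int mult(long[] a, long[] b, long[] r) {
        r[0] = a[0] * b[0]; // placeholder for the emitted limb products
        return 0;           // the newly appended terminator
    }

    @Override
    protected int square(long[] a, long[] r) {
        r[0] = a[0] * a[0]; // placeholder
        return 0;
    }
}
```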
diff --git a/make/jdk/src/classes/build/tools/taglet/JSpec.java b/make/jdk/src/classes/build/tools/taglet/JSpec.java
index ff8abc46ffd71..fda372c8b0114 100644
--- a/make/jdk/src/classes/build/tools/taglet/JSpec.java
+++ b/make/jdk/src/classes/build/tools/taglet/JSpec.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2017, 2019, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,6 +30,8 @@
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
import javax.lang.model.element.Element;
import com.sun.source.doctree.DocTree;
@@ -157,7 +159,10 @@ public String toString(List<? extends DocTree> tags, Element elem) {
continue;
}
- String tagText = contents.toString().trim();
+ String tagText = contents.stream()
+ .map(Object::toString)
+ .collect(Collectors.joining())
+ .trim();
Matcher m = TAG_PATTERN.matcher(tagText);
if (m.find()) {
String chapter = m.group("chapter");
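Both this hunk and the matching one in ToolGuide below replace `contents.toString().trim()` with a stream join. The difference: a list's `toString()` inserts separators between elements, so multi-fragment tag content came out mangled, while joining each element's own string form preserves the text. Demonstrated here with a plain `java.util.List` (the doclet's internal list type differs in detail but shares the separator problem):

```java
import java.util.List;
import java.util.stream.Collectors;

public class JoinVsToString {
    public static void main(String[] args) {
        List<String> contents = List.of("4.6.1 ", "Binary Compatibility");
        // toString() decorates: prints [4.6.1 , Binary Compatibility]
        System.out.println(contents.toString().trim());
        // Joining the element strings keeps the original text:
        // prints 4.6.1 Binary Compatibility
        System.out.println(contents.stream()
                .map(Object::toString)
                .collect(Collectors.joining())
                .trim());
    }
}
```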
diff --git a/make/jdk/src/classes/build/tools/taglet/ToolGuide.java b/make/jdk/src/classes/build/tools/taglet/ToolGuide.java
index f3ce80cf241da..af38e0a765c3c 100644
--- a/make/jdk/src/classes/build/tools/taglet/ToolGuide.java
+++ b/make/jdk/src/classes/build/tools/taglet/ToolGuide.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2017, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2017, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,6 +30,8 @@
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
import javax.lang.model.element.Element;
import javax.lang.model.element.PackageElement;
import javax.lang.model.element.TypeElement;
@@ -105,8 +107,11 @@ public String toString(List<? extends DocTree> tags, Element elem) {
continue;
}
- UnknownBlockTagTree blockTag = (UnknownBlockTagTree)tag;
- String tagText = blockTag.getContent().toString().trim();
+ UnknownBlockTagTree blockTag = (UnknownBlockTagTree) tag;
+ String tagText = blockTag.getContent().stream()
+ .map(Object::toString)
+ .collect(Collectors.joining())
+ .trim();
Matcher m = TAG_PATTERN.matcher(tagText);
if (m.matches()) {
String name = m.group("name");
diff --git a/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java b/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
index 154810e20f2d2..a88eabe01b27e 100644
--- a/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
+++ b/make/langtools/src/classes/build/tools/symbolgenerator/CreateSymbols.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2006, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2006, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -970,7 +970,7 @@ private void addAttributes(ModuleDescription md,
}
if (header.moduleMainClass != null) {
int attrIdx = addString(cp, Attribute.ModuleMainClass);
- int targetIdx = addString(cp, header.moduleMainClass);
+ int targetIdx = addClassName(cp, header.moduleMainClass);
attributes.put(Attribute.ModuleMainClass,
new ModuleMainClass_attribute(attrIdx, targetIdx));
}
diff --git a/make/modules/java.base/Java.gmk b/make/modules/java.base/Java.gmk
index 5820c280fe9d1..cbe6eecf187df 100644
--- a/make/modules/java.base/Java.gmk
+++ b/make/modules/java.base/Java.gmk
@@ -27,7 +27,7 @@
# new warning is added to javac, it can be temporarily added to the
# disabled warnings list.
#
-# DISABLED_WARNINGS_java +=
+DISABLED_WARNINGS_java += dangling-doc-comments
DOCLINT += -Xdoclint:all/protected \
'-Xdoclint/package:java.*,javax.*'
diff --git a/make/modules/java.xml/Copy.gmk b/make/modules/java.xml/Copy.gmk
index 3b6c66e42c5ed..f242cb2ac7611 100644
--- a/make/modules/java.xml/Copy.gmk
+++ b/make/modules/java.xml/Copy.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2023, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2023, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -24,14 +24,17 @@
#
include CopyCommon.gmk
+include Modules.gmk
################################################################################
+#
+# Copy property and template files from share/conf to CONF_DST_DIR
+#
+$(eval $(call SetupCopyFiles, COPY_XML_MODULE_CONF, \
+ DEST := $(CONF_DST_DIR), \
+ SRC := $(TOPDIR)/src/java.xml/share/conf, \
+ FILES := jaxp.properties jaxp-strict.properties.template, \
+))
-XML_LIB_SRC := $(TOPDIR)/src/java.xml/share/conf
-
-$(CONF_DST_DIR)/jaxp.properties: $(XML_LIB_SRC)/jaxp.properties
- $(call install-file)
-
-TARGETS := $(CONF_DST_DIR)/jaxp.properties
-
+TARGETS += $(COPY_XML_MODULE_CONF)
################################################################################
diff --git a/make/modules/jdk.httpserver/Java.gmk b/make/modules/jdk.httpserver/Java.gmk
index d7b2b2f70ca21..799ff8695a74c 100644
--- a/make/modules/jdk.httpserver/Java.gmk
+++ b/make/modules/jdk.httpserver/Java.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2020, 2023, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -24,3 +24,5 @@
#
DISABLED_WARNINGS_java += missing-explicit-ctor this-escape
+
+COPY += .ico
diff --git a/make/modules/jdk.incubator.vector/Java.gmk b/make/modules/jdk.incubator.vector/Java.gmk
index a2013179acef9..6cd5301428f7e 100644
--- a/make/modules/jdk.incubator.vector/Java.gmk
+++ b/make/modules/jdk.incubator.vector/Java.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2020, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,4 +23,6 @@
# questions.
#
+DISABLED_WARNINGS_java += dangling-doc-comments
+
DOCLINT += -Xdoclint:all/protected
diff --git a/make/modules/jdk.internal.le/Lib.gmk b/make/modules/jdk.internal.md/Java.gmk
similarity index 63%
rename from make/modules/jdk.internal.le/Lib.gmk
rename to make/modules/jdk.internal.md/Java.gmk
index e81137e0d5387..2a850f5563743 100644
--- a/make/modules/jdk.internal.le/Lib.gmk
+++ b/make/modules/jdk.internal.md/Java.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2015, 2024, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,23 +23,4 @@
# questions.
#
-include LibCommon.gmk
-
-ifeq ($(call isTargetOs, aix), false)
- ##############################################################################
- ## Build lible
- ##############################################################################
-
- $(eval $(call SetupJdkLibrary, BUILD_LIBLE, \
- NAME := le, \
- LINK_TYPE := C++, \
- OPTIMIZATION := LOW, \
- EXTRA_HEADER_DIRS := \
- java.base:libjava \
- java.base:libjvm, \
- LD_SET_ORIGIN := false, \
- LIBS_windows := user32.lib, \
- ))
-
- TARGETS += $(BUILD_LIBLE)
-endif
+COPY += .txt
diff --git a/make/modules/jdk.jpackage/Java.gmk b/make/modules/jdk.jpackage/Java.gmk
index acb8529f9ef0d..9d31e5417e905 100644
--- a/make/modules/jdk.jpackage/Java.gmk
+++ b/make/modules/jdk.jpackage/Java.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2020, 2023, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -23,6 +23,8 @@
# questions.
#
+DISABLED_WARNINGS_java += dangling-doc-comments
+
COPY += .gif .png .txt .spec .script .prerm .preinst \
.postrm .postinst .list .sh .desktop .copyright .control .plist .template \
.icns .scpt .wxs .wxl .wxi .ico .bmp .tiff .service
diff --git a/make/modules/jdk.unsupported/Java.gmk b/make/modules/jdk.unsupported/Java.gmk
new file mode 100644
index 0000000000000..fc8b2f832d713
--- /dev/null
+++ b/make/modules/jdk.unsupported/Java.gmk
@@ -0,0 +1,26 @@
+#
+# Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+#
+# This code is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License version 2 only, as
+# published by the Free Software Foundation. Oracle designates this
+# particular file as subject to the "Classpath" exception as provided
+# by Oracle in the LICENSE file that accompanied this code.
+#
+# This code is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# version 2 for more details (a copy is included in the LICENSE file that
+# accompanied this code).
+#
+# You should have received a copy of the GNU General Public License version
+# 2 along with this work; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+# or visit www.oracle.com if you need additional information or have any
+# questions.
+#
+
+DISABLED_WARNINGS_java += dangling-doc-comments
diff --git a/make/test/BuildFailureHandler.gmk b/make/test/BuildFailureHandler.gmk
index 005cbb330f625..e8f49e76b3268 100644
--- a/make/test/BuildFailureHandler.gmk
+++ b/make/test/BuildFailureHandler.gmk
@@ -81,6 +81,10 @@ IMAGES_TARGETS += $(COPY_FH)
# Use JTREG_TEST_OPTS for test VM options
# Use JTREG_TESTS for jtreg tests parameter
#
+# Most likely you want to select a specific test from test/failure_handler/test
+# and manually inspect the results. This target does not actually verify
+# anything about the failure_handler's output or even if it ran at all.
+#
RUN_DIR := $(FH_SUPPORT)/test
test:
diff --git a/make/test/BuildMicrobenchmark.gmk b/make/test/BuildMicrobenchmark.gmk
index 3d6ec9950d715..32d26be22702b 100644
--- a/make/test/BuildMicrobenchmark.gmk
+++ b/make/test/BuildMicrobenchmark.gmk
@@ -1,5 +1,5 @@
#
-# Copyright (c) 2018, 2023, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2018, 2024, Oracle and/or its affiliates. All rights reserved.
# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
#
# This code is free software; you can redistribute it and/or modify it
@@ -94,8 +94,8 @@ $(eval $(call SetupJavaCompilation, BUILD_JDK_MICROBENCHMARK, \
TARGET_RELEASE := $(TARGET_RELEASE_NEWJDK_UPGRADED), \
SMALL_JAVA := false, \
CLASSPATH := $(JMH_COMPILE_JARS), \
- DISABLED_WARNINGS := restricted this-escape processing rawtypes cast \
- serial preview, \
+ DISABLED_WARNINGS := restricted this-escape processing rawtypes removal cast \
+ serial preview dangling-doc-comments, \
SRC := $(MICROBENCHMARK_SRC), \
BIN := $(MICROBENCHMARK_CLASSES), \
JAVAC_FLAGS := \
@@ -109,6 +109,8 @@ $(eval $(call SetupJavaCompilation, BUILD_JDK_MICROBENCHMARK, \
--add-exports java.base/jdk.internal.vm=ALL-UNNAMED \
--add-exports java.base/sun.invoke.util=ALL-UNNAMED \
--add-exports java.base/sun.security.util=ALL-UNNAMED \
+ --add-exports java.base/sun.security.util.math=ALL-UNNAMED \
+ --add-exports java.base/sun.security.util.math.intpoly=ALL-UNNAMED \
--enable-preview \
-XDsuppressNotes \
-processor org.openjdk.jmh.generators.BenchmarkProcessor, \
diff --git a/src/demo/share/jfc/Font2DTest/Font2DTest.java b/src/demo/share/jfc/Font2DTest/Font2DTest.java
index e237ee680ce29..e474e638105e3 100644
--- a/src/demo/share/jfc/Font2DTest/Font2DTest.java
+++ b/src/demo/share/jfc/Font2DTest/Font2DTest.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1999, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2024, Oracle and/or its affiliates. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -67,22 +67,20 @@
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
+import java.lang.reflect.InvocationTargetException;
import java.util.StringTokenizer;
+import java.util.ArrayList;
import java.util.BitSet;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
import javax.swing.*;
import javax.swing.event.*;
+import javax.swing.plaf.nimbus.NimbusLookAndFeel;
import static java.nio.charset.StandardCharsets.UTF_16;
-/**
- * Font2DTest.java
- *
- * @author Shinsuke Fukuda
- * @author Ankit Patel [Conversion to Swing - 01/07/30]
- */
-
-/// Main Font2DTest Class
-
public final class Font2DTest extends JPanel
implements ActionListener, ItemListener, ChangeListener {
@@ -95,6 +93,12 @@ public final class Font2DTest extends JPanel
/// Other menus to set parameters for text drawing
private final ChoiceV2 fontMenu;
+ private JPanel fontMenuPanel;
+ private JPanel stylePanel;
+ private LabelV2 fontMenuLabel = null;
+ private LabelV2 styleLabel = null;
+ private ChoiceV2 fontNameMenu;
+ private ChoiceV2 fontSubFamilyMenu;
private final JTextField sizeField;
private final ChoiceV2 styleMenu;
private final ChoiceV2 textMenu;
@@ -111,6 +115,9 @@ public final class Font2DTest extends JPanel
private CheckboxMenuItemV2 displayGridCBMI;
private CheckboxMenuItemV2 force16ColsCBMI;
private CheckboxMenuItemV2 showFontInfoCBMI;
+ private JRadioButtonMenuItem familyAndStyleRBMI;
+ private JRadioButtonMenuItem familyAndSubFamilyRBMI;
+ private JRadioButtonMenuItem fontNameRBMI;
/// JDialog boxes
private JDialog userTextDialog;
@@ -126,6 +133,7 @@ public final class Font2DTest extends JPanel
/// Status bar
private final LabelV2 statusBar;
+ private String currentFontName = Font.DIALOG;
private int[] fontStyles = {Font.PLAIN, Font.BOLD, Font.ITALIC, Font.BOLD | Font.ITALIC};
/// Text filename
@@ -133,6 +141,7 @@ public final class Font2DTest extends JPanel
// Enabled or disabled status of canDisplay check
private static boolean canDisplayCheck = true;
+ private static final Locale l = Locale.getDefault();
/// Initialize GUI variables and its layouts
public Font2DTest( JFrame f) {
@@ -143,6 +152,8 @@ public Font2DTest( JFrame f) {
statusBar = new LabelV2("");
fontMenu = new ChoiceV2( this, canDisplayCheck );
+ fontNameMenu = new ChoiceV2( this, false );
+ fontSubFamilyMenu = new ChoiceV2( this, false );
sizeField = new JTextField( "12", 3 );
sizeField.addActionListener( this );
styleMenu = new ChoiceV2( this );
@@ -175,6 +186,46 @@ public Font2DTest( JFrame f) {
}
}
+ private void addFontMenuToGBL(String labelText,
+ JComponent menuContainer,
+ GridBagLayout gbl,
+ GridBagConstraints gbc,
+ int leftInset,
+ Container target) {
+
+ fontMenuLabel = new LabelV2(labelText);
+ fontMenuLabel.setLabelFor(menuContainer);
+ GridBagConstraints gbcLabel = (GridBagConstraints) gbc.clone();
+ gbcLabel.insets = new Insets(2, leftInset, 2, 0);
+ gbcLabel.gridwidth = 1;
+ gbcLabel.weightx = 0;
+ gbcLabel.anchor = GridBagConstraints.EAST;
+ gbl.setConstraints(fontMenuLabel, gbcLabel);
+ target.add(fontMenuLabel);
+ gbl.setConstraints(menuContainer, gbc);
+ target.add( menuContainer );
+ }
+
+ private void addStyleMenuToGBL(String labelText,
+ JComponent menuContainer,
+ GridBagLayout gbl,
+ GridBagConstraints gbc,
+ int leftInset,
+ Container target) {
+
+ styleLabel = new LabelV2(labelText);
+ styleLabel.setLabelFor(menuContainer);
+ GridBagConstraints gbcLabel = (GridBagConstraints) gbc.clone();
+ gbcLabel.insets = new Insets(2, leftInset, 2, 0);
+ gbcLabel.gridwidth = 1;
+ gbcLabel.weightx = 0;
+ gbcLabel.anchor = GridBagConstraints.EAST;
+ gbl.setConstraints(styleLabel, gbcLabel);
+ target.add(styleLabel);
+ gbl.setConstraints(menuContainer, gbc);
+ target.add(menuContainer);
+ }
+
/// Set up the main interface panel
private void setupPanel() {
GridBagLayout gbl = new GridBagLayout();
@@ -184,43 +235,49 @@ private void setupPanel() {
gbc.insets = new Insets( 2, 0, 2, 2 );
this.setLayout( gbl );
- addLabeledComponentToGBL( "Font: ", fontMenu, gbl, gbc, this );
- addLabeledComponentToGBL( "Size: ", sizeField, gbl, gbc, this );
+ fontMenuPanel = new JPanel();
+ fontMenuPanel.setLayout(new GridLayout());
+ fontMenuPanel.add(fontMenu);
+ addFontMenuToGBL(FAMILY_LABEL_TEXT, fontMenuPanel, gbl, gbc, 2, this );
+
+ stylePanel = new JPanel();
+ stylePanel.setLayout(new GridLayout());
+ stylePanel.add(styleMenu);
+ addStyleMenuToGBL(STYLE_LABEL_TEXT, stylePanel, gbl, gbc, 40, this );
+
gbc.gridwidth = GridBagConstraints.REMAINDER;
addLabeledComponentToGBL( "Font Transform:",
- transformMenu, gbl, gbc, this );
+ transformMenu, gbl, gbc, 20, this );
gbc.gridwidth = 1;
- addLabeledComponentToGBL( "Range: ", rm, gbl, gbc, this );
- addLabeledComponentToGBL( "Style: ", styleMenu, gbl, gbc, this );
+ addLabeledComponentToGBL( "Range: ", rm, gbl, gbc, 2, this );
+ addLabeledComponentToGBL( "Size: ", sizeField, gbl, gbc, 40, this );
gbc.gridwidth = GridBagConstraints.REMAINDER;
addLabeledComponentToGBL( "Graphics Transform: ",
- transformMenuG2, gbl, gbc, this );
+ transformMenuG2, gbl, gbc, 20, this );
gbc.gridwidth = 1;
gbc.anchor = GridBagConstraints.WEST;
- addLabeledComponentToGBL( "Method: ", methodsMenu, gbl, gbc, this );
- addLabeledComponentToGBL("", null, gbl, gbc, this);
+ addLabeledComponentToGBL( "Method: ", methodsMenu, gbl, gbc, 2, this );
+ addLabeledComponentToGBL("", null, gbl, gbc, 40, this);
gbc.anchor = GridBagConstraints.EAST;
+
gbc.gridwidth = GridBagConstraints.REMAINDER;
- addLabeledComponentToGBL( "Text to use:", textMenu, gbl, gbc, this );
+ addLabeledComponentToGBL( "Text to use:", textMenu, gbl, gbc, 20, this );
- gbc.weightx=1;
gbc.gridwidth = 1;
- gbc.fill = GridBagConstraints.HORIZONTAL;
gbc.anchor = GridBagConstraints.WEST;
addLabeledComponentToGBL("LCD contrast: ",
- contrastSlider, gbl, gbc, this);
+ contrastSlider, gbl, gbc, 2, this);
gbc.gridwidth = 1;
- gbc.fill = GridBagConstraints.NONE;
addLabeledComponentToGBL("Antialiasing: ",
- antiAliasMenu, gbl, gbc, this);
+ antiAliasMenu, gbl, gbc, 40, this);
gbc.anchor = GridBagConstraints.EAST;
gbc.gridwidth = GridBagConstraints.REMAINDER;
addLabeledComponentToGBL("Fractional metrics: ",
- fracMetricsMenu, gbl, gbc, this);
+ fracMetricsMenu, gbl, gbc, 20, this);
gbc.weightx = 1;
gbc.weighty = 1;
@@ -241,16 +298,23 @@ private void addLabeledComponentToGBL( String name,
JComponent c,
GridBagLayout gbl,
GridBagConstraints gbc,
+ int leftInset,
Container target ) {
LabelV2 l = new LabelV2( name );
+ l.setLabelFor(c);
GridBagConstraints gbcLabel = (GridBagConstraints) gbc.clone();
- gbcLabel.insets = new Insets( 2, 2, 2, 0 );
+ if (gbcLabel.gridwidth == GridBagConstraints.REMAINDER) {
+ gbcLabel.gridwidth = GridBagConstraints.RELATIVE;
+ }
+
+ gbcLabel.insets = new Insets( 2, leftInset, 2, 0 );
gbcLabel.gridwidth = 1;
gbcLabel.weightx = 0;
if ( c == null )
c = new JLabel( "" );
+ gbcLabel.anchor = GridBagConstraints.EAST;
gbl.setConstraints( l, gbcLabel );
target.add( l );
gbl.setConstraints( c, gbc );
@@ -277,6 +341,21 @@ private void setupMenu() {
optionMenu.add( displayGridCBMI );
optionMenu.add( force16ColsCBMI );
optionMenu.add( showFontInfoCBMI );
+ optionMenu.addSeparator();
+ familyAndStyleRBMI = new JRadioButtonMenuItem("Select font using Family Name and Style");
+ familyAndStyleRBMI.addActionListener(this);
+ familyAndSubFamilyRBMI = new JRadioButtonMenuItem("Select font using Family Name and SubFamily");
+ familyAndSubFamilyRBMI.addActionListener(this);
+ fontNameRBMI = new JRadioButtonMenuItem("Select font using Full Name");
+ fontNameRBMI.addActionListener(this);
+ ButtonGroup bg = new ButtonGroup();
+ bg.add(familyAndStyleRBMI);
+ bg.add(familyAndSubFamilyRBMI);
+ bg.add(fontNameRBMI);
+ familyAndStyleRBMI.setSelected(true);
+ optionMenu.add(familyAndStyleRBMI);
+ optionMenu.add(familyAndSubFamilyRBMI);
+ optionMenu.add(fontNameRBMI);
JMenuBar mb = parent.getJMenuBar();
if ( mb == null )
@@ -286,12 +365,17 @@ private void setupMenu() {
parent.setJMenuBar( mb );
- String[] fontList =
- GraphicsEnvironment.getLocalGraphicsEnvironment().getAvailableFontFamilyNames();
+ String[] fontList = getAllFamilyNames();
+ for (int i = 0; i < fontList.length; i++ ) {
+ fontMenu.addItem( fontList[i] );
+ }
+ fontMenu.setSelectedItem("Dialog");
- for ( int i = 0; i < fontList.length; i++ )
- fontMenu.addItem( fontList[i] );
- fontMenu.setSelectedItem( "Dialog" );
+ fontList = getAllFontNames();
+ for (int i = 0; i < fontList.length; i++ ) {
+ fontNameMenu.addItem( fontList[i] );
+ }
+ fontNameMenu.setSelectedItem("Dialog");
styleMenu.addItem( "Plain" );
styleMenu.addItem( "Bold" );
@@ -647,6 +731,10 @@ private void writeCurrentOptions( String fileName ) {
displayGridCBMI.getState() + "\n" +
force16ColsCBMI.getState() + "\n" +
showFontInfoCBMI.getState() + "\n" +
+ fontSelectionType + "\n" +
+ (String)fontMenu.getSelectedItem() + "\n" +
+ (String)fontNameMenu.getSelectedItem() + "\n" +
+ (String)fontSubFamilyMenu.getSelectedItem() + "\n" +
rm.getSelectedItem() + "\n" +
range[0] + "\n" + range[1] + "\n" + curOptions + tFileName);
byte[] toBeWritten = completeOptions.getBytes(UTF_16);
@@ -724,6 +812,10 @@ private void loadOptions( String fileName ) {
boolean displayGridOpt = Boolean.parseBoolean( perLine.nextToken() );
boolean force16ColsOpt = Boolean.parseBoolean( perLine.nextToken() );
boolean showFontInfoOpt = Boolean.parseBoolean( perLine.nextToken() );
+ int fontSelType = Integer.parseInt( perLine.nextToken() );
+ String fmItem = perLine.nextToken();
+ String fnmItem = perLine.nextToken();
+ String fsmItem = perLine.nextToken();
String rangeNameOpt = perLine.nextToken();
int rangeStartOpt = Integer.parseInt( perLine.nextToken() );
int rangeEndOpt = Integer.parseInt( perLine.nextToken() );
@@ -756,7 +848,11 @@ private void loadOptions( String fileName ) {
force16ColsCBMI.setState( force16ColsOpt );
showFontInfoCBMI.setState( showFontInfoOpt );
rm.setSelectedRange( rangeNameOpt, rangeStartOpt, rangeEndOpt );
- fontMenu.setSelectedItem( fontNameOpt );
+ currentFontName = fontNameOpt;
+ setFontSelectionType(fontSelType);
+ fontMenu.setSelectedItem( fmItem );
+ fontNameMenu.setSelectedItem( fnmItem );
+ fontSubFamilyMenu.setSelectedItem( fsmItem );
sizeField.setText( String.valueOf( fontSizeOpt ));
styleMenu.setSelectedIndex( fontStyleOpt );
transformMenu.setSelectedIndex( fontTransformOpt );
@@ -819,6 +915,110 @@ public void windowClosing( WindowEvent e ) {
}
}
+ static final int FAMILY_AND_STYLE = 1;
+ static final int FONT_NAME = 2;
+ static final int FAMILY_AND_SUBFAMILY = 3;
+ static int fontSelectionType = FAMILY_AND_STYLE;
+
+ static final String FAMILY_LABEL_TEXT = "Font Family:";
+ static final String NAME_LABEL_TEXT = "Font Name:";
+ static final String STYLE_LABEL_TEXT = "Style:";
+ static final String SUBFAMILY_LABEL_TEXT = "Subfamily:";
+
+ void setUseFamilyAndStyle() {
+ if (fontSelectionType == FAMILY_AND_STYLE) {
+ return;
+ }
+ fontMenuLabel.setText(FAMILY_LABEL_TEXT);
+ fontMenuPanel.removeAll();
+ fontMenuPanel.add(fontMenu);
+ if (fontSelectionType == FAMILY_AND_SUBFAMILY) {
+ styleLabel.setText(STYLE_LABEL_TEXT);
+ stylePanel.removeAll();
+ stylePanel.add(styleMenu);
+ }
+ fontSelectionType = FAMILY_AND_STYLE;
+ if (!familyAndStyleRBMI.isSelected()) {
+ familyAndStyleRBMI.setSelected(true);
+ }
+ styleMenu.setSelectedIndex(0);
+ currentFontName = (String)fontMenu.getSelectedItem();
+ fp.setFontParams(currentFontName,
+ Float.parseFloat(sizeField.getText()),
+ 0, // want to reset style to PLAIN
+ transformMenu.getSelectedIndex());
+ revalidate();
+ repaint();
+ }
+
+ void setUseFontName() {
+ if (fontSelectionType == FONT_NAME) {
+ return;
+ }
+ fontMenuLabel.setText(NAME_LABEL_TEXT);
+ fontMenuPanel.removeAll();
+ fontMenuPanel.add(fontNameMenu);
+ if (fontSelectionType == FAMILY_AND_SUBFAMILY) {
+ styleLabel.setText(STYLE_LABEL_TEXT);
+ stylePanel.removeAll();
+ stylePanel.add(styleMenu);
+ }
+ fontSelectionType = FONT_NAME;
+ if (!fontNameRBMI.isSelected()) {
+ fontNameRBMI.setSelected(true);
+ }
+ styleMenu.setSelectedIndex(0);
+ currentFontName = (String)fontNameMenu.getSelectedItem();
+ fp.setFontParams(currentFontName,
+ Float.parseFloat(sizeField.getText()),
+ 0, // want to reset style to PLAIN
+ transformMenu.getSelectedIndex());
+ revalidate();
+ repaint();
+ }
+
+ void setUseFamilyAndSubFamily() {
+ if (fontSelectionType == FAMILY_AND_SUBFAMILY) {
+ return;
+ }
+ fontMenuLabel.setText(FAMILY_LABEL_TEXT);
+ fontMenuPanel.removeAll();
+ fontMenuPanel.add(fontMenu);
+ styleLabel.setText(SUBFAMILY_LABEL_TEXT);
+ stylePanel.removeAll();
+ styleMenu.setSelectedIndex(0);
+ String family = (String)fontMenu.getSelectedItem();
+ updateSubFamilyMenu(family);
+ stylePanel.add(fontSubFamilyMenu);
+ fontSelectionType = FAMILY_AND_SUBFAMILY;
+ if (!familyAndSubFamilyRBMI.isSelected()) {
+ familyAndSubFamilyRBMI.setSelected(true);
+ }
+ String subname = (String)fontSubFamilyMenu.getSelectedItem();
+ Font font = FontFamily.getFont(family, subname);
+ currentFontName = (font != null) ? font.getFontName(l) : family;
+ fp.setFontParams(currentFontName,
+ Float.parseFloat(sizeField.getText()),
+ 0, // want to reset style to PLAIN
+ transformMenu.getSelectedIndex());
+ revalidate();
+ repaint();
+ }
+
+ void setFontSelectionType(int fsType) {
+ switch (fsType) {
+ case FAMILY_AND_STYLE :
+ setUseFamilyAndStyle();
+ break;
+ case FONT_NAME :
+ setUseFontName();
+ break;
+ case FAMILY_AND_SUBFAMILY :
+ setUseFamilyAndSubFamily();
+ break;
+ }
+ }
+
/// Interface functions...
/// ActionListener interface function
@@ -830,7 +1030,14 @@ public void actionPerformed( ActionEvent e ) {
JMenuItem mi = (JMenuItem) source;
String itemName = mi.getText();
- if ( itemName.equals( "Save Selected Options..." )) {
+ if (source == familyAndStyleRBMI) {
+ setUseFamilyAndStyle();
+ } else if (source == familyAndSubFamilyRBMI) {
+ setUseFamilyAndSubFamily();
+ } else if (source == fontNameRBMI) {
+ setUseFontName();
+ }
+ else if ( itemName.equals( "Save Selected Options..." )) {
String fileName = promptFile( true, "options.txt" );
if ( fileName != null )
writeCurrentOptions( fileName );
@@ -872,11 +1079,12 @@ else if ( source instanceof JTextField ) {
} catch (Exception se) {
sizeField.setText("12");
}
- if ( tf == sizeField )
- fp.setFontParams( fontMenu.getSelectedItem(),
+ if ( tf == sizeField ) {
+ fp.setFontParams(currentFontName,
sz,
styleMenu.getSelectedIndex(),
transformMenu.getSelectedIndex() );
+ }
}
else if ( source instanceof JButton ) {
@@ -901,7 +1109,41 @@ else if ( source instanceof JComboBox ) {
/// RangeMenu handles actions by itself and then calls fireRangeChanged,
/// so it is not listed or handled here
- if ( c == fontMenu || c == styleMenu || c == transformMenu ) {
+ if ( c == fontMenu || c == fontNameMenu || c == fontSubFamilyMenu ||
+ c == styleMenu || c == transformMenu )
+ {
+ if (c == fontNameMenu) {
+ currentFontName = (String)fontNameMenu.getSelectedItem();
+ }
+ else if ((c == fontMenu) && (fontSelectionType == FAMILY_AND_STYLE)) {
+ currentFontName = (String)fontMenu.getSelectedItem();
+ }
+ else if ((c == fontMenu) && (fontSelectionType == FAMILY_AND_SUBFAMILY)) {
+ String family = (String)fontMenu.getSelectedItem();
+ updateSubFamilyMenu(family);
+ String subname = (String)fontSubFamilyMenu.getSelectedItem();
+ Font font = FontFamily.getFont(family, subname);
+ if (font == null) return;
+ currentFontName = font.getFontName(l);
+ }
+ else if (c == fontSubFamilyMenu) {
+ /*
+ * When switching families, all items are removed from the sub family list.
+ * This triggers a synchronous recursive ActionEvent on the EDT, which should
+ * be ignored here; the code removes them, adds the new items, and will then
+ * use the new default selected item.
+ * If we do not return, we'll not find a match and can get an NPE.
+ * This feels unsatisfactory, but it works.
+ */
+ if (fontSubFamilyMenu.getItemCount() == 0) {
+ return;
+ }
+ String family = (String)fontMenu.getSelectedItem();
+ String subname = (String)fontSubFamilyMenu.getSelectedItem();
+ Font font = FontFamily.getFont(family, subname);
+ if (font == null) return;
+ currentFontName = font.getFontName(l);
+ }
float sz = 12f;
try {
sz = Float.parseFloat(sizeField.getText());
@@ -912,7 +1154,7 @@ else if ( source instanceof JComboBox ) {
} catch (Exception se) {
sizeField.setText("12");
}
- fp.setFontParams(fontMenu.getSelectedItem(),
+ fp.setFontParams(currentFontName,
sz,
styleMenu.getSelectedIndex(),
transformMenu.getSelectedIndex());
@@ -1008,11 +1250,177 @@ private static void printUsage() {
System.exit(0);
}
+ static class FontFamily {
+
+ static Map<String, FontFamily> familyMap = new HashMap<>();
+ private static Locale l = Locale.getDefault();
+ private List<Font> fonts = new ArrayList<>();
+ private List<String> subFamilyNames = new ArrayList<>();
+ private Map<String, Font> nameToFontMap = new HashMap<>();
+ private String familyName;
+
+ private FontFamily(String name) {
+ this.familyName = name;
+ }
+
+ String stripFamily(String family, String fullName) {
+ if (family.equals(fullName)) {
+ return "";
+ }
+ char[] familyChars = family.toCharArray();
+ char[] fullChars = fullName.toCharArray();
+ int familyIndex = 0;
+ int fullIndex = 0;
+ // there's probably a clever regexp way to do this
+ // iterate over the chars in the family; if they are the same,
+ // keep going; if there's a '-' or ' ', skip it. In the font name,
+ // do the same. If you reach the end of the family without some
+ // other diff, return what's left of the fullName.
+ while (familyIndex < familyChars.length && fullIndex < fullChars.length) {
+ //while (familyIndex < familyChars.length) {
+ if (fullIndex == fullChars.length) {
+ System.err.println("WEIRD FONT " + family + " " + fullName);
+ break;
+ }
+ if (familyChars[familyIndex] == fullChars[fullIndex]) {
+ familyIndex++; fullIndex++;
+ }
+ else if (familyChars[familyIndex] == ' ' && fullChars[fullIndex] == '-') {
+ familyIndex++; fullIndex++;
+ }
+ else if (familyChars[familyIndex] == '-' && fullChars[fullIndex] == ' ') {
+ familyIndex++; fullIndex++;
+ }
+ else if (familyChars[familyIndex] == ' ' || familyChars[familyIndex] == '-') {
+ familyIndex++;
+ }
+ else if (fullChars[fullIndex] == ' ' || fullChars[fullIndex] == '-') {
+ fullIndex++;
+ } else {
+ break;
+ }
+ }
+ if (fullIndex == fullChars.length) {
+ return fullName;
+ } else {
+ return fullName.substring(fullIndex);
+ }
+ }
+
+ /*
+ * Getting the string to display here can be an art.
+ * If the family is "Arial Black", then for a regular font, the
+ * full name may be "Arial Black", or "Arial-Black", as reported on macOS.
+ * For this case for the specific font might want to display the
+ * full name, or synthesise "Regular". But to do that we have to
+ * recognise that ' ' to '-' mapping.
+ * For "Arial Black Italic" (Arial-Black-Italic) we want to be able to
+ * trim so we display just "Italic".
+ * Then we need to be able to map the text selection back to the
+ * right font.
+ */
+ void add(Font f) {
+ String fontName = f.getFontName(l);
+ int flen = familyName.length();
+ int nlen = fontName.length();
+ String sfn;
+
+ if (fontName.equals(familyName)) {
+ sfn = "Regular";
+ }
+ else {
+ sfn = stripFamily(familyName, fontName);
+ sfn = sfn.replace('-', ' ');
+ }
+
+ fonts.add(f);
+ subFamilyNames.add(sfn);
+ nameToFontMap.put(sfn, f);
+ }
+
+ String[] getSubFamilyNames() {
+ return subFamilyNames.stream().sorted().toArray(String[]::new);
+ }
+
+
+ Font getFontForSubFamilyName(String name) {
+ return nameToFontMap.get(name);
+ }
+
+ static FontFamily getFontFamily(String name) {
+ return familyMap.get(name);
+ }
+
+ static FontFamily createFontFamily(String name) {
+ FontFamily f = familyMap.get(name);
+ if (f == null) {
+ f = new FontFamily(name);
+ familyMap.put(name, f);
+ }
+ return f;
+ }
+
+ /*
+ * familyName must be a name of an existing FontFamily
+ * name, must be a valid "subFamilyName" within that FontFamily
+ * as returned by getSubFamilyNames()
+ */
+ static Font getFont(String familyName, String subFamilyName) {
+ FontFamily family = getFontFamily(familyName);
+ return family.getFontForSubFamilyName(subFamilyName);
+ }
+ }
+
+ static String[] familyNames;
+ static Font[] allFonts;
+ static List<String> allFontNames;
+ static Map<String, FontFamily> familyMap = new HashMap<>();
+
+ private static void buildFontInfo() {
+ GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
+ familyNames = ge.getAvailableFontFamilyNames();
+ allFonts = ge.getAllFonts();
+ allFontNames = new ArrayList<String>();
+ Locale l = Locale.getDefault();
+ for (Font f : allFonts) {
+ allFontNames.add(f.getFontName(l));
+ String family = f.getFamily(l);
+ FontFamily ff = FontFamily.getFontFamily(family);
+ if (ff == null) {
+ ff = FontFamily.createFontFamily(family);
+ }
+ ff.add(f);
+ }
+ }
+
+ String getFontNameFor(String family, String subFamily) {
+ return family + " " + subFamily;
+ }
+
+ void updateSubFamilyMenu(String name) {
+ FontFamily family = FontFamily.getFontFamily(name);
+ fontSubFamilyMenu.removeAllItems();
+
+ String [] sfNames = family.getSubFamilyNames();
+ for (int i=0; i<sfNames.length; i++) {
+ fontSubFamilyMenu.addItem(sfNames[i]);
+ }
+ }
+
+ private static String[] getAllFamilyNames() {
+ return familyNames;
+ }
+
+ private static String[] getAllFontNames() {
+ return allFontNames.stream().sorted().toArray(String[]::new);
+ }
+
 public static void main(String[] argv) {

- if ( argv.length > 0 ) {
- if(argv[0].equalsIgnoreCase("-disablecandisplaycheck") ||
+ if (argv.length > 0) {
+ if (argv[0].equalsIgnoreCase("-disablecandisplaycheck") ||
argv[0].equalsIgnoreCase("-dcdc")) {
canDisplayCheck = false;
}
@@ -1021,17 +1429,23 @@ public static void main(String[] argv) {
}
}
- UIManager.put("swing.boldMetal", Boolean.FALSE);
- final JFrame f = new JFrame( "Font2DTest" );
- final Font2DTest f2dt = new Font2DTest( f);
- f.addWindowListener( new WindowAdapter() {
- public void windowOpening( WindowEvent e ) { f2dt.repaint(); }
- public void windowClosing( WindowEvent e ) { System.exit(0); }
- });
-
- f.getContentPane().add( f2dt );
- f.pack();
- f.setVisible(true);
+ buildFontInfo();
+ try {
+ UIManager.setLookAndFeel(new NimbusLookAndFeel());
+ SwingUtilities.invokeAndWait(() -> {
+ final JFrame f = new JFrame( "Font2DTest" );
+ final Font2DTest f2dt = new Font2DTest( f);
+ f.addWindowListener( new WindowAdapter() {
+ public void windowOpening( WindowEvent e ) { f2dt.repaint(); }
+ public void windowClosing( WindowEvent e ) { System.exit(0); }
+ });
+
+ f.getContentPane().add( f2dt );
+ f.pack();
+ f.setVisible(true);
+ });
+ } catch (UnsupportedLookAndFeelException|InterruptedException|InvocationTargetException e) {
+ }
}
/// Inner class definitions...
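The stripFamily logic above is easiest to follow on a concrete input. The sketch below restates the loop standalone and runs it on a macOS-style name pair; the font names are illustrative assumptions, not enumerated fonts:

    public class StripFamilyDemo {
        // Restatement of Font2DTest.FontFamily.stripFamily: walk the family
        // and the full name together, treating ' ' and '-' as interchangeable,
        // and return whatever tail of the full name is left over.
        static String stripFamily(String family, String fullName) {
            if (family.equals(fullName)) {
                return "";
            }
            char[] f = family.toCharArray();
            char[] n = fullName.toCharArray();
            int fi = 0, ni = 0;
            while (fi < f.length && ni < n.length) {
                if (f[fi] == n[ni]
                        || (f[fi] == ' ' && n[ni] == '-')
                        || (f[fi] == '-' && n[ni] == ' ')) {
                    fi++; ni++;
                } else if (f[fi] == ' ' || f[fi] == '-') {
                    fi++;
                } else if (n[ni] == ' ' || n[ni] == '-') {
                    ni++;
                } else {
                    break;
                }
            }
            return (ni == n.length) ? fullName : fullName.substring(ni);
        }

        public static void main(String[] args) {
            // Prints "-Italic"; the caller then maps '-' to ' ' for display
            System.out.println(stripFamily("Arial Black", "Arial-Black-Italic"));
            // "" for an exact match; add() shows such fonts as "Regular"
            System.out.println(stripFamily("Arial Black", "Arial Black").isEmpty());
        }
    }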
diff --git a/src/demo/share/jfc/Font2DTest/FontPanel.java b/src/demo/share/jfc/Font2DTest/FontPanel.java
index d52f572e090e6..70b1390fd76ff 100644
--- a/src/demo/share/jfc/Font2DTest/FontPanel.java
+++ b/src/demo/share/jfc/Font2DTest/FontPanel.java
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2021, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2024, Oracle and/or its affiliates. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -740,7 +740,8 @@ private void resetScrollbar( int oldValue ) {
verticalBar.setValues( oldValue, numCharDown, 0, totalNumRows );
}
if ( totalNumRows <= numCharDown && drawStart == 0) {
- verticalBar.setEnabled( false );
+ // the disabled scroll bar looks odd with Nimbus L&F.
+ verticalBar.setEnabled( true );
}
else {
verticalBar.setEnabled( true );
diff --git a/src/hotspot/cpu/aarch64/aarch64.ad b/src/hotspot/cpu/aarch64/aarch64.ad
index bb7f7370c5f92..1b8b9801f3c71 100644
--- a/src/hotspot/cpu/aarch64/aarch64.ad
+++ b/src/hotspot/cpu/aarch64/aarch64.ad
@@ -694,6 +694,11 @@ reg_class no_special_ptr_reg %{
return _NO_SPECIAL_PTR_REG_mask;
%}
+// Class for all non_special pointer registers (excluding rfp)
+reg_class no_special_no_rfp_ptr_reg %{
+ return _NO_SPECIAL_NO_RFP_PTR_REG_mask;
+%}
+
// Class for all float registers
reg_class float_reg(
V0,
@@ -1125,6 +1130,7 @@ extern RegMask _PTR_REG_mask;
extern RegMask _NO_SPECIAL_REG32_mask;
extern RegMask _NO_SPECIAL_REG_mask;
extern RegMask _NO_SPECIAL_PTR_REG_mask;
+extern RegMask _NO_SPECIAL_NO_RFP_PTR_REG_mask;
class CallStubImpl {
@@ -1213,6 +1219,7 @@ source %{
RegMask _NO_SPECIAL_REG32_mask;
RegMask _NO_SPECIAL_REG_mask;
RegMask _NO_SPECIAL_PTR_REG_mask;
+ RegMask _NO_SPECIAL_NO_RFP_PTR_REG_mask;
void reg_mask_init() {
// We derive below RegMask(s) from the ones which are auto-generated from
@@ -1249,6 +1256,9 @@ source %{
_NO_SPECIAL_REG_mask.Remove(OptoReg::as_OptoReg(r29->as_VMReg()));
_NO_SPECIAL_PTR_REG_mask.Remove(OptoReg::as_OptoReg(r29->as_VMReg()));
}
+
+ _NO_SPECIAL_NO_RFP_PTR_REG_mask = _NO_SPECIAL_PTR_REG_mask;
+ _NO_SPECIAL_NO_RFP_PTR_REG_mask.Remove(OptoReg::as_OptoReg(r29->as_VMReg()));
}
// Optimization of volatile gets and puts
@@ -4690,6 +4700,18 @@ operand iRegPNoSp()
interface(REG_INTER);
%}
+// This operand is not allowed to use rfp even if
+// rfp is not used to hold the frame pointer.
+operand iRegPNoSpNoRfp()
+%{
+ constraint(ALLOC_IN_RC(no_special_no_rfp_ptr_reg));
+ match(RegP);
+ match(iRegPNoSp);
+ op_cost(0);
+ format %{ %}
+ interface(REG_INTER);
+%}
+
// Pointer 64 bit Register R0 only
operand iRegP_R0()
%{
@@ -16087,7 +16109,9 @@ instruct CallLeafNoFPDirect(method meth)
// Also known as an 'interprocedural jump'.
// Target of jump will eventually return to caller.
// TailJump below removes the return address.
-instruct TailCalljmpInd(iRegPNoSp jump_target, inline_cache_RegP method_ptr)
+// Don't use rfp for 'jump_target' because a MachEpilogNode has already been
+// emitted just above the TailCall which has reset rfp to the caller state.
+instruct TailCalljmpInd(iRegPNoSpNoRfp jump_target, inline_cache_RegP method_ptr)
%{
match(TailCall jump_target method_ptr);
@@ -16100,7 +16124,7 @@ instruct TailCalljmpInd(iRegPNoSp jump_target, inline_cache_RegP method_ptr)
ins_pipe(pipe_class_call);
%}
-instruct TailjmpInd(iRegPNoSp jump_target, iRegP_R0 ex_oop)
+instruct TailjmpInd(iRegPNoSpNoRfp jump_target, iRegP_R0 ex_oop)
%{
match(TailJump jump_target ex_oop);
@@ -16206,7 +16230,7 @@ instruct partialSubtypeCheckConstSuper(iRegP_R4 sub, iRegP_R0 super_reg, immP su
effect(KILL cr, TEMP tempR1, TEMP tempR2, TEMP tempR3, TEMP vtemp);
ins_cost(700); // smaller than the next version
- format %{ "partialSubtypeCheck $result, $sub, super" %}
+ format %{ "partialSubtypeCheck $result, $sub, $super_reg, $super_con" %}
ins_encode %{
bool success = false;
diff --git a/src/hotspot/cpu/aarch64/aarch64_vector.ad b/src/hotspot/cpu/aarch64/aarch64_vector.ad
index 4e8eb47ee5fa2..b14295ca15c88 100644
--- a/src/hotspot/cpu/aarch64/aarch64_vector.ad
+++ b/src/hotspot/cpu/aarch64/aarch64_vector.ad
@@ -1,6 +1,6 @@
//
// Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
-// Copyright (c) 2020, 2023, Arm Limited. All rights reserved.
+// Copyright (c) 2020, 2024, Arm Limited. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
diff --git a/src/hotspot/cpu/aarch64/aarch64_vector_ad.m4 b/src/hotspot/cpu/aarch64/aarch64_vector_ad.m4
index d31a4e05799c2..060bb4a11d4ad 100644
--- a/src/hotspot/cpu/aarch64/aarch64_vector_ad.m4
+++ b/src/hotspot/cpu/aarch64/aarch64_vector_ad.m4
@@ -1,6 +1,6 @@
//
// Copyright (c) 2020, 2024, Oracle and/or its affiliates. All rights reserved.
-// Copyright (c) 2020, 2023, Arm Limited. All rights reserved.
+// Copyright (c) 2020, 2024, Arm Limited. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -159,9 +159,7 @@ source %{
case Op_VectorMaskGen:
case Op_LoadVectorMasked:
case Op_StoreVectorMasked:
- case Op_LoadVectorGather:
case Op_StoreVectorScatter:
- case Op_LoadVectorGatherMasked:
case Op_StoreVectorScatterMasked:
case Op_PopulateIndex:
case Op_CompressM:
@@ -170,6 +168,12 @@ source %{
return false;
}
break;
+ case Op_LoadVectorGather:
+ case Op_LoadVectorGatherMasked:
+ if (UseSVE == 0 || is_subword_type(bt)) {
+ return false;
+ }
+ break;
case Op_MulAddVS2VI:
if (length_in_bytes != 16) {
return false;
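The relocated cases above mean aarch64 now only matches gather loads when SVE is available and the element type is not a subword type (byte/short). A hedged sketch of Java source that can compile down to a LoadVectorGather node (incubating Vector API; whether it actually vectorizes is decided by exactly this kind of predicate):

    import jdk.incubator.vector.IntVector;
    import jdk.incubator.vector.VectorSpecies;

    // Run with: java --add-modules jdk.incubator.vector GatherDemo
    public class GatherDemo {
        static final VectorSpecies<Integer> SPECIES = IntVector.SPECIES_PREFERRED;

        // a[map[i]]-style indexed loads are the pattern that becomes a
        // gather; int elements qualify, byte/short elements now do not.
        static IntVector gather(int[] a, int[] map) {
            return IntVector.fromArray(SPECIES, a, 0, map, 0);
        }

        public static void main(String[] args) {
            int[] a = new int[SPECIES.length()];
            int[] map = new int[SPECIES.length()];
            for (int i = 0; i < a.length; i++) {
                a[i] = 10 * i;
                map[i] = a.length - 1 - i; // reverse permutation
            }
            System.out.println(gather(a, map));
        }
    }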
diff --git a/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
index 9bd8c6b8e9f88..615c8e19ac863 100644
--- a/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_LIRAssembler_aarch64.cpp
@@ -1212,7 +1212,8 @@ void LIR_Assembler::emit_alloc_array(LIR_OpAllocArray* op) {
arrayOopDesc::base_offset_in_bytes(op->type()),
array_element_size(op->type()),
op->klass()->as_register(),
- *op->stub()->entry());
+ *op->stub()->entry(),
+ op->zero_array());
}
__ bind(*op->stub()->continuation());
}
@@ -2504,7 +2505,9 @@ void LIR_Assembler::emit_arraycopy(LIR_OpArrayCopy* op) {
__ call_VM_leaf(entry, 3);
}
- __ bind(*stub->continuation());
+ if (stub != nullptr) {
+ __ bind(*stub->continuation());
+ }
}
diff --git a/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp
index 568be65e1447e..8f1260feba3ea 100644
--- a/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_LIRGenerator_aarch64.cpp
@@ -878,7 +878,13 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
assert(x->number_of_arguments() == 5, "wrong type");
// Make all state_for calls early since they can emit code
- CodeEmitInfo* info = state_for(x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem src(x->argument_at(0), this);
LIRItem src_pos(x->argument_at(1), this);
@@ -911,6 +917,9 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
int flags;
ciArrayKlass* expected_type;
arraycopy_helper(x, &flags, &expected_type);
+ if (x->check_flag(Instruction::OmitChecksFlag)) {
+ flags = 0;
+ }
__ arraycopy(src.result(), src_pos.result(), dst.result(), dst_pos.result(), length.result(), tmp, expected_type, flags, info); // does add_safepoint
}
@@ -1132,7 +1141,13 @@ void LIRGenerator::do_NewInstance(NewInstance* x) {
}
void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
- CodeEmitInfo* info = state_for(x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem length(x->length(), this);
length.load_item_force(FrameMap::r19_opr);
@@ -1149,7 +1164,7 @@ void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
__ metadata2reg(ciTypeArrayKlass::make(elem_type)->constant_encoding(), klass_reg);
CodeStub* slow_path = new NewTypeArrayStub(klass_reg, len, reg, info);
- __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type, klass_reg, slow_path);
+ __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type, klass_reg, slow_path, x->zero_array());
LIR_Opr result = rlock_result(x);
__ move(reg, result);
diff --git a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
index e48d64d90696c..c0455ad1bff69 100644
--- a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.cpp
@@ -272,7 +272,7 @@ void C1_MacroAssembler::initialize_object(Register obj, Register klass, Register
verify_oop(obj);
}
-void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1, Register t2, int base_offset_in_bytes, int f, Register klass, Label& slow_case) {
+void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1, Register t2, int base_offset_in_bytes, int f, Register klass, Label& slow_case, bool zero_array) {
assert_different_registers(obj, len, t1, t2, klass);
// determine alignment mask
@@ -297,7 +297,9 @@ void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1,
// following the length field in initialize_header().
int base_offset = align_up(base_offset_in_bytes, BytesPerWord);
// clear rest of allocated space
- initialize_body(obj, arr_size, base_offset, t1, t2);
+ if (zero_array) {
+ initialize_body(obj, arr_size, base_offset, t1, t2);
+ }
if (Compilation::current()->bailed_out()) {
return;
}
diff --git a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.hpp b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.hpp
index 3a4c868744c7c..fc8e83d706b50 100644
--- a/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/c1_MacroAssembler_aarch64.hpp
@@ -100,7 +100,8 @@ using MacroAssembler::null_check;
// base_offset_in_bytes: offset of first array element, in bytes
// f : element scale factor
// slow_case : exit to slow case implementation if fast allocation fails
- void allocate_array(Register obj, Register len, Register t, Register t2, int base_offset_in_bytes, int f, Register klass, Label& slow_case);
+ // zero_array : zero the allocated array or not
+ void allocate_array(Register obj, Register len, Register t, Register t2, int base_offset_in_bytes, int f, Register klass, Label& slow_case, bool zero_array);
int rsp_offset() const { return _rsp_offset; }
void set_rsp_offset(int n) { _rsp_offset = n; }
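The zero_array flag exists so C1 can skip initialize_body when a freshly allocated array is provably overwritten in full right away. A hedged Java-level sketch of the shape that benefits (which allocations actually qualify is the JIT's decision and is not visible in source):

    public class CopyNoZero {
        // new int[src.length] followed by an arraycopy covering every
        // element: the zeroing done at allocation is redundant, which is
        // what allocate_array(..., zero_array=false) exploits.
        static int[] duplicate(int[] src) {
            int[] dst = new int[src.length];
            System.arraycopy(src, 0, dst, 0, src.length);
            return dst;
        }

        public static void main(String[] args) {
            System.out.println(java.util.Arrays.toString(
                    duplicate(new int[] { 1, 2, 3 })));
        }
    }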
diff --git a/src/hotspot/cpu/aarch64/compressedKlass_aarch64.cpp b/src/hotspot/cpu/aarch64/compressedKlass_aarch64.cpp
index c918d9bdf0eb5..54af69ffaba6c 100644
--- a/src/hotspot/cpu/aarch64/compressedKlass_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/compressedKlass_aarch64.cpp
@@ -69,7 +69,7 @@ static char* reserve_at_eor_compatible_address(size_t size, bool aslr) {
const uint64_t immediate = ((uint64_t)immediates[index]) << 32;
assert(immediate > 0 && Assembler::operand_valid_for_logical_immediate(/*is32*/false, immediate),
"Invalid immediate %d " UINT64_FORMAT, index, immediate);
- result = os::attempt_reserve_memory_at((char*)immediate, size, !ExecMem, mtClass);
+ result = os::attempt_reserve_memory_at((char*)immediate, size, false);
if (result == nullptr) {
log_trace(metaspace, map)("Failed to attach at " UINT64_FORMAT_X, immediate);
}
@@ -112,7 +112,7 @@ char* CompressedKlassPointers::reserve_address_space_for_compressed_classes(size
if (result == nullptr) {
constexpr size_t alignment = nth_bit(32);
log_debug(metaspace, map)("Trying to reserve at a 32-bit-aligned address");
- result = os::reserve_memory_aligned(size, alignment, !ExecMem, mtClass);
+ result = os::reserve_memory_aligned(size, alignment, false);
}
return result;
diff --git a/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
index 427987da97141..d02038b6e9193 100644
--- a/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/g1/g1BarrierSetAssembler_aarch64.cpp
@@ -207,7 +207,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm,
// Does store cross heap regions?
__ eor(tmp1, store_addr, new_val);
- __ lsr(tmp1, tmp1, HeapRegion::LogOfHRGrainBytes);
+ __ lsr(tmp1, tmp1, G1HeapRegion::LogOfHRGrainBytes);
__ cbz(tmp1, done);
// crosses regions, storing null?
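The eor/lsr/cbz sequence is the standard same-region test: two addresses lie in the same G1 region exactly when their XOR has no bits at or above log2(region size). A small Java model of the predicate, with a hypothetical region size:

    public class CrossRegionCheck {
        // Mirrors "eor tmp1, store_addr, new_val" followed by
        // "lsr tmp1, tmp1, LogOfHRGrainBytes": the shift discards the
        // offset-within-region bits, leaving zero only for same-region pairs.
        static boolean crossesRegion(long storeAddr, long newVal, int logRegionBytes) {
            return ((storeAddr ^ newVal) >>> logRegionBytes) != 0;
        }

        public static void main(String[] args) {
            int log = 21; // hypothetical 2 MB regions
            long base = 0x7f0000000000L;
            System.out.println(crossesRegion(base + 0x10, base + 0x20, log));        // false
            System.out.println(crossesRegion(base + 0x10, base + (1L << log), log)); // true
        }
    }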
diff --git a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
index 62414611cce5f..073d992235517 100644
--- a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.cpp
@@ -269,21 +269,6 @@ void BarrierSetAssembler::tlab_allocate(MacroAssembler* masm, Register obj,
// verify_tlab();
}
-void BarrierSetAssembler::incr_allocated_bytes(MacroAssembler* masm,
- Register var_size_in_bytes,
- int con_size_in_bytes,
- Register t1) {
- assert(t1->is_valid(), "need temp reg");
-
- __ ldr(t1, Address(rthread, in_bytes(JavaThread::allocated_bytes_offset())));
- if (var_size_in_bytes->is_valid()) {
- __ add(t1, t1, var_size_in_bytes);
- } else {
- __ add(t1, t1, con_size_in_bytes);
- }
- __ str(t1, Address(rthread, in_bytes(JavaThread::allocated_bytes_offset())));
-}
-
static volatile uint32_t _patching_epoch = 0;
address BarrierSetAssembler::patching_epoch_addr() {
@@ -476,7 +461,7 @@ void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
GrowableArray<RegisterData> registers;
VMReg prev_vm_reg = VMRegImpl::Bad();
- RegMaskIterator rmi(stub->live());
+ RegMaskIterator rmi(stub->preserve_set());
while (rmi.has_next()) {
OptoReg::Name opto_reg = rmi.next();
VMReg vm_reg = OptoReg::as_VMReg(opto_reg);
@@ -491,7 +476,7 @@ void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
index = registers.append(reg_data);
}
} else if (vm_reg->is_FloatRegister()) {
- // We have size encoding in OptoReg of stub->live()
+ // We have size encoding in OptoReg of stub->preserve_set()
// After encoding, float/neon/sve register has only one slot in regmask
// Decode it to get the actual size
VMReg vm_reg_base = vm_reg->as_FloatRegister()->as_VMReg();
@@ -532,12 +517,8 @@ void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
}
}
- // Remove C-ABI SOE registers, scratch regs and _ref register that will be updated
- if (stub->result() != noreg) {
- _gp_regs -= RegSet::range(r19, r30) + RegSet::of(r8, r9, stub->result());
- } else {
- _gp_regs -= RegSet::range(r19, r30) + RegSet::of(r8, r9);
- }
+ // Remove C-ABI SOE registers and scratch regs
+ _gp_regs -= RegSet::range(r19, r30) + RegSet::of(r8, r9);
// Remove C-ABI SOE fp registers
_fp_regs -= FloatRegSet::range(v8, v15);
diff --git a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.hpp b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.hpp
index 8dc475e0597a1..d0e2aa9588827 100644
--- a/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/gc/shared/barrierSetAssembler_aarch64.hpp
@@ -44,11 +44,6 @@ enum class NMethodPatchingType {
};
class BarrierSetAssembler: public CHeapObj<mtGC> {
-private:
- void incr_allocated_bytes(MacroAssembler* masm,
- Register var_size_in_bytes, int con_size_in_bytes,
- Register t1 = noreg);
-
public:
virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, bool is_oop,
Register src, Register dst, Register count, RegSet saved_regs) {}
diff --git a/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
index fe4df9b8c0d1a..5db29729239d3 100644
--- a/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/shenandoah/shenandoahBarrierSetAssembler_aarch64.cpp
@@ -109,18 +109,13 @@ void ShenandoahBarrierSetAssembler::satb_write_barrier_pre(MacroAssembler* masm,
assert_different_registers(obj, pre_val, tmp1, tmp2);
assert(pre_val != noreg && tmp1 != noreg && tmp2 != noreg, "expecting a register");
- Address in_progress(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_active_offset()));
Address index(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()));
Address buffer(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_buffer_offset()));
// Is marking active?
- if (in_bytes(SATBMarkQueue::byte_width_of_active()) == 4) {
- __ ldrw(tmp1, in_progress);
- } else {
- assert(in_bytes(SATBMarkQueue::byte_width_of_active()) == 1, "Assumption");
- __ ldrb(tmp1, in_progress);
- }
- __ cbzw(tmp1, done);
+ Address gc_state(thread, in_bytes(ShenandoahThreadLocalData::gc_state_offset()));
+ __ ldrb(tmp1, gc_state);
+ __ tbz(tmp1, ShenandoahHeap::MARKING_BITPOS, done);
// Do we need to load the previous value?
if (obj != noreg) {
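The pre-barrier now keys off one bit of the per-thread gc_state byte instead of a separate SATB active flag; tbz skips the barrier when that bit is clear. The test itself is an ordinary mask check, sketched here with an assumed bit position (the real value is ShenandoahHeap::MARKING_BITPOS):

    public class GcStateCheck {
        static final int MARKING_BITPOS = 0; // assumption, for illustration only

        // Equivalent of "ldrb tmp1, gc_state; tbz tmp1, MARKING_BITPOS, done"
        static boolean marking(byte gcState) {
            return (gcState & (1 << MARKING_BITPOS)) != 0;
        }

        public static void main(String[] args) {
            System.out.println(marking((byte) 0b0001)); // true: marking active
            System.out.println(marking((byte) 0b0100)); // false: some other phase
        }
    }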
diff --git a/src/hotspot/cpu/aarch64/gc/x/xGlobals_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/x/xGlobals_aarch64.cpp
index 6204f21270374..a9c53da3d0186 100644
--- a/src/hotspot/cpu/aarch64/gc/x/xGlobals_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/x/xGlobals_aarch64.cpp
@@ -142,9 +142,11 @@
// * 63-48 Fixed (16-bits, always zero)
//
-// Default value if probing is not implemented for a certain platform: 128TB
-static const size_t DEFAULT_MAX_ADDRESS_BIT = 47;
-// Minimum value returned, if probing fails: 64GB
+// Default value if probing is not implemented for a certain platform
+// Max address bit is restricted by implicit assumptions in the code, for instance
+// the bit layout of XForwardingEntry or the partial array entry (see XMarkStackEntry) in the mark stack
+static const size_t DEFAULT_MAX_ADDRESS_BIT = 46;
+// Minimum value returned, if probing fails
static const size_t MINIMUM_MAX_ADDRESS_BIT = 36;
static size_t probe_valid_max_address_bit() {
diff --git a/src/hotspot/cpu/aarch64/gc/z/zAddress_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/z/zAddress_aarch64.cpp
index 6c3cea73d1a79..cd834969e1a4f 100644
--- a/src/hotspot/cpu/aarch64/gc/z/zAddress_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/z/zAddress_aarch64.cpp
@@ -36,9 +36,11 @@
#include <sys/mman.h>
#endif // LINUX
-// Default value if probing is not implemented for a certain platform: 128TB
-static const size_t DEFAULT_MAX_ADDRESS_BIT = 47;
-// Minimum value returned, if probing fails: 64GB
+// Default value if probing is not implemented for a certain platform
+// Max address bit is restricted by implicit assumptions in the code, for instance
+// the bit layout of ZForwardingEntry or the partial array entry (see ZMarkStackEntry) in the mark stack
+static const size_t DEFAULT_MAX_ADDRESS_BIT = 46;
+// Minimum value returned, if probing fails
static const size_t MINIMUM_MAX_ADDRESS_BIT = 36;
static size_t probe_valid_max_address_bit() {
diff --git a/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
index 466e77a4460d0..306ff5f8c6e5e 100644
--- a/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/gc/z/zBarrierSetAssembler_aarch64.cpp
@@ -1141,7 +1141,6 @@ class ZSetupArguments {
#define __ masm->
void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, ZLoadBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
BLOCK_COMMENT("ZLoadBarrierStubC2");
// Stub entry
@@ -1160,7 +1159,6 @@ void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, Z
}
void ZBarrierSetAssembler::generate_c2_store_barrier_stub(MacroAssembler* masm, ZStoreBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
BLOCK_COMMENT("ZStoreBarrierStubC2");
// Stub entry
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
index b90b5862ee50e..1929077fa167a 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.cpp
@@ -3879,7 +3879,7 @@ void MacroAssembler::multiply_128_x_128_loop(Register y, Register z,
* r2: y
* r3: ylen
* r4: z
- * r5: zlen
+ * r5: tmp0
* r10: tmp1
* r11: tmp2
* r12: tmp3
@@ -3890,11 +3890,11 @@ void MacroAssembler::multiply_128_x_128_loop(Register y, Register z,
*
*/
void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Register ylen,
- Register z, Register zlen,
+ Register z, Register tmp0,
Register tmp1, Register tmp2, Register tmp3, Register tmp4,
Register tmp5, Register tmp6, Register product_hi) {
- assert_different_registers(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6);
+ assert_different_registers(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, product_hi);
const Register idx = tmp1;
const Register kdx = tmp2;
@@ -3903,7 +3903,7 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
const Register y_idx = tmp4;
const Register carry = tmp5;
const Register product = xlen;
- const Register x_xstart = zlen; // reuse register
+ const Register x_xstart = tmp0;
// First Loop.
//
@@ -3919,9 +3919,9 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
// z[xstart] = (int)carry;
//
- movw(idx, ylen); // idx = ylen;
- movw(kdx, zlen); // kdx = xlen+ylen;
- mov(carry, zr); // carry = 0;
+ movw(idx, ylen); // idx = ylen;
+ addw(kdx, xlen, ylen); // kdx = xlen+ylen;
+ mov(carry, zr); // carry = 0;
Label L_done;
@@ -5535,12 +5535,22 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
Label DONE, SAME;
Register tmp1 = rscratch1;
Register tmp2 = rscratch2;
- Register cnt2 = tmp2; // cnt2 only used in array length compare
int elem_per_word = wordSize/elem_size;
int log_elem_size = exact_log2(elem_size);
+ int klass_offset = arrayOopDesc::klass_offset_in_bytes();
int length_offset = arrayOopDesc::length_offset_in_bytes();
int base_offset
= arrayOopDesc::base_offset_in_bytes(elem_size == 2 ? T_CHAR : T_BYTE);
+ // When the length offset is not aligned to 8 bytes,
+ // we align it down. This is valid because the new
+ // offset will always be the klass offset, which is the same
+ // for type arrays.
+ int start_offset = align_down(length_offset, BytesPerWord);
+ int extra_length = base_offset - start_offset;
+ assert(start_offset == length_offset || start_offset == klass_offset,
+ "start offset must be 8-byte-aligned or be the klass offset");
+ assert(base_offset != start_offset, "must include the length field");
+ extra_length = extra_length / elem_size; // We count in elements, not bytes.
int stubBytesThreshold = 3 * 64 + (UseSIMDForArrayEquals ? 0 : 16);
assert(elem_size == 1 || elem_size == 2, "must be char or byte");
@@ -5574,11 +5584,10 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
// return false;
bind(A_IS_NOT_NULL);
ldrw(cnt1, Address(a1, length_offset));
- ldrw(cnt2, Address(a2, length_offset));
- eorw(tmp5, cnt1, cnt2);
- cbnzw(tmp5, DONE);
- lea(a1, Address(a1, base_offset));
- lea(a2, Address(a2, base_offset));
+ // Increase loop counter by diff between base- and actual start-offset.
+ addw(cnt1, cnt1, extra_length);
+ lea(a1, Address(a1, start_offset));
+ lea(a2, Address(a2, start_offset));
// Check for short strings, i.e. smaller than wordSize.
subs(cnt1, cnt1, elem_per_word);
br(Assembler::LT, SHORT);
@@ -5641,18 +5650,18 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
cbz(a1, DONE);
ldrw(cnt1, Address(a1, length_offset));
cbz(a2, DONE);
- ldrw(cnt2, Address(a2, length_offset));
+ // Increase loop counter by diff between base- and actual start-offset.
+ addw(cnt1, cnt1, extra_length);
+
// on most CPUs a2 is still "locked"(surprisingly) in ldrw and it's
// faster to perform another branch before comparing a1 and a2
cmp(cnt1, (u1)elem_per_word);
br(LE, SHORT); // short or same
- ldr(tmp3, Address(pre(a1, base_offset)));
+ ldr(tmp3, Address(pre(a1, start_offset)));
subs(zr, cnt1, stubBytesThreshold);
br(GE, STUB);
- ldr(tmp4, Address(pre(a2, base_offset)));
+ ldr(tmp4, Address(pre(a2, start_offset)));
sub(tmp5, zr, cnt1, LSL, 3 + log_elem_size);
- cmp(cnt2, cnt1);
- br(NE, DONE);
// Main 16 byte comparison loop with 2 exits
bind(NEXT_DWORD); {
@@ -5684,9 +5693,7 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
b(LAST_CHECK);
bind(STUB);
- ldr(tmp4, Address(pre(a2, base_offset)));
- cmp(cnt2, cnt1);
- br(NE, DONE);
+ ldr(tmp4, Address(pre(a2, start_offset)));
if (elem_size == 2) { // convert to byte counter
lsl(cnt1, cnt1, 1);
}
@@ -5707,12 +5714,9 @@ address MacroAssembler::arrays_equals(Register a1, Register a2, Register tmp3,
mov(result, a2);
b(DONE);
bind(SHORT);
- cmp(cnt2, cnt1);
- br(NE, DONE);
- cbz(cnt1, SAME);
sub(tmp5, zr, cnt1, LSL, 3 + log_elem_size);
- ldr(tmp3, Address(a1, base_offset));
- ldr(tmp4, Address(a2, base_offset));
+ ldr(tmp3, Address(a1, start_offset));
+ ldr(tmp4, Address(a2, start_offset));
bind(LAST_CHECK);
eor(tmp4, tmp3, tmp4);
lslv(tmp5, tmp4, tmp5);
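The arrays_equals rewrite starts the comparison at an 8-byte-aligned offset at or below the length field, so the length word (and, when the aligned offset lands on it, the klass word, which is identical for arrays of the same type) is compared as part of the main data loop, and the separate explicit length check disappears. A worked sketch of the offset arithmetic under assumed layout values:

    public class ArraysEqualsOffsets {
        public static void main(String[] args) {
            // Hypothetical header layout: klass word at 8, length at 12,
            // byte[] elements at 16; real values depend on VM build flags.
            int klassOffset = 8, lengthOffset = 12, baseOffset = 16, elemSize = 1;

            int startOffset = (lengthOffset / 8) * 8;  // align_down -> 8, the klass offset
            int extraLength = (baseOffset - startOffset) / elemSize; // 8 extra "elements"

            // cnt1 becomes array.length + extraLength, and both pointers
            // start at startOffset, so klass and length words are folded
            // into the ordinary 8-byte comparison loop.
            System.out.println("start=" + startOffset + " extra=" + extraLength);
        }
    }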
diff --git a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
index 67fd7236842a7..7b7ecf49c8c34 100644
--- a/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
+++ b/src/hotspot/cpu/aarch64/macroAssembler_aarch64.hpp
@@ -1510,7 +1510,7 @@ class MacroAssembler: public Assembler {
void ghash_load_wide(int index, Register data, FloatRegister result, FloatRegister state);
public:
void multiply_to_len(Register x, Register xlen, Register y, Register ylen, Register z,
- Register zlen, Register tmp1, Register tmp2, Register tmp3,
+ Register tmp0, Register tmp1, Register tmp2, Register tmp3,
Register tmp4, Register tmp5, Register tmp6, Register tmp7);
void mul_add(Register out, Register in, Register offs, Register len, Register k);
void ghash_multiply(FloatRegister result_lo, FloatRegister result_hi,
diff --git a/src/hotspot/cpu/aarch64/relocInfo_aarch64.cpp b/src/hotspot/cpu/aarch64/relocInfo_aarch64.cpp
index 5424f0d9c7572..332f24996930f 100644
--- a/src/hotspot/cpu/aarch64/relocInfo_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/relocInfo_aarch64.cpp
@@ -32,7 +32,7 @@
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
+void Relocation::pd_set_data_value(address x, bool verify_only) {
if (verify_only)
return;
diff --git a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
index 8d207b3f66569..4625bb44f6dcd 100644
--- a/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
+++ b/src/hotspot/cpu/aarch64/stubGenerator_aarch64.cpp
@@ -4645,7 +4645,6 @@ class StubGenerator: public StubCodeGenerator {
* c_rarg2 - y address
* c_rarg3 - y length
* c_rarg4 - z address
- * c_rarg5 - z length
*/
address generate_multiplyToLen() {
__ align(CodeEntryAlignment);
@@ -4657,8 +4656,8 @@ class StubGenerator: public StubCodeGenerator {
const Register y = r2;
const Register ylen = r3;
const Register z = r4;
- const Register zlen = r5;
+ const Register tmp0 = r5;
const Register tmp1 = r10;
const Register tmp2 = r11;
const Register tmp3 = r12;
@@ -4669,7 +4668,7 @@ class StubGenerator: public StubCodeGenerator {
BLOCK_COMMENT("Entry:");
__ enter(); // required for proper stackwalking of RuntimeStub frame
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
__ leave(); // required for proper stackwalking of RuntimeStub frame
__ ret(lr);
@@ -4687,10 +4686,10 @@ class StubGenerator: public StubCodeGenerator {
const Register x = r0;
const Register xlen = r1;
const Register z = r2;
- const Register zlen = r3;
const Register y = r4; // == x
const Register ylen = r5; // == xlen
+ const Register tmp0 = r3;
const Register tmp1 = r10;
const Register tmp2 = r11;
const Register tmp3 = r12;
@@ -4705,7 +4704,7 @@ class StubGenerator: public StubCodeGenerator {
__ push(spilled_regs, sp);
__ mov(y, x);
__ mov(ylen, xlen);
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
__ pop(spilled_regs, sp);
__ leave();
__ ret(lr);
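Dropping the zlen argument works because the result length is fully determined by the operands: an xlen-word number times a ylen-word number fits in exactly xlen + ylen words, which is what the new addw(kdx, xlen, ylen) computes. A quick sanity check of that invariant:

    import java.math.BigInteger;
    import java.util.Random;

    public class MultiplyToLenInvariant {
        public static void main(String[] args) {
            Random r = new Random(42);
            for (int trial = 0; trial < 1000; trial++) {
                int xlen = 1 + r.nextInt(8), ylen = 1 + r.nextInt(8);
                BigInteger x = new BigInteger(32 * xlen, r);
                BigInteger y = new BigInteger(32 * ylen, r);
                // product bit length <= 32*(xlen+ylen), i.e. xlen+ylen ints
                int words = (x.multiply(y).bitLength() + 31) / 32;
                if (words > xlen + ylen) throw new AssertionError("overflow!");
            }
            System.out.println("product always fits in xlen + ylen words");
        }
    }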
diff --git a/src/hotspot/cpu/arm/gc/g1/g1BarrierSetAssembler_arm.cpp b/src/hotspot/cpu/arm/gc/g1/g1BarrierSetAssembler_arm.cpp
index 6d724c750aa34..3c5e29aa8710f 100644
--- a/src/hotspot/cpu/arm/gc/g1/g1BarrierSetAssembler_arm.cpp
+++ b/src/hotspot/cpu/arm/gc/g1/g1BarrierSetAssembler_arm.cpp
@@ -207,7 +207,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm,
// Does store cross heap regions?
__ eor(tmp1, store_addr, new_val);
- __ movs(tmp1, AsmOperand(tmp1, lsr, HeapRegion::LogOfHRGrainBytes));
+ __ movs(tmp1, AsmOperand(tmp1, lsr, G1HeapRegion::LogOfHRGrainBytes));
__ b(done, eq);
// crosses regions, storing null?
diff --git a/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.cpp b/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.cpp
index a4e7f35ec81fa..ea19730673cb6 100644
--- a/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.cpp
+++ b/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.cpp
@@ -159,46 +159,6 @@ void BarrierSetAssembler::tlab_allocate(MacroAssembler* masm, Register obj, Regi
__ str(obj_end, Address(Rthread, JavaThread::tlab_top_offset()));
}
-void BarrierSetAssembler::incr_allocated_bytes(MacroAssembler* masm, RegisterOrConstant size_in_bytes, Register tmp) {
- // Bump total bytes allocated by this thread
- Label done;
-
- // Borrow the Rthread for alloc counter
- Register Ralloc = Rthread;
- __ add(Ralloc, Ralloc, in_bytes(JavaThread::allocated_bytes_offset()));
- __ ldr(tmp, Address(Ralloc));
- __ adds(tmp, tmp, size_in_bytes);
- __ str(tmp, Address(Ralloc), cc);
- __ b(done, cc);
-
- // Increment the high word and store single-copy atomically (that is an unlikely scenario on typical embedded systems as it means >4GB has been allocated)
- // To do so ldrd/strd instructions used which require an even-odd pair of registers. Such a request could be difficult to satisfy by
- // allocating those registers on a higher level, therefore the routine is ready to allocate a pair itself.
- Register low, high;
- // Select ether R0/R1 or R2/R3
-
- if (size_in_bytes.is_register() && (size_in_bytes.as_register() == R0 || size_in_bytes.as_register() == R1)) {
- low = R2;
- high = R3;
- } else {
- low = R0;
- high = R1;
- }
- __ push(RegisterSet(low, high));
-
- __ ldrd(low, Address(Ralloc));
- __ adds(low, low, size_in_bytes);
- __ adc(high, high, 0);
- __ strd(low, Address(Ralloc));
-
- __ pop(RegisterSet(low, high));
-
- __ bind(done);
-
- // Unborrow the Rthread
- __ sub(Rthread, Ralloc, in_bytes(JavaThread::allocated_bytes_offset()));
-}
-
void BarrierSetAssembler::nmethod_entry_barrier(MacroAssembler* masm) {
BarrierSetNMethod* bs_nm = BarrierSet::barrier_set()->barrier_set_nmethod();
diff --git a/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.hpp b/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.hpp
index ae51a054540c6..60021390ea26f 100644
--- a/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.hpp
+++ b/src/hotspot/cpu/arm/gc/shared/barrierSetAssembler_arm.hpp
@@ -40,12 +40,6 @@ enum class NMethodPatchingType {
};
class BarrierSetAssembler: public CHeapObj<mtGC> {
-private:
- void incr_allocated_bytes(MacroAssembler* masm,
- RegisterOrConstant size_in_bytes,
- Register tmp
-);
-
public:
virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, bool is_oop,
Register addr, Register count, int callee_saved_regs) {}
diff --git a/src/hotspot/cpu/arm/relocInfo_arm.cpp b/src/hotspot/cpu/arm/relocInfo_arm.cpp
index fc5b2981ce69c..fb112cdcfc0fa 100644
--- a/src/hotspot/cpu/arm/relocInfo_arm.cpp
+++ b/src/hotspot/cpu/arm/relocInfo_arm.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2008, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2008, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,13 +30,13 @@
#include "oops/oop.hpp"
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
+void Relocation::pd_set_data_value(address x, bool verify_only) {
NativeMovConstReg* ni = nativeMovConstReg_at(addr());
if (verify_only) {
- guarantee(ni->data() == (intptr_t)(x + o), "instructions must match");
+ guarantee(ni->data() == (intptr_t)x, "instructions must match");
} else {
- ni->set_data((intptr_t)(x + o));
+ ni->set_data((intptr_t)x);
}
}
diff --git a/src/hotspot/cpu/ppc/assembler_ppc.cpp b/src/hotspot/cpu/ppc/assembler_ppc.cpp
index 018b66310e2ce..26fc1aacad3a3 100644
--- a/src/hotspot/cpu/ppc/assembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/assembler_ppc.cpp
@@ -1,6 +1,6 @@
/*
- * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2012, 2023 SAP SE. All rights reserved.
+ * Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2012, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -78,9 +78,9 @@ int Assembler::branch_destination(int inst, int pos) {
// Low-level andi-one-instruction-macro.
void Assembler::andi(Register a, Register s, const long ui16) {
- if (is_power_of_2(((jlong) ui16)+1)) {
+ if (is_power_of_2(((unsigned long) ui16)+1)) {
// pow2minus1
- clrldi(a, s, 64 - log2i_exact((((jlong) ui16)+1)));
+ clrldi(a, s, 64 - log2i_exact((((unsigned long) ui16)+1)));
} else if (is_power_of_2((jlong) ui16)) {
// pow2
rlwinm(a, s, 0, 31 - log2i_exact((jlong) ui16), 31 - log2i_exact((jlong) ui16));
diff --git a/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp b/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp
index b58defc1847a3..2caca1dc556e2 100644
--- a/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c1_LIRGenerator_ppc.cpp
@@ -578,7 +578,7 @@ inline bool can_handle_logic_op_as_uimm(ValueType *type, Bytecodes::Code bc) {
is_power_of_2(int_or_long_const) ||
is_power_of_2(-int_or_long_const))) return true;
if (bc == Bytecodes::_land &&
- (is_power_of_2(int_or_long_const+1) ||
+ (is_power_of_2((unsigned long)int_or_long_const+1) ||
(Assembler::is_uimm(int_or_long_const, 32) && is_power_of_2(int_or_long_const)) ||
(int_or_long_const != min_jlong && is_power_of_2(-int_or_long_const)))) return true;
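
The two PPC hunks above move the value+1 power-of-two probes into unsigned arithmetic, presumably because incrementing the maximum signed value (or an all-ones pattern) overflows signed arithmetic, which is undefined behavior, whereas the unsigned wrap to 0 is well defined and is simply not a power of two. A stand-alone sketch (my own is_power_of_2, for illustration):

  #include <cstdint>

  bool is_power_of_2(uint64_t v) { return v != 0 && (v & (v - 1)) == 0; }

  // A pow2-minus-1 mask such as 0x0000ffff is detected by testing value + 1;
  // doing the +1 in the unsigned domain avoids signed-overflow UB at the top
  // of the range, and value == -1 wraps harmlessly to 0 (not a power of two).
  bool is_pow2_minus_1(int64_t value) {
    return is_power_of_2((uint64_t)value + 1);
  }
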
diff --git a/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp b/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
index 219aeaf316d87..9da5a2c658009 100644
--- a/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/c1_MacroAssembler_ppc.cpp
@@ -183,7 +183,7 @@ void C1_MacroAssembler::try_allocate(
Register obj, // result: pointer to object after successful allocation
Register var_size_in_bytes, // object size in bytes if unknown at compile time; invalid otherwise
int con_size_in_bytes, // object size in bytes if known at compile time
- Register t1, // temp register, must be global register for incr_allocated_bytes
+ Register t1, // temp register
Register t2, // temp register
Label& slow_case // continuation point if fast allocation fails
) {
diff --git a/src/hotspot/cpu/ppc/gc/g1/g1BarrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/g1/g1BarrierSetAssembler_ppc.cpp
index ab520162d350e..45438f81345c7 100644
--- a/src/hotspot/cpu/ppc/gc/g1/g1BarrierSetAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/gc/g1/g1BarrierSetAssembler_ppc.cpp
@@ -243,7 +243,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm, Decorato
// Does store cross heap regions?
__ xorr(tmp1, store_addr, new_val);
- __ srdi_(tmp1, tmp1, HeapRegion::LogOfHRGrainBytes);
+ __ srdi_(tmp1, tmp1, G1HeapRegion::LogOfHRGrainBytes);
__ beq(CCR0, filtered);
// Crosses regions, storing null?
diff --git a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
index 3bb205d659f0c..16de0b6a7ddd5 100644
--- a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.cpp
@@ -282,7 +282,7 @@ OptoReg::Name BarrierSetAssembler::refine_register(const Node* node, OptoReg::Na
#define __ _masm->
SaveLiveRegisters::SaveLiveRegisters(MacroAssembler *masm, BarrierStubC2 *stub)
- : _masm(masm), _reg_mask(stub->live()), _result_reg(stub->result()) {
+ : _masm(masm), _reg_mask(stub->preserve_set()) {
const int register_save_size = iterate_over_register_mask(ACTION_COUNT_ONLY) * BytesPerWord;
_frame_size = align_up(register_save_size, frame::alignment_in_bytes)
@@ -317,11 +317,6 @@ int SaveLiveRegisters::iterate_over_register_mask(IterationAction action, int of
if (vm_reg->is_Register()) {
Register std_reg = vm_reg->as_Register();
- // '_result_reg' will hold the end result of the operation. Its content must thus not be preserved.
- if (std_reg == _result_reg) {
- continue;
- }
-
if (std_reg->encoding() >= R2->encoding() && std_reg->encoding() <= R12->encoding()) {
reg_save_index++;
diff --git a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.hpp b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.hpp
index d70fce77acc91..9c1cef5eea46c 100644
--- a/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/gc/shared/barrierSetAssembler_ppc.hpp
@@ -98,7 +98,6 @@ class BarrierSetAssembler: public CHeapObj<mtGC> {
class SaveLiveRegisters {
MacroAssembler* _masm;
RegMask _reg_mask;
- Register _result_reg;
int _frame_size;
public:
diff --git a/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp
index 5696b78eba858..a54e2844e39f1 100644
--- a/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/gc/z/zBarrierSetAssembler_ppc.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2023 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -887,7 +887,6 @@ class ZSetupArguments {
#define __ masm->
void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, ZLoadBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
__ block_comment("generate_c2_load_barrier_stub (zgc) {");
__ bind(*stub->entry());
@@ -911,7 +910,6 @@ void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, Z
}
void ZBarrierSetAssembler::generate_c2_store_barrier_stub(MacroAssembler* masm, ZStoreBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
__ block_comment("ZStoreBarrierStubC2");
// Stub entry
diff --git a/src/hotspot/cpu/ppc/macroAssembler_ppc.cpp b/src/hotspot/cpu/ppc/macroAssembler_ppc.cpp
index 701cd5cf637e9..4b74b1ae941d8 100644
--- a/src/hotspot/cpu/ppc/macroAssembler_ppc.cpp
+++ b/src/hotspot/cpu/ppc/macroAssembler_ppc.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2023 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -2209,9 +2209,6 @@ void MacroAssembler::tlab_allocate(
std(new_top, in_bytes(JavaThread::tlab_top_offset()), R16_thread);
//verify_tlab(); not implemented
}
-void MacroAssembler::incr_allocated_bytes(RegisterOrConstant size_in_bytes, Register t1, Register t2) {
- unimplemented("incr_allocated_bytes");
-}
address MacroAssembler::emit_trampoline_stub(int destination_toc_offset,
int insts_call_instruction_offset, Register Rtoc) {
@@ -3901,7 +3898,7 @@ void MacroAssembler::muladd(Register out, Register in,
void MacroAssembler::multiply_to_len(Register x, Register xlen,
Register y, Register ylen,
- Register z, Register zlen,
+ Register z,
Register tmp1, Register tmp2,
Register tmp3, Register tmp4,
Register tmp5, Register tmp6,
@@ -3912,11 +3909,11 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen,
ShortBranchVerifier sbv(this);
- assert_different_registers(x, xlen, y, ylen, z, zlen,
+ assert_different_registers(x, xlen, y, ylen, z,
tmp1, tmp2, tmp3, tmp4, tmp5, tmp6);
- assert_different_registers(x, xlen, y, ylen, z, zlen,
+ assert_different_registers(x, xlen, y, ylen, z,
tmp1, tmp2, tmp3, tmp4, tmp5, tmp7);
- assert_different_registers(x, xlen, y, ylen, z, zlen,
+ assert_different_registers(x, xlen, y, ylen, z,
tmp1, tmp2, tmp3, tmp4, tmp5, tmp8);
const Register idx = tmp1;
@@ -3944,7 +3941,7 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen,
// z[xstart] = (int)carry;
mr_if_needed(idx, ylen); // idx = ylen
- mr_if_needed(kdx, zlen); // kdx = xlen + ylen
+ add(kdx, xlen, ylen); // kdx = xlen + ylen
li(carry, 0); // carry = 0
Label L_done;
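
Dropping zlen works because an m-limb by n-limb product occupies exactly m + n limbs, so kdx = xlen + ylen can be derived inside the stub instead of being passed in. A schoolbook sketch of the contract the intrinsic implements (simplified; the real stub works on BigInteger magnitude arrays):

  #include <cstdint>
  #include <cstring>

  // z must provide exactly xlen + ylen limbs; limbs are most-significant
  // first, as in java.math.BigInteger::multiplyToLen.
  void multiply_to_len(const uint32_t* x, int xlen,
                       const uint32_t* y, int ylen, uint32_t* z) {
    int zlen = xlen + ylen;              // derived, no need to pass it in
    memset(z, 0, zlen * sizeof(uint32_t));
    for (int i = xlen - 1; i >= 0; i--) {
      uint64_t carry = 0;
      for (int j = ylen - 1; j >= 0; j--) {
        uint64_t p = (uint64_t)x[i] * y[j] + z[i + j + 1] + carry;
        z[i + j + 1] = (uint32_t)p;
        carry = p >> 32;
      }
      z[i] = (uint32_t)carry;
    }
  }
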
diff --git a/src/hotspot/cpu/ppc/macroAssembler_ppc.hpp b/src/hotspot/cpu/ppc/macroAssembler_ppc.hpp
index 92db8a86b42ff..c92f856437635 100644
--- a/src/hotspot/cpu/ppc/macroAssembler_ppc.hpp
+++ b/src/hotspot/cpu/ppc/macroAssembler_ppc.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2002, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2002, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2023 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -626,7 +626,6 @@ class MacroAssembler: public Assembler {
Register t1, // temp register
Label& slow_case // continuation point if fast allocation fails
);
- void incr_allocated_bytes(RegisterOrConstant size_in_bytes, Register t1, Register t2);
enum { trampoline_stub_size = 6 * 4 };
address emit_trampoline_stub(int destination_toc_offset, int insts_call_instruction_offset, Register Rtoc = noreg);
@@ -785,7 +784,7 @@ class MacroAssembler: public Assembler {
Register tmp1, Register tmp2, Register carry);
void multiply_to_len(Register x, Register xlen,
Register y, Register ylen,
- Register z, Register zlen,
+ Register z,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5,
Register tmp6, Register tmp7, Register tmp8, Register tmp9, Register tmp10,
Register tmp11, Register tmp12, Register tmp13);
diff --git a/src/hotspot/cpu/ppc/relocInfo_ppc.cpp b/src/hotspot/cpu/ppc/relocInfo_ppc.cpp
index 39e50465231ae..5d0d2785bf427 100644
--- a/src/hotspot/cpu/ppc/relocInfo_ppc.cpp
+++ b/src/hotspot/cpu/ppc/relocInfo_ppc.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2000, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2000, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2015 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -32,10 +32,7 @@
#include "oops/oop.hpp"
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
- // Currently we don't support splitting of relocations.
- assert(o == 0, "tried to split relocations");
-
+void Relocation::pd_set_data_value(address x, bool verify_only) {
if (!verify_only) {
if (format() != 1) {
nativeMovConstReg_at(addr())->set_data_plain(((intptr_t)x), code());
diff --git a/src/hotspot/cpu/ppc/stubGenerator_ppc.cpp b/src/hotspot/cpu/ppc/stubGenerator_ppc.cpp
index 63502c00b2341..ad8b76ba667a4 100644
--- a/src/hotspot/cpu/ppc/stubGenerator_ppc.cpp
+++ b/src/hotspot/cpu/ppc/stubGenerator_ppc.cpp
@@ -3204,7 +3204,6 @@ class StubGenerator: public StubCodeGenerator {
// R5 - y address
// R6 - y length
// R7 - z address
- // R8 - z length
//
address generate_multiplyToLen() {
@@ -3217,7 +3216,6 @@ class StubGenerator: public StubCodeGenerator {
const Register y = R5;
const Register ylen = R6;
const Register z = R7;
- const Register zlen = R8;
const Register tmp1 = R2; // TOC not used.
const Register tmp2 = R9;
@@ -3240,7 +3238,6 @@ class StubGenerator: public StubCodeGenerator {
// C2 does not respect int to long conversion for stub calls.
__ clrldi(xlen, xlen, 32);
__ clrldi(ylen, ylen, 32);
- __ clrldi(zlen, zlen, 32);
// Save non-volatile regs (frameless).
int current_offs = 8;
@@ -3253,7 +3250,7 @@ class StubGenerator: public StubCodeGenerator {
__ std(R30, -current_offs, R1_SP); current_offs += 8;
__ std(R31, -current_offs, R1_SP);
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5,
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp1, tmp2, tmp3, tmp4, tmp5,
tmp6, tmp7, tmp8, tmp9, tmp10, tmp11, tmp12, tmp13);
// Restore non-volatile regs.
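
The clrldi calls above exist because, per the comment in this stub, C2 does not respect int-to-long conversion for stub calls: the upper halves of registers carrying int arguments may hold stale bits, so the stub zero-extends them itself. With zlen gone, one such clear disappears. In effect:

  // clrldi(xlen, xlen, 32): clear the leftmost 32 bits, i.e. zero-extend
  uint64_t zero_extend_int_arg(uint64_t reg) {
    return reg & 0xffffffffULL;
  }
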
diff --git a/src/hotspot/cpu/riscv/assembler_riscv.hpp b/src/hotspot/cpu/riscv/assembler_riscv.hpp
index 388e7f9eb941f..616aee82b996a 100644
--- a/src/hotspot/cpu/riscv/assembler_riscv.hpp
+++ b/src/hotspot/cpu/riscv/assembler_riscv.hpp
@@ -656,39 +656,35 @@ class Assembler : public AbstractAssembler {
#undef INSN
-#define INSN(NAME, op) \
- void NAME(Register Rd, const int32_t offset) { \
- guarantee(is_simm21(offset) && ((offset % 2) == 0), "offset is invalid."); \
- unsigned insn = 0; \
- patch((address)&insn, 6, 0, op); \
- patch_reg((address)&insn, 7, Rd); \
- patch((address)&insn, 19, 12, (uint32_t)((offset >> 12) & 0xff)); \
- patch((address)&insn, 20, (uint32_t)((offset >> 11) & 0x1)); \
- patch((address)&insn, 30, 21, (uint32_t)((offset >> 1) & 0x3ff)); \
- patch((address)&insn, 31, (uint32_t)((offset >> 20) & 0x1)); \
- emit(insn); \
+ private:
+ // All calls and jumps must go via MASM.
+ // Format J-type
+ void _jal(Register Rd, const int32_t offset) {
+ guarantee(is_simm21(offset) && ((offset % 2) == 0), "offset is invalid.");
+ unsigned insn = 0;
+ patch((address)&insn, 6, 0, 0b1101111);
+ patch_reg((address)&insn, 7, Rd);
+ patch((address)&insn, 19, 12, (uint32_t)((offset >> 12) & 0xff));
+ patch((address)&insn, 20, (uint32_t)((offset >> 11) & 0x1));
+ patch((address)&insn, 30, 21, (uint32_t)((offset >> 1) & 0x3ff));
+ patch((address)&insn, 31, (uint32_t)((offset >> 20) & 0x1));
+ emit(insn);
}
- INSN(jal, 0b1101111);
-
-#undef INSN
-
-#define INSN(NAME, op, funct) \
- void NAME(Register Rd, Register Rs, const int32_t offset) { \
- guarantee(is_simm12(offset), "offset is invalid."); \
- unsigned insn = 0; \
- patch((address)&insn, 6, 0, op); \
- patch_reg((address)&insn, 7, Rd); \
- patch((address)&insn, 14, 12, funct); \
- patch_reg((address)&insn, 15, Rs); \
- int32_t val = offset & 0xfff; \
- patch((address)&insn, 31, 20, val); \
- emit(insn); \
+ // Format I-type
+ void _jalr(Register Rd, Register Rs, const int32_t offset) {
+ guarantee(is_simm12(offset), "offset is invalid.");
+ unsigned insn = 0;
+ patch((address)&insn, 6, 0, 0b1100111);
+ patch_reg((address)&insn, 7, Rd);
+ patch((address)&insn, 14, 12, 0b000);
+ patch_reg((address)&insn, 15, Rs);
+ int32_t val = offset & 0xfff;
+ patch((address)&insn, 31, 20, val);
+ emit(insn);
}
- INSN(_jalr, 0b1100111, 0b000);
-
-#undef INSN
+ public:
enum barrier {
i = 0b1000, o = 0b0100, r = 0b0010, w = 0b0001,
@@ -1867,28 +1863,65 @@ enum Nf {
patch_VArith(op, Vd, funct3, Vs1->raw_encoding(), Vs2, vm, funct6); \
}
- // Vector Bit-manipulation used in Cryptography (Zvkb) Extension
+ // Vector Bit-manipulation used in Cryptography (Zvbb) Extension
INSN(vandn_vv, 0b1010111, 0b000, 0b000001);
- INSN(vandn_vx, 0b1010111, 0b100, 0b000001);
- INSN(vclmul_vv, 0b1010111, 0b010, 0b001100);
- INSN(vclmul_vx, 0b1010111, 0b110, 0b001100);
- INSN(vclmulh_vv, 0b1010111, 0b010, 0b001101);
- INSN(vclmulh_vx, 0b1010111, 0b110, 0b001101);
INSN(vror_vv, 0b1010111, 0b000, 0b010100);
- INSN(vror_vx, 0b1010111, 0b100, 0b010100);
INSN(vrol_vv, 0b1010111, 0b000, 0b010101);
+
+ // Vector Bit-manipulation used in Cryptography (Zvbc) Extension
+ INSN(vclmul_vv, 0b1010111, 0b010, 0b001100);
+ INSN(vclmulh_vv, 0b1010111, 0b010, 0b001101);
+
+#undef INSN
+
+#define INSN(NAME, op, funct3, funct6) \
+ void NAME(VectorRegister Vd, VectorRegister Vs2, Register Rs1, VectorMask vm = unmasked) { \
+ patch_VArith(op, Vd, funct3, Rs1->raw_encoding(), Vs2, vm, funct6); \
+ }
+
+ // Vector Bit-manipulation used in Cryptography (Zvbb) Extension
INSN(vrol_vx, 0b1010111, 0b100, 0b010101);
+ INSN(vror_vx, 0b1010111, 0b100, 0b010100);
#undef INSN
+#define patch_VArith_imm6(op, Reg, funct3, Reg_or_Imm5, I5, Vs2, vm, funct6) \
+ unsigned insn = 0; \
+ patch((address)&insn, 6, 0, op); \
+ patch((address)&insn, 14, 12, funct3); \
+ patch((address)&insn, 19, 15, Reg_or_Imm5); \
+ patch((address)&insn, 25, vm); \
+ patch((address)&insn, 26, I5); \
+ patch((address)&insn, 31, 27, funct6); \
+ patch_reg((address)&insn, 7, Reg); \
+ patch_reg((address)&insn, 20, Vs2); \
+ emit(insn)
+
+#define INSN(NAME, op, funct3, funct6) \
+ void NAME(VectorRegister Vd, VectorRegister Vs2, uint32_t imm, VectorMask vm = unmasked) { \
+ guarantee(is_uimm6(imm), "uimm is invalid"); \
+ patch_VArith_imm6(op, Vd, funct3, (uint32_t)(imm & 0x1f), (uint32_t)((imm >> 5) & 0x1), Vs2, vm, funct6); \
+ }
+
+ // Vector Bit-manipulation used in Cryptography (Zvbb) Extension
+ // NOTE: there is no corresponding vrol.vi supplied by the extension, but it can be emulated with vror.vi easily.
+ INSN(vror_vi, 0b1010111, 0b011, 0b01010);
+
+#undef INSN
+#undef patch_VArith_imm6
+
#define INSN(NAME, op, funct3, Vs1, funct6) \
void NAME(VectorRegister Vd, VectorRegister Vs2, VectorMask vm = unmasked) { \
patch_VArith(op, Vd, funct3, Vs1, Vs2, vm, funct6); \
}
// Vector Bit-manipulation used in Cryptography (Zvkb) Extension
- INSN(vbrev8_v, 0b1010111, 0b010, 0b01000, 0b010010);
- INSN(vrev8_v, 0b1010111, 0b010, 0b01001, 0b010010);
+ INSN(vbrev_v, 0b1010111, 0b010, 0b01010, 0b010010); // reverse bits in every element
+ INSN(vbrev8_v, 0b1010111, 0b010, 0b01000, 0b010010); // reverse bits in every byte of element
+ INSN(vrev8_v, 0b1010111, 0b010, 0b01001, 0b010010); // reverse bytes in every element
+
+ INSN(vclz_v, 0b1010111, 0b010, 0b01100, 0b010010); // count leading zeros
+ INSN(vctz_v, 0b1010111, 0b010, 0b01101, 0b010010); // count trailing zeros
#undef INSN
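
The vror_vi note relies on the usual rotate identity: a rotate-left by imm is a rotate-right by SEW - imm. In scalar form, assuming 32-bit elements (sketch):

  #include <cstdint>

  uint32_t rotr32(uint32_t x, unsigned n) {
    return (x >> (n & 31)) | (x << ((32 - n) & 31));
  }

  // Emulating vrol.vi: rotl(x, imm) == rotr(x, (32 - imm) & 31)
  uint32_t rotl32(uint32_t x, unsigned n) {
    return rotr32(x, (32 - n) & 31);
  }
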
@@ -1904,6 +1937,16 @@ enum Nf {
#undef INSN
+#define INSN(NAME, op, funct3, Vs1, funct6) \
+ void NAME(VectorRegister Vd, VectorRegister Vs2, VectorMask vm = unmasked) { \
+ patch_VArith(op, Vd, funct3, Vs1, Vs2, vm, funct6); \
+ }
+
+ // Vector Basic Bit-manipulation (Zvbb) Extension
+ INSN(vcpop_v, 0b1010111, 0b010, 0b01110, 0b010010);
+
+#undef INSN
+
#undef patch_VArith
// ====================================
@@ -2285,21 +2328,23 @@ enum Nf {
#undef INSN
-#define INSN(NAME, funct4, op) \
- void NAME(Register Rs1) { \
- assert_cond(Rs1 != x0); \
- uint16_t insn = 0; \
- c_patch((address)&insn, 1, 0, op); \
- c_patch_reg((address)&insn, 2, x0); \
- c_patch_reg((address)&insn, 7, Rs1); \
- c_patch((address)&insn, 15, 12, funct4); \
- emit_int16(insn); \
+ private:
+ // All calls and jumps must go via MASM.
+ // Format CR, c.jr/c.jalr
+ // Note: C (compressed) instructions can't be changed after emission, i.e. no relocation patching.
+ template <uint8_t InstructionType, uint8_t FunctionType>
+ void c_cr_if(Register Rs1) {
+ assert_cond(Rs1 != x0);
+ uint16_t insn = 0;
+ c_patch((address)&insn, 1, 0, FunctionType);
+ c_patch_reg((address)&insn, 2, x0);
+ c_patch_reg((address)&insn, 7, Rs1);
+ c_patch((address)&insn, 15, 12, InstructionType);
+ emit_int16(insn);
}
- INSN(c_jr, 0b1000, 0b10);
- INSN(c_jalr, 0b1001, 0b10);
-
-#undef INSN
+ void c_jr(Register Rs1) { c_cr_if<0b1000, 0b10>(Rs1); }
+ void c_jalr(Register Rs1) { c_cr_if<0b1001, 0b10>(Rs1); }
typedef void (Assembler::* j_c_insn)(address dest);
typedef void (Assembler::* compare_and_branch_c_insn)(Register Rs1, address dest);
@@ -2322,35 +2367,36 @@ enum Nf {
}
}
-#define INSN(NAME, funct3, op) \
- void NAME(int32_t offset) { \
- assert(is_simm12(offset) && ((offset % 2) == 0), "invalid encoding"); \
- uint16_t insn = 0; \
- c_patch((address)&insn, 1, 0, op); \
- c_patch((address)&insn, 2, 2, (offset & nth_bit(5)) >> 5); \
- c_patch((address)&insn, 5, 3, (offset & right_n_bits(4)) >> 1); \
- c_patch((address)&insn, 6, 6, (offset & nth_bit(7)) >> 7); \
- c_patch((address)&insn, 7, 7, (offset & nth_bit(6)) >> 6); \
- c_patch((address)&insn, 8, 8, (offset & nth_bit(10)) >> 10); \
- c_patch((address)&insn, 10, 9, (offset & right_n_bits(10)) >> 8); \
- c_patch((address)&insn, 11, 11, (offset & nth_bit(4)) >> 4); \
- c_patch((address)&insn, 12, 12, (offset & nth_bit(11)) >> 11); \
- c_patch((address)&insn, 15, 13, funct3); \
- emit_int16(insn); \
- } \
- void NAME(address dest) { \
- assert_cond(dest != nullptr); \
- int64_t distance = dest - pc(); \
- assert(is_simm12(distance) && ((distance % 2) == 0), "invalid encoding"); \
- c_j(distance); \
- } \
- void NAME(Label &L) { \
- wrap_label(L, &Assembler::NAME); \
+ // Format CJ, c.j (c.jal)
+ // Note: C (compressed) instructions can't be changed after emission, i.e. no relocation patching.
+ void c_j(int32_t offset) {
+ assert(is_simm12(offset) && ((offset % 2) == 0), "invalid encoding");
+ uint16_t insn = 0;
+ c_patch((address)&insn, 1, 0, 0b01);
+ c_patch((address)&insn, 2, 2, (offset & nth_bit(5)) >> 5);
+ c_patch((address)&insn, 5, 3, (offset & right_n_bits(4)) >> 1);
+ c_patch((address)&insn, 6, 6, (offset & nth_bit(7)) >> 7);
+ c_patch((address)&insn, 7, 7, (offset & nth_bit(6)) >> 6);
+ c_patch((address)&insn, 8, 8, (offset & nth_bit(10)) >> 10);
+ c_patch((address)&insn, 10, 9, (offset & right_n_bits(10)) >> 8);
+ c_patch((address)&insn, 11, 11, (offset & nth_bit(4)) >> 4);
+ c_patch((address)&insn, 12, 12, (offset & nth_bit(11)) >> 11);
+ c_patch((address)&insn, 15, 13, 0b101);
+ emit_int16(insn);
}
- INSN(c_j, 0b101, 0b01);
+ void c_j(address dest) {
+ assert_cond(dest != nullptr);
+ int64_t distance = dest - pc();
+ assert(is_simm12(distance) && ((distance % 2) == 0), "invalid encoding");
+ c_j(distance);
+ }
-#undef INSN
+ void c_j(Label &L) {
+ wrap_label(L, &Assembler::c_j);
+ }
+
+ public:
#define INSN(NAME, funct3, op) \
void NAME(Register Rs1, int32_t imm) { \
@@ -2803,24 +2849,35 @@ enum Nf {
// --------------------------
// Unconditional branch instructions
// --------------------------
-#define INSN(NAME) \
- void NAME(Register Rd, Register Rs, const int32_t offset) { \
- /* jalr -> c.jr/c.jalr */ \
- if (do_compress() && (offset == 0 && Rs != x0)) { \
- if (Rd == x1) { \
- c_jalr(Rs); \
- return; \
- } else if (Rd == x0) { \
- c_jr(Rs); \
- return; \
- } \
- } \
- _jalr(Rd, Rs, offset); \
+ protected:
+ // All calls and jumps must go via MASM.
+ void jalr(Register Rd, Register Rs, const int32_t offset) {
+ /* jalr -> c.jr/c.jalr */
+ if (do_compress() && (offset == 0 && Rs != x0)) {
+ if (Rd == x1) {
+ c_jalr(Rs);
+ return;
+ } else if (Rd == x0) {
+ c_jr(Rs);
+ return;
+ }
+ }
+ _jalr(Rd, Rs, offset);
}
- INSN(jalr);
+ void jal(Register Rd, const int32_t offset) {
+ /* jal -> c.j, note c.jal is RV32C only */
+ if (do_compress() &&
+ Rd == x0 &&
+ is_simm12(offset) && ((offset % 2) == 0)) {
+ c_j(offset);
+ return;
+ }
-#undef INSN
+ _jal(Rd, offset);
+ }
+
+ public:
// --------------------------
// Miscellaneous Instructions
@@ -3000,18 +3057,6 @@ enum Nf {
#undef INSN
-// ---------------------------------------------------------------------------------------
-
-#define INSN(NAME, REGISTER) \
- void NAME(Register Rs) { \
- jalr(REGISTER, Rs, 0); \
- }
-
- INSN(jr, x0);
- INSN(jalr, x1);
-
-#undef INSN
-
// -------------- ZCB Instruction Definitions --------------
// Zcb additional C instructions
private:
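
For reference, the bit scattering in _jal above is the standard RISC-V J-type immediate layout, imm[20|10:1|11|19:12] packed into insn[31:12]. A stand-alone encoder performing the same packing as the patch() calls (sketch):

  #include <cstdint>

  // jal rd, offset: offset is a signed, even, 21-bit PC-relative distance.
  uint32_t encode_jal(uint32_t rd, int32_t offset) {
    uint32_t insn = 0b1101111;                          // opcode, insn[6:0]
    insn |= (rd & 0x1f) << 7;                           // rd,     insn[11:7]
    insn |= (uint32_t)((offset >> 12) & 0xff)  << 12;   // imm[19:12]
    insn |= (uint32_t)((offset >> 11) & 0x1)   << 20;   // imm[11]
    insn |= (uint32_t)((offset >> 1)  & 0x3ff) << 21;   // imm[10:1]
    insn |= (uint32_t)((offset >> 20) & 0x1)   << 31;   // imm[20]
    return insn;
  }
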
diff --git a/src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp b/src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp
index 5d0fa3fad3cec..b3168310e563f 100644
--- a/src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/c1_LIRAssembler_riscv.cpp
@@ -135,7 +135,7 @@ static jlong as_long(LIR_Opr data) {
Address LIR_Assembler::as_Address(LIR_Address* addr, Register tmp) {
if (addr->base()->is_illegal()) {
assert(addr->index()->is_illegal(), "must be illegal too");
- __ movptr(tmp, addr->disp());
+ __ movptr(tmp, (address)addr->disp());
return Address(tmp, 0);
}
@@ -1841,17 +1841,7 @@ void LIR_Assembler::leal(LIR_Opr addr, LIR_Opr dest, LIR_PatchCode patch_code, C
void LIR_Assembler::rt_call(LIR_Opr result, address dest, const LIR_OprList* args, LIR_Opr tmp, CodeEmitInfo* info) {
assert(!tmp->is_valid(), "don't need temporary");
- CodeBlob *cb = CodeCache::find_blob(dest);
- if (cb != nullptr) {
- __ far_call(RuntimeAddress(dest));
- } else {
- RuntimeAddress target(dest);
- __ relocate(target.rspec(), [&] {
- int32_t offset;
- __ movptr(t0, target.target(), offset);
- __ jalr(x1, t0, offset);
- });
- }
+ __ rt_call(dest);
if (info != nullptr) {
add_call_info_here(info);
diff --git a/src/hotspot/cpu/riscv/c2_MacroAssembler_riscv.cpp b/src/hotspot/cpu/riscv/c2_MacroAssembler_riscv.cpp
index 2468bf70fe53e..dd9a9974f1b94 100644
--- a/src/hotspot/cpu/riscv/c2_MacroAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/c2_MacroAssembler_riscv.cpp
@@ -2762,6 +2762,10 @@ void C2_MacroAssembler::compare_integral_v(VectorRegister vd, VectorRegister src
case BoolTest::ge: vmsge_vv(vd, src1, src2, vm); break;
case BoolTest::lt: vmslt_vv(vd, src1, src2, vm); break;
case BoolTest::gt: vmsgt_vv(vd, src1, src2, vm); break;
+ case BoolTest::ule: vmsleu_vv(vd, src1, src2, vm); break;
+ case BoolTest::uge: vmsgeu_vv(vd, src1, src2, vm); break;
+ case BoolTest::ult: vmsltu_vv(vd, src1, src2, vm); break;
+ case BoolTest::ugt: vmsgtu_vv(vd, src1, src2, vm); break;
default:
assert(false, "unsupported compare condition");
ShouldNotReachHere();
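
The new ule/uge/ult/ugt cases are needed because signed and unsigned orderings diverge once the top bit is set, which is why the vector backend must select the distinct unsigned compare forms (vmsltu and friends). The scalar analogue:

  #include <cstdint>

  // -1 and 0xffffffff share a bit pattern but order differently:
  //   signed:   -1 < 1          (vmslt)
  //   unsigned: 0xffffffff > 1  (vmsltu)
  bool lt_signed(int32_t a, int32_t b)     { return a < b; }
  bool lt_unsigned(uint32_t a, uint32_t b) { return a < b; }
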
diff --git a/src/hotspot/cpu/riscv/gc/g1/g1BarrierSetAssembler_riscv.cpp b/src/hotspot/cpu/riscv/gc/g1/g1BarrierSetAssembler_riscv.cpp
index fa7df32d7e944..062f80290626f 100644
--- a/src/hotspot/cpu/riscv/gc/g1/g1BarrierSetAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/gc/g1/g1BarrierSetAssembler_riscv.cpp
@@ -194,7 +194,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm,
// Does store cross heap regions?
__ xorr(tmp1, store_addr, new_val);
- __ srli(tmp1, tmp1, HeapRegion::LogOfHRGrainBytes);
+ __ srli(tmp1, tmp1, G1HeapRegion::LogOfHRGrainBytes);
__ beqz(tmp1, done);
// crosses regions, storing null?
diff --git a/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.cpp b/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.cpp
index f411a5572c7c8..0f3850809775c 100644
--- a/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.cpp
@@ -211,21 +211,6 @@ void BarrierSetAssembler::tlab_allocate(MacroAssembler* masm, Register obj,
}
}
-void BarrierSetAssembler::incr_allocated_bytes(MacroAssembler* masm,
- Register var_size_in_bytes,
- int con_size_in_bytes,
- Register tmp1) {
- assert(tmp1->is_valid(), "need temp reg");
-
- __ ld(tmp1, Address(xthread, in_bytes(JavaThread::allocated_bytes_offset())));
- if (var_size_in_bytes->is_valid()) {
- __ add(tmp1, tmp1, var_size_in_bytes);
- } else {
- __ add(tmp1, tmp1, con_size_in_bytes);
- }
- __ sd(tmp1, Address(xthread, in_bytes(JavaThread::allocated_bytes_offset())));
-}
-
static volatile uint32_t _patching_epoch = 0;
address BarrierSetAssembler::patching_epoch_addr() {
@@ -396,7 +381,7 @@ OptoReg::Name BarrierSetAssembler::refine_register(const Node* node, OptoReg::Na
void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
// Record registers that needs to be saved/restored
- RegMaskIterator rmi(stub->live());
+ RegMaskIterator rmi(stub->preserve_set());
while (rmi.has_next()) {
const OptoReg::Name opto_reg = rmi.next();
if (OptoReg::is_reg(opto_reg)) {
@@ -414,12 +399,8 @@ void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
}
}
- // Remove C-ABI SOE registers, tmp regs and _ref register that will be updated
- if (stub->result() != noreg) {
- _gp_regs -= RegSet::range(x18, x27) + RegSet::of(x2) + RegSet::of(x8, x9) + RegSet::of(x5, stub->result());
- } else {
- _gp_regs -= RegSet::range(x18, x27) + RegSet::of(x2, x5) + RegSet::of(x8, x9);
- }
+ // Remove C-ABI SOE registers and tmp regs
+ _gp_regs -= RegSet::range(x18, x27) + RegSet::of(x2, x5) + RegSet::of(x8, x9);
}
SaveLiveRegisters::SaveLiveRegisters(MacroAssembler* masm, BarrierStubC2* stub)
diff --git a/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.hpp b/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.hpp
index eb0801d237f70..c1f51d768df5d 100644
--- a/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.hpp
+++ b/src/hotspot/cpu/riscv/gc/shared/barrierSetAssembler_riscv.hpp
@@ -45,11 +45,6 @@ enum class NMethodPatchingType {
};
class BarrierSetAssembler: public CHeapObj<mtGC> {
-private:
- void incr_allocated_bytes(MacroAssembler* masm,
- Register var_size_in_bytes, int con_size_in_bytes,
- Register t1 = noreg);
-
public:
virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, bool is_oop,
Register src, Register dst, Register count, RegSet saved_regs) {}
diff --git a/src/hotspot/cpu/riscv/gc/shenandoah/shenandoahBarrierSetAssembler_riscv.cpp b/src/hotspot/cpu/riscv/gc/shenandoah/shenandoahBarrierSetAssembler_riscv.cpp
index 26d60441c2d2c..9be8259e7e85d 100644
--- a/src/hotspot/cpu/riscv/gc/shenandoah/shenandoahBarrierSetAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/gc/shenandoah/shenandoahBarrierSetAssembler_riscv.cpp
@@ -112,18 +112,14 @@ void ShenandoahBarrierSetAssembler::satb_write_barrier_pre(MacroAssembler* masm,
assert_different_registers(obj, pre_val, tmp1, tmp2);
assert(pre_val != noreg && tmp1 != noreg && tmp2 != noreg, "expecting a register");
- Address in_progress(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_active_offset()));
Address index(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()));
Address buffer(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_buffer_offset()));
// Is marking active?
- if (in_bytes(SATBMarkQueue::byte_width_of_active()) == 4) {
- __ lwu(tmp1, in_progress);
- } else {
- assert(in_bytes(SATBMarkQueue::byte_width_of_active()) == 1, "Assumption");
- __ lbu(tmp1, in_progress);
- }
- __ beqz(tmp1, done);
+ Address gc_state(xthread, in_bytes(ShenandoahThreadLocalData::gc_state_offset()));
+ __ lbu(t1, gc_state);
+ __ test_bit(t1, t1, ShenandoahHeap::MARKING_BITPOS);
+ __ beqz(t1, done);
// Do we need to load the previous value?
if (obj != noreg) {
@@ -300,7 +296,7 @@ void ShenandoahBarrierSetAssembler::load_reference_barrier(MacroAssembler* masm,
assert(!is_narrow, "phantom access cannot be narrow");
target = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_weak);
}
- __ call(target);
+ __ rt_call(target);
__ mv(t0, x10);
__ pop_call_clobbered_registers();
__ mv(x10, t0);
@@ -703,7 +699,7 @@ void ShenandoahBarrierSetAssembler::generate_c1_load_reference_barrier_runtime_s
assert(is_native, "phantom must only be called off-heap");
target = CAST_FROM_FN_PTR(address, ShenandoahRuntime::load_reference_barrier_phantom);
}
- __ call(target);
+ __ rt_call(target);
__ mv(t0, x10);
__ pop_call_clobbered_registers();
__ mv(x10, t0);
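
With this change the pre-barrier derives "is marking active?" from one bit of the thread-local gc_state byte, so a single lbu plus test_bit replaces the earlier load whose width depended on SATBMarkQueue::byte_width_of_active(). Conceptually (bit position is a placeholder; the real constant is ShenandoahHeap::MARKING_BITPOS):

  #include <cstdint>

  const unsigned MARKING_BITPOS = 0;  // placeholder value for illustration

  // lbu t1, gc_state(xthread); test_bit t1, t1, MARKING_BITPOS; beqz t1, done
  bool is_marking_active(uint8_t gc_state) {
    return ((gc_state >> MARKING_BITPOS) & 1u) != 0;
  }
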
diff --git a/src/hotspot/cpu/riscv/globals_riscv.hpp b/src/hotspot/cpu/riscv/globals_riscv.hpp
index e22456cacc603..c39b29f6ee4a7 100644
--- a/src/hotspot/cpu/riscv/globals_riscv.hpp
+++ b/src/hotspot/cpu/riscv/globals_riscv.hpp
@@ -115,6 +115,7 @@ define_pd_global(intx, InlineSmallCode, 1000);
product(bool, UseZtso, false, EXPERIMENTAL, "Assume Ztso memory model") \
product(bool, UseZihintpause, false, EXPERIMENTAL, \
"Use Zihintpause instructions") \
+ product(bool, UseZvbb, false, EXPERIMENTAL, "Use Zvbb instructions") \
product(bool, UseZvfh, false, EXPERIMENTAL, "Use Zvfh instructions") \
product(bool, UseZvkn, false, EXPERIMENTAL, \
"Use Zvkn group extension, Zvkned, Zvknhb, Zvkb, Zvkt") \
diff --git a/src/hotspot/cpu/riscv/jniFastGetField_riscv.cpp b/src/hotspot/cpu/riscv/jniFastGetField_riscv.cpp
index fd1701c8188b8..8423ecad8a3da 100644
--- a/src/hotspot/cpu/riscv/jniFastGetField_riscv.cpp
+++ b/src/hotspot/cpu/riscv/jniFastGetField_riscv.cpp
@@ -177,7 +177,7 @@ address JNI_FastGetField::generate_fast_get_int_field0(BasicType type) {
__ relocate(target.rspec(), [&] {
int32_t offset;
__ la(t0, target.target(), offset);
- __ jalr(x1, t0, offset);
+ __ jalr(t0, offset);
});
__ leave();
__ ret();
diff --git a/src/hotspot/cpu/riscv/jvmciCodeInstaller_riscv.cpp b/src/hotspot/cpu/riscv/jvmciCodeInstaller_riscv.cpp
index a7160bd724128..35bfbb1df8ead 100644
--- a/src/hotspot/cpu/riscv/jvmciCodeInstaller_riscv.cpp
+++ b/src/hotspot/cpu/riscv/jvmciCodeInstaller_riscv.cpp
@@ -42,8 +42,10 @@ jint CodeInstaller::pd_next_offset(NativeInstruction* inst, jint pc_offset, JVMC
return pc_offset + NativeCall::instruction_size;
} else if (inst->is_jump()) {
return pc_offset + NativeJump::instruction_size;
- } else if (inst->is_movptr()) {
- return pc_offset + NativeMovConstReg::movptr_instruction_size;
+ } else if (inst->is_movptr1()) {
+ return pc_offset + NativeMovConstReg::movptr1_instruction_size;
+ } else if (inst->is_movptr2()) {
+ return pc_offset + NativeMovConstReg::movptr2_instruction_size;
} else {
JVMCI_ERROR_0("unsupported type of instruction for call site");
}
diff --git a/src/hotspot/cpu/riscv/macroAssembler_riscv.cpp b/src/hotspot/cpu/riscv/macroAssembler_riscv.cpp
index b450d93680f3b..78ea3af28be04 100644
--- a/src/hotspot/cpu/riscv/macroAssembler_riscv.cpp
+++ b/src/hotspot/cpu/riscv/macroAssembler_riscv.cpp
@@ -339,7 +339,7 @@ void MacroAssembler::call_VM_base(Register oop_result,
relocate(target.rspec(), [&] {
int32_t offset;
la(t0, target.target(), offset);
- jalr(x0, t0, offset);
+ jr(t0, offset);
});
bind(ok);
}
@@ -640,15 +640,17 @@ void MacroAssembler::emit_static_call_stub() {
// Jump to the entry point of the c2i stub.
int32_t offset = 0;
- movptr(t0, 0, offset);
- jalr(x0, t0, offset);
+ movptr(t0, 0, offset, t1); // lui + lui + slli + add
+ jr(t0, offset);
}
void MacroAssembler::call_VM_leaf_base(address entry_point,
int number_of_arguments,
Label *retaddr) {
+ int32_t offset = 0;
push_reg(RegSet::of(t0, xmethod), sp); // push << t0 & xmethod >> to sp
- call(entry_point);
+ mv(t0, entry_point, offset);
+ jalr(t0, offset);
if (retaddr != nullptr) {
bind(*retaddr);
}
@@ -716,33 +718,19 @@ void MacroAssembler::super_call_VM_leaf(address entry_point, Register arg_0, Reg
}
void MacroAssembler::la(Register Rd, const address addr) {
- int64_t offset = addr - pc();
- if (is_valid_32bit_offset(offset)) {
- auipc(Rd, (int32_t)offset + 0x800); //0x800, Note:the 11th sign bit
- addi(Rd, Rd, ((int64_t)offset << 52) >> 52);
- } else {
- movptr(Rd, addr);
- }
+ int32_t offset;
+ la(Rd, addr, offset);
+ addi(Rd, Rd, offset);
}
void MacroAssembler::la(Register Rd, const address addr, int32_t &offset) {
- assert((uintptr_t)addr < (1ull << 48), "bad address");
-
- unsigned long target_address = (uintptr_t)addr;
- unsigned long low_address = (uintptr_t)CodeCache::low_bound();
- unsigned long high_address = (uintptr_t)CodeCache::high_bound();
- long offset_low = target_address - low_address;
- long offset_high = target_address - high_address;
-
- // RISC-V doesn't compute a page-aligned address, in order to partially
- // compensate for the use of *signed* offsets in its base+disp12
- // addressing mode (RISC-V's PC-relative reach remains asymmetric
- // [-(2G + 2K), 2G - 2K).
- if (offset_high >= -((1L << 31) + (1L << 11)) && offset_low < (1L << 31) - (1L << 11)) {
+ if (is_32bit_offset_from_codecache((int64_t)addr)) {
int64_t distance = addr - pc();
+ assert(is_valid_32bit_offset(distance), "Must be");
auipc(Rd, (int32_t)distance + 0x800);
offset = ((int32_t)distance << 20) >> 20;
} else {
+ assert(!CodeCache::contains(addr), "Must be");
movptr(Rd, addr, offset);
}
}
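
The la path works because adding 0x800 before taking auipc's high 20 bits exactly compensates for the sign extension of the low 12-bit addend, so hi + sext12(distance) reconstructs the distance. A quick check of that identity (sketch):

  #include <cassert>
  #include <cstdint>

  void split_pc_relative(int32_t distance, int32_t& hi, int32_t& lo) {
    lo = ((int32_t)((uint32_t)distance << 20)) >> 20;             // sext12
    hi = (int32_t)(((uint32_t)distance + 0x800u) & 0xfffff000u);  // auipc addend
  }

  int main() {
    for (int32_t d : {0, 1, 0x7ff, 0x800, -0x800, -0x801, 123456789, -123456789}) {
      int32_t hi, lo;
      split_pc_relative(d, hi, lo);
      assert(hi + lo == d);
    }
    return 0;
  }
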
@@ -859,88 +847,113 @@ void MacroAssembler::li(Register Rd, int64_t imm) {
}
}
-#define INSN(NAME, REGISTER) \
- void MacroAssembler::NAME(const address dest, Register temp) { \
- assert_cond(dest != nullptr); \
- int64_t distance = dest - pc(); \
- if (is_simm21(distance) && ((distance % 2) == 0)) { \
- Assembler::jal(REGISTER, distance); \
- } else { \
- assert(temp != noreg, "expecting a register"); \
- int32_t offset = 0; \
- movptr(temp, dest, offset); \
- Assembler::jalr(REGISTER, temp, offset); \
- } \
- } \
-
- INSN(j, x0);
- INSN(jal, x1);
-
-#undef INSN
+void MacroAssembler::jump_link(const address dest, Register temp) {
+ assert_cond(dest != nullptr);
+ int64_t distance = dest - pc();
+ if (is_simm21(distance) && ((distance % 2) == 0)) {
+ Assembler::jal(x1, distance);
+ } else {
+ assert(temp != noreg && temp != x0, "expecting a register");
+ int32_t offset = 0;
+ la(temp, dest, offset);
+ jalr(temp, offset);
+ }
+}
-#define INSN(NAME, REGISTER) \
- void MacroAssembler::NAME(const Address &adr, Register temp) { \
- switch (adr.getMode()) { \
- case Address::literal: { \
- relocate(adr.rspec(), [&] { \
- NAME(adr.target(), temp); \
- }); \
- break; \
- } \
- case Address::base_plus_offset: { \
- int32_t offset = ((int32_t)adr.offset() << 20) >> 20; \
- la(temp, Address(adr.base(), adr.offset() - offset)); \
- Assembler::jalr(REGISTER, temp, offset); \
- break; \
- } \
- default: \
- ShouldNotReachHere(); \
- } \
- }
-
- INSN(j, x0);
- INSN(jal, x1);
+void MacroAssembler::jump_link(const Address &adr, Register temp) {
+ switch (adr.getMode()) {
+ case Address::literal: {
+ relocate(adr.rspec(), [&] {
+ jump_link(adr.target(), temp);
+ });
+ break;
+ }
+ case Address::base_plus_offset: {
+ int32_t offset = ((int32_t)adr.offset() << 20) >> 20;
+ la(temp, Address(adr.base(), adr.offset() - offset));
+ jalr(temp, offset);
+ break;
+ }
+ default:
+ ShouldNotReachHere();
+ }
+}
-#undef INSN
+void MacroAssembler::j(const address dest, Register temp) {
+ assert(CodeCache::contains(dest), "Must be");
+ assert_cond(dest != nullptr);
+ int64_t distance = dest - pc();
-#define INSN(NAME) \
- void MacroAssembler::NAME(Register Rd, const address dest, Register temp) { \
- assert_cond(dest != nullptr); \
- int64_t distance = dest - pc(); \
- if (is_simm21(distance) && ((distance % 2) == 0)) { \
- Assembler::NAME(Rd, distance); \
- } else { \
- assert_different_registers(Rd, temp); \
- int32_t offset = 0; \
- movptr(temp, dest, offset); \
- jalr(Rd, temp, offset); \
- } \
- } \
- void MacroAssembler::NAME(Register Rd, Label &L, Register temp) { \
- assert_different_registers(Rd, temp); \
- wrap_label(Rd, L, temp, &MacroAssembler::NAME); \
+ // Compressed instructions can't be patched; if the Label isn't bound yet this jump will need patching later, so keep it incompressible.
+ IncompressibleRegion ir(this);
+ if (is_simm21(distance) && ((distance % 2) == 0)) {
+ Assembler::jal(x0, distance);
+ } else {
+ assert(temp != noreg && temp != x0, "expecting a register");
+ int32_t offset = 0;
+ la(temp, dest, offset);
+ jr(temp, offset);
}
+}
- INSN(jal);
+void MacroAssembler::j(const Address &adr, Register temp) {
+ switch (adr.getMode()) {
+ case Address::literal: {
+ relocate(adr.rspec(), [&] {
+ j(adr.target(), temp);
+ });
+ break;
+ }
+ case Address::base_plus_offset: {
+ int32_t offset = ((int32_t)adr.offset() << 20) >> 20;
+ la(temp, Address(adr.base(), adr.offset() - offset));
+ jr(temp, offset);
+ break;
+ }
+ default:
+ ShouldNotReachHere();
+ }
+}
-#undef INSN
+void MacroAssembler::j(Label &lab, Register temp) {
+ assert_different_registers(x0, temp);
+ if (lab.is_bound()) {
+ MacroAssembler::j(target(lab), temp);
+ } else {
+ lab.add_patch_at(code(), locator());
+ MacroAssembler::j(pc(), temp);
+ }
+}
-#define INSN(NAME, REGISTER) \
- void MacroAssembler::NAME(Label &l, Register temp) { \
- jal(REGISTER, l, temp); \
- } \
+void MacroAssembler::jr(Register Rd, int32_t offset) {
+ assert(Rd != noreg, "expecting a register");
+ Assembler::jalr(x0, Rd, offset);
+}
- INSN(j, x0);
- INSN(jal, x1);
+void MacroAssembler::call(const address dest, Register temp) {
+ assert_cond(dest != nullptr);
+ assert(temp != noreg, "expecting a register");
+ int32_t offset = 0;
+ la(temp, dest, offset);
+ jalr(temp, offset);
+}
-#undef INSN
+void MacroAssembler::jalr(Register Rs, int32_t offset) {
+ assert(Rs != noreg, "expecting a register");
+ Assembler::jalr(x1, Rs, offset);
+}
-void MacroAssembler::wrap_label(Register Rt, Label &L, Register tmp, load_insn_by_temp insn) {
- if (L.is_bound()) {
- (this->*insn)(Rt, target(L), tmp);
+void MacroAssembler::rt_call(address dest, Register tmp) {
+ CodeBlob *cb = CodeCache::find_blob(dest);
+ RuntimeAddress target(dest);
+ if (cb) {
+ far_call(target, tmp);
} else {
- L.add_patch_at(code(), locator());
- (this->*insn)(Rt, pc(), tmp);
+ relocate(target.rspec(), [&] {
+ int32_t offset;
+ la(tmp, target.target(), offset);
+ jalr(tmp, offset);
+ });
}
}
@@ -1412,15 +1425,34 @@ static int patch_offset_in_pc_relative(address branch, int64_t offset) {
return PC_RELATIVE_INSTRUCTION_NUM * NativeInstruction::instruction_size;
}
-static int patch_addr_in_movptr(address branch, address target) {
- const int MOVPTR_INSTRUCTIONS_NUM = 6; // lui + addi + slli + addi + slli + addi/jalr/load
+static int patch_addr_in_movptr1(address branch, address target) {
int32_t lower = ((intptr_t)target << 35) >> 35;
int64_t upper = ((intptr_t)target - lower) >> 29;
Assembler::patch(branch + 0, 31, 12, upper & 0xfffff); // Lui. target[48:29] + target[28] ==> branch[31:12]
Assembler::patch(branch + 4, 31, 20, (lower >> 17) & 0xfff); // Addi. target[28:17] ==> branch[31:20]
Assembler::patch(branch + 12, 31, 20, (lower >> 6) & 0x7ff); // Addi. target[16: 6] ==> branch[31:20]
Assembler::patch(branch + 20, 31, 20, lower & 0x3f); // Addi/Jalr/Load. target[ 5: 0] ==> branch[31:20]
- return MOVPTR_INSTRUCTIONS_NUM * NativeInstruction::instruction_size;
+ return NativeMovConstReg::movptr1_instruction_size;
+}
+
+static int patch_addr_in_movptr2(address instruction_address, address target) {
+ uintptr_t addr = (uintptr_t)target;
+
+ assert(addr < (1ull << 48), "48-bit overflow in address constant");
+ unsigned int upper18 = (addr >> 30ull);
+ int lower30 = (addr & 0x3fffffffu);
+ int low12 = (lower30 << 20) >> 20;
+ int mid18 = ((lower30 - low12) >> 12);
+
+ Assembler::patch(instruction_address + (NativeInstruction::instruction_size * 0), 31, 12, (upper18 & 0xfffff)); // Lui
+ Assembler::patch(instruction_address + (NativeInstruction::instruction_size * 1), 31, 12, (mid18 & 0xfffff)); // Lui
+ // Slli
+ // Add
+ Assembler::patch(instruction_address + (NativeInstruction::instruction_size * 4), 31, 20, low12 & 0xfff); // Addi/Jalr/Load
+
+ assert(MacroAssembler::target_addr_for_insn(instruction_address) == target, "Must be");
+
+ return NativeMovConstReg::movptr2_instruction_size;
}
static int patch_imm_in_li64(address branch, address target) {
@@ -1494,7 +1526,7 @@ static long get_offset_of_pc_relative(address insn_addr) {
return offset;
}
-static address get_target_of_movptr(address insn_addr) {
+static address get_target_of_movptr1(address insn_addr) {
assert_cond(insn_addr != nullptr);
intptr_t target_address = (((int64_t)Assembler::sextract(Assembler::ld_instr(insn_addr), 31, 12)) & 0xfffff) << 29; // Lui.
target_address += ((int64_t)Assembler::sextract(Assembler::ld_instr(insn_addr + 4), 31, 20)) << 17; // Addi.
@@ -1503,6 +1535,17 @@ static address get_target_of_movptr(address insn_addr) {
return (address) target_address;
}
+static address get_target_of_movptr2(address insn_addr) {
+ assert_cond(insn_addr != nullptr);
+ int32_t upper18 = ((Assembler::sextract(Assembler::ld_instr(insn_addr + NativeInstruction::instruction_size * 0), 31, 12)) & 0xfffff); // Lui
+ int32_t mid18 = ((Assembler::sextract(Assembler::ld_instr(insn_addr + NativeInstruction::instruction_size * 1), 31, 12)) & 0xfffff); // Lui
+ // 2 // Slli
+ // 3 // Add
+ int32_t low12 = ((Assembler::sextract(Assembler::ld_instr(insn_addr + NativeInstruction::instruction_size * 4), 31, 20))); // Addi/Jalr/Load.
+ address ret = (address)(((intptr_t)upper18<<30ll) + ((intptr_t)mid18<<12ll) + low12);
+ return ret;
+}
+
static address get_target_of_li64(address insn_addr) {
assert_cond(insn_addr != nullptr);
intptr_t target_address = (((int64_t)Assembler::sextract(Assembler::ld_instr(insn_addr), 31, 12)) & 0xfffff) << 44; // Lui.
@@ -1522,30 +1565,32 @@ address MacroAssembler::get_target_of_li32(address insn_addr) {
// Patch any kind of instruction; there may be several instructions.
// Return the total length (in bytes) of the instructions.
-int MacroAssembler::pd_patch_instruction_size(address branch, address target) {
- assert_cond(branch != nullptr);
- int64_t offset = target - branch;
- if (NativeInstruction::is_jal_at(branch)) { // jal
- return patch_offset_in_jal(branch, offset);
- } else if (NativeInstruction::is_branch_at(branch)) { // beq/bge/bgeu/blt/bltu/bne
- return patch_offset_in_conditional_branch(branch, offset);
- } else if (NativeInstruction::is_pc_relative_at(branch)) { // auipc, addi/jalr/load
- return patch_offset_in_pc_relative(branch, offset);
- } else if (NativeInstruction::is_movptr_at(branch)) { // movptr
- return patch_addr_in_movptr(branch, target);
- } else if (NativeInstruction::is_li64_at(branch)) { // li64
- return patch_imm_in_li64(branch, target);
- } else if (NativeInstruction::is_li32_at(branch)) { // li32
+int MacroAssembler::pd_patch_instruction_size(address instruction_address, address target) {
+ assert_cond(instruction_address != nullptr);
+ int64_t offset = target - instruction_address;
+ if (NativeInstruction::is_jal_at(instruction_address)) { // jal
+ return patch_offset_in_jal(instruction_address, offset);
+ } else if (NativeInstruction::is_branch_at(instruction_address)) { // beq/bge/bgeu/blt/bltu/bne
+ return patch_offset_in_conditional_branch(instruction_address, offset);
+ } else if (NativeInstruction::is_pc_relative_at(instruction_address)) { // auipc, addi/jalr/load
+ return patch_offset_in_pc_relative(instruction_address, offset);
+ } else if (NativeInstruction::is_movptr1_at(instruction_address)) { // movptr1
+ return patch_addr_in_movptr1(instruction_address, target);
+ } else if (NativeInstruction::is_movptr2_at(instruction_address)) { // movptr2
+ return patch_addr_in_movptr2(instruction_address, target);
+ } else if (NativeInstruction::is_li64_at(instruction_address)) { // li64
+ return patch_imm_in_li64(instruction_address, target);
+ } else if (NativeInstruction::is_li32_at(instruction_address)) { // li32
int64_t imm = (intptr_t)target;
- return patch_imm_in_li32(branch, (int32_t)imm);
- } else if (NativeInstruction::is_li16u_at(branch)) {
+ return patch_imm_in_li32(instruction_address, (int32_t)imm);
+ } else if (NativeInstruction::is_li16u_at(instruction_address)) {
int64_t imm = (intptr_t)target;
- return patch_imm_in_li16u(branch, (uint16_t)imm);
+ return patch_imm_in_li16u(instruction_address, (uint16_t)imm);
} else {
#ifdef ASSERT
tty->print_cr("pd_patch_instruction_size: instruction 0x%x at " INTPTR_FORMAT " could not be patched!\n",
- Assembler::ld_instr(branch), p2i(branch));
- Disassembler::decode(branch - 16, branch + 16);
+ Assembler::ld_instr(instruction_address), p2i(instruction_address));
+ Disassembler::decode(instruction_address - 16, instruction_address + 16);
#endif
ShouldNotReachHere();
return -1;
@@ -1561,8 +1606,10 @@ address MacroAssembler::target_addr_for_insn(address insn_addr) {
offset = get_offset_of_conditional_branch(insn_addr);
} else if (NativeInstruction::is_pc_relative_at(insn_addr)) { // auipc, addi/jalr/load
offset = get_offset_of_pc_relative(insn_addr);
- } else if (NativeInstruction::is_movptr_at(insn_addr)) { // movptr
- return get_target_of_movptr(insn_addr);
+ } else if (NativeInstruction::is_movptr1_at(insn_addr)) { // movptr1
+ return get_target_of_movptr1(insn_addr);
+ } else if (NativeInstruction::is_movptr2_at(insn_addr)) { // movptr2
+ return get_target_of_movptr2(insn_addr);
} else if (NativeInstruction::is_li64_at(insn_addr)) { // li64
return get_target_of_li64(insn_addr);
} else if (NativeInstruction::is_li32_at(insn_addr)) { // li32
@@ -1581,9 +1628,12 @@ int MacroAssembler::patch_oop(address insn_addr, address o) {
// Move narrow OOP
uint32_t n = CompressedOops::narrow_oop_value(cast_to_oop(o));
return patch_imm_in_li32(insn_addr, (int32_t)n);
- } else if (NativeInstruction::is_movptr_at(insn_addr)) {
+ } else if (NativeInstruction::is_movptr1_at(insn_addr)) {
// Move wide OOP
- return patch_addr_in_movptr(insn_addr, o);
+ return patch_addr_in_movptr1(insn_addr, o);
+ } else if (NativeInstruction::is_movptr2_at(insn_addr)) {
+ // Move wide OOP
+ return patch_addr_in_movptr2(insn_addr, o);
}
ShouldNotReachHere();
return -1;
@@ -1604,16 +1654,31 @@ void MacroAssembler::reinit_heapbase() {
}
}
-void MacroAssembler::movptr(Register Rd, address addr, int32_t &offset) {
- int64_t imm64 = (int64_t)addr;
+void MacroAssembler::movptr(Register Rd, address addr, Register temp) {
+ int offset = 0;
+ movptr(Rd, addr, offset, temp);
+ addi(Rd, Rd, offset);
+}
+
+void MacroAssembler::movptr(Register Rd, address addr, int32_t &offset, Register temp) {
+ uint64_t uimm64 = (uint64_t)addr;
#ifndef PRODUCT
{
char buffer[64];
- snprintf(buffer, sizeof(buffer), "0x%" PRIx64, imm64);
+ snprintf(buffer, sizeof(buffer), "0x%" PRIx64, uimm64);
block_comment(buffer);
}
#endif
- assert((uintptr_t)imm64 < (1ull << 48), "48-bit overflow in address constant");
+ assert(uimm64 < (1ull << 48), "48-bit overflow in address constant");
+
+ if (temp == noreg) {
+ movptr1(Rd, uimm64, offset);
+ } else {
+ movptr2(Rd, uimm64, offset, temp);
+ }
+}
+
+void MacroAssembler::movptr1(Register Rd, uint64_t imm64, int32_t &offset) {
// Load upper 31 bits
int64_t imm = imm64 >> 17;
int64_t upper = imm, lower = imm;
@@ -1632,6 +1697,23 @@ void MacroAssembler::movptr(Register Rd, address addr, int32_t &offset) {
offset = imm64 & 0x3f;
}
+void MacroAssembler::movptr2(Register Rd, uint64_t addr, int32_t &offset, Register tmp) {
+ assert_different_registers(Rd, tmp, noreg);
+
+ uint32_t upper18 = (addr >> 30ull);
+ int32_t lower30 = (addr & 0x3fffffffu);
+ int32_t low12 = (lower30 << 20) >> 20;
+ int32_t mid18 = ((lower30 - low12) >> 12);
+
+ lui(tmp, upper18 << 12);
+ lui(Rd, mid18 << 12);
+
+ slli(tmp, tmp, 18);
+ add(Rd, Rd, tmp);
+
+ offset = low12;
+}
+
void MacroAssembler::add(Register Rd, Register Rn, int64_t increment, Register temp) {
if (is_simm12(increment)) {
addi(Rd, Rn, increment);
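
movptr2 replaces movptr1's six-instruction lui/addi/slli chain with two independent lui materializations joined by one slli and one add: the 48-bit value is decomposed as upper18 * 2^30 + mid18 * 2^12 + low12, where low12 is sign-extended and folded back in by the trailing addi/jalr/load. A sketch checking that the decomposition round-trips:

  #include <cassert>
  #include <cstdint>

  int main() {
    uint64_t addr = 0x0000123456789abcULL;  // any value below 2^48
    uint32_t upper18 = (uint32_t)(addr >> 30);
    int32_t  lower30 = (int32_t)(addr & 0x3fffffffu);
    int32_t  low12   = ((int32_t)((uint32_t)lower30 << 20)) >> 20;  // sext12
    int32_t  mid18   = (lower30 - low12) >> 12;

    // lui tmp,upper18; lui Rd,mid18; slli tmp,tmp,18; add Rd,Rd,tmp; +low12
    uint64_t rebuilt = ((uint64_t)upper18 << 30) + ((int64_t)mid18 << 12) + low12;
    assert(rebuilt == addr);
    return 0;
  }
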
@@ -2107,6 +2189,7 @@ void MacroAssembler::movoop(Register dst, jobject obj) {
// Move a metadata address into a register.
void MacroAssembler::mov_metadata(Register dst, Metadata* obj) {
+ assert((uintptr_t)obj < (1ull << 48), "48-bit overflow in metadata");
int oop_index;
if (obj == nullptr) {
oop_index = oop_recorder()->allocate_metadata_index(obj);
@@ -3169,7 +3252,6 @@ void MacroAssembler::atomic_cas(
}
void MacroAssembler::far_jump(const Address &entry, Register tmp) {
- assert(ReservedCodeCacheSize < 4*G, "branch out of range");
assert(CodeCache::find_blob(entry.target()) != nullptr,
"destination of far call not found in code cache");
assert(entry.rspec().type() == relocInfo::external_word_type
@@ -3179,12 +3261,11 @@ void MacroAssembler::far_jump(const Address &entry, Register tmp) {
relocate(entry.rspec(), [&] {
int32_t offset;
la(tmp, entry.target(), offset);
- jalr(x0, tmp, offset);
+ jr(tmp, offset);
});
}
void MacroAssembler::far_call(const Address &entry, Register tmp) {
- assert(ReservedCodeCacheSize < 4*G, "branch out of range");
assert(CodeCache::find_blob(entry.target()) != nullptr,
"destination of far call not found in code cache");
assert(entry.rspec().type() == relocInfo::external_word_type
@@ -3194,9 +3275,8 @@ void MacroAssembler::far_call(const Address &entry, Register tmp) {
// We can use auipc + jalr here because we know that the total size of
// the code cache cannot exceed 2Gb.
relocate(entry.rspec(), [&] {
- int32_t offset;
- la(tmp, entry.target(), offset);
- jalr(x1, tmp, offset); // link
+ assert(is_valid_32bit_offset(entry.target() - pc()), "Far call using wrong instructions.");
+ call(entry.target(), tmp);
});
}
@@ -3452,7 +3532,7 @@ void MacroAssembler::reserved_stack_check() {
relocate(target.rspec(), [&] {
int32_t offset;
movptr(t0, target.target(), offset);
- jalr(x0, t0, offset);
+ jr(t0, offset);
});
should_not_reach_here();
@@ -3534,7 +3614,7 @@ address MacroAssembler::trampoline_call(Address entry) {
}
#endif
relocate(entry.rspec(), [&] {
- jal(target);
+ jump_link(target, t0);
});
postcond(pc() != badAddress);
@@ -3544,7 +3624,7 @@ address MacroAssembler::trampoline_call(Address entry) {
address MacroAssembler::ic_call(address entry, jint method_index) {
RelocationHolder rh = virtual_call_Relocation::spec(pc(), method_index);
IncompressibleRegion ir(this); // relocations
- movptr(t1, (address)Universe::non_oop_word());
+ movptr(t1, (address)Universe::non_oop_word(), t0);
assert_cond(entry != nullptr);
return trampoline_call(Address(entry, rh));
}
@@ -3651,8 +3731,8 @@ int MacroAssembler::max_trampoline_stub_size() {
}
int MacroAssembler::static_call_stub_size() {
- // (lui, addi, slli, addi, slli, addi) + (lui, addi, slli, addi, slli) + jalr
- return 12 * NativeInstruction::instruction_size;
+ // (lui, addi, slli, addi, slli, addi) + (lui + lui + slli + add) + jalr
+ return 11 * NativeInstruction::instruction_size;
}
Address MacroAssembler::add_memory_helper(const Address dst, Register tmp) {
@@ -4110,7 +4190,7 @@ void MacroAssembler::multiply_128_x_128_loop(Register y, Register z,
* x12: y
* x13: ylen
* x14: z
- * x15: zlen
+ * x15: tmp0
* x16: tmp1
* x17: tmp2
* x7: tmp3
@@ -4120,10 +4200,10 @@ void MacroAssembler::multiply_128_x_128_loop(Register y, Register z,
* x31: tmp7
*/
void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Register ylen,
- Register z, Register zlen,
+ Register z, Register tmp0,
Register tmp1, Register tmp2, Register tmp3, Register tmp4,
Register tmp5, Register tmp6, Register product_hi) {
- assert_different_registers(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6);
+ assert_different_registers(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6);
const Register idx = tmp1;
const Register kdx = tmp2;
@@ -4132,11 +4212,11 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
const Register y_idx = tmp4;
const Register carry = tmp5;
const Register product = xlen;
- const Register x_xstart = zlen; // reuse register
+ const Register x_xstart = tmp0;
- mv(idx, ylen); // idx = ylen;
- mv(kdx, zlen); // kdx = xlen+ylen;
- mv(carry, zr); // carry = 0;
+ mv(idx, ylen); // idx = ylen;
+ addw(kdx, xlen, ylen); // kdx = xlen+ylen;
+ mv(carry, zr); // carry = 0;
Label L_multiply_64_x_64_loop, L_done;
@@ -4373,7 +4453,7 @@ address MacroAssembler::zero_words(Register ptr, Register cnt) {
return nullptr;
}
} else {
- jal(zero_blocks);
+ jump_link(zero_blocks, t0);
}
}
bind(around);
@@ -5018,20 +5098,6 @@ void MacroAssembler::double_move(VMRegPair src, VMRegPair dst, Register tmp) {
}
}
-void MacroAssembler::rt_call(address dest, Register tmp) {
- CodeBlob *cb = CodeCache::find_blob(dest);
- RuntimeAddress target(dest);
- if (cb) {
- far_call(target, tmp);
- } else {
- relocate(target.rspec(), [&] {
- int32_t offset;
- movptr(tmp, target.target(), offset);
- jalr(x1, tmp, offset);
- });
- }
-}
-
void MacroAssembler::test_bit(Register Rd, Register Rs, uint32_t bit_pos) {
assert(bit_pos < 64, "invalid bit range");
if (UseZbs) {
diff --git a/src/hotspot/cpu/riscv/macroAssembler_riscv.hpp b/src/hotspot/cpu/riscv/macroAssembler_riscv.hpp
index 7c77edd8711ce..e5800b0aa916d 100644
--- a/src/hotspot/cpu/riscv/macroAssembler_riscv.hpp
+++ b/src/hotspot/cpu/riscv/macroAssembler_riscv.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, 2020, Red Hat Inc. All rights reserved.
* Copyright (c) 2020, 2023, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -592,14 +592,40 @@ class MacroAssembler: public Assembler {
void bltz(Register Rs, const address dest);
void bgtz(Register Rs, const address dest);
- void j(Label &l, Register temp = t0);
+ private:
+ void jump_link(const address dest, Register temp);
+ void jump_link(const Address &adr, Register temp);
+ public:
+ // We try to follow the RISC-V asm mnemonics.
+ // But as we don't lay out a reachable GOT,
+ // we often need to resort to movptr, li <48imm>.
+ // https://github.com/riscv-non-isa/riscv-asm-manual/blob/master/riscv-asm.md
+
+ // jump: jal x0, offset
+ // For long reach uses temp register for:
+ // la + jr
void j(const address dest, Register temp = t0);
void j(const Address &adr, Register temp = t0);
- void jal(Label &l, Register temp = t0);
- void jal(const address dest, Register temp = t0);
- void jal(const Address &adr, Register temp = t0);
- void jal(Register Rd, Label &L, Register temp = t0);
- void jal(Register Rd, const address dest, Register temp = t0);
+ void j(Label &l, Register temp = t0);
+
+ // jump register: jalr x0, offset(rs)
+ void jr(Register Rd, int32_t offset = 0);
+
+ // call: la + jalr x1
+ void call(const address dest, Register temp = t0);
+
+ // jalr: jalr x1, offset(rs)
+ void jalr(Register Rs, int32_t offset = 0);
+
+ // Emit a runtime call. Only invalidates the tmp register, which
+ // is used to keep the entry address for jalr/movptr.
+ // Uses call() for intra-code-cache targets, else movptr + jalr.
+ void rt_call(address dest, Register tmp = t0);
+
+ // ret: jalr x0, 0(x1)
+ inline void ret() {
+ Assembler::jalr(x0, x1, 0);
+ }
//label
void beqz(Register Rs, Label &l, bool is_far = false);
@@ -689,6 +715,14 @@ class MacroAssembler: public Assembler {
return x < (twoG - twoK) && x >= (-twoG - twoK);
}
+ // Ensure that the auipc can reach the destination at x from anywhere within
+ // the code cache, so that if the code is relocated we know it will still reach.
+ bool is_32bit_offset_from_codecache(int64_t x) {
+ int64_t low = (int64_t)CodeCache::low_bound();
+ int64_t high = (int64_t)CodeCache::high_bound();
+ return is_valid_32bit_offset(x - low) && is_valid_32bit_offset(x - high);
+ }
+
public:
void push_reg(Register Rs);
void pop_reg(Register Rd);
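The two-endpoint check above is enough because dest - pc is monotone in pc and the accepted auipc offset range is contiguous. A hedged standalone sketch with made-up addresses, reusing the is_valid_32bit_offset logic shown in this hunk:

```cpp
#include <cassert>
#include <cstdint>

// Mirrors is_valid_32bit_offset above: auipc + I-type immediate reach.
static bool is_valid_32bit_offset(int64_t x) {
  const int64_t twoG = (int64_t)1 << 31;
  const int64_t twoK = (int64_t)1 << 11;
  return x < (twoG - twoK) && x >= (-twoG - twoK);
}

int main() {
  const int64_t low  = 0x100000000;  // hypothetical CodeCache::low_bound()
  const int64_t high = 0x140000000;  // hypothetical CodeCache::high_bound()
  const int64_t dest = 0x120000000;
  bool reachable = is_valid_32bit_offset(dest - low) &&
                   is_valid_32bit_offset(dest - high);
  for (int64_t pc = low; pc <= high; pc += 0x1000000) {
    // Endpoint validity implies validity from every pc in between.
    assert(!reachable || is_valid_32bit_offset(dest - pc));
  }
  return 0;
}
```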
@@ -733,14 +767,13 @@ class MacroAssembler: public Assembler {
typedef void (MacroAssembler::* compare_and_branch_insn)(Register Rs1, Register Rs2, const address dest);
typedef void (MacroAssembler::* compare_and_branch_label_insn)(Register Rs1, Register Rs2, Label &L, bool is_far);
typedef void (MacroAssembler::* jal_jalr_insn)(Register Rt, address dest);
- typedef void (MacroAssembler::* load_insn_by_temp)(Register Rt, address dest, Register temp);
- void wrap_label(Register r, Label &L, Register t, load_insn_by_temp insn);
void wrap_label(Register r, Label &L, jal_jalr_insn insn);
void wrap_label(Register r1, Register r2, Label &L,
compare_and_branch_insn insn,
compare_and_branch_label_insn neg_insn, bool is_far = false);
+ // la will use movptr instead of the GOT when the target is out of auipc reach.
void la(Register Rd, Label &label);
void la(Register Rd, const address addr);
void la(Register Rd, const address addr, int32_t &offset);
@@ -771,17 +804,16 @@ class MacroAssembler: public Assembler {
}
}
- void movptr(Register Rd, address addr, int32_t &offset);
-
- void movptr(Register Rd, address addr) {
- int offset = 0;
- movptr(Rd, addr, offset);
- addi(Rd, Rd, offset);
- }
-
- inline void movptr(Register Rd, uintptr_t imm64) {
- movptr(Rd, (address)imm64);
- }
+ // Generates a load of a 48-bit constant which can be
+ // patched to any other 48-bit constant, i.e. an address.
+ // In the common case, supply an additional temp register
+ // to shorten the instruction sequence.
+ void movptr(Register Rd, address addr, Register tmp = noreg);
+ void movptr(Register Rd, address addr, int32_t &offset, Register tmp = noreg);
+ private:
+ void movptr1(Register Rd, uintptr_t addr, int32_t &offset);
+ void movptr2(Register Rd, uintptr_t addr, int32_t &offset, Register tmp);
+ public:
// arith
void add (Register Rd, Register Rn, int64_t increment, Register temp = t0);
@@ -1254,7 +1286,7 @@ class MacroAssembler: public Assembler {
Register tmp, Register tmp3, Register tmp4,
Register tmp6, Register product_hi);
void multiply_to_len(Register x, Register xlen, Register y, Register ylen,
- Register z, Register zlen,
+ Register z, Register tmp0,
Register tmp1, Register tmp2, Register tmp3, Register tmp4,
Register tmp5, Register tmp6, Register product_hi);
#endif
@@ -1469,21 +1501,6 @@ class MacroAssembler: public Assembler {
VMRegPair dst,
bool is_receiver,
int* receiver_offset);
- // Emit a runtime call. Only invalidates the tmp register which
- // is used to keep the entry address for jalr/movptr.
- void rt_call(address dest, Register tmp = t0);
-
- void call(const address dest, Register temp = t0) {
- assert_cond(dest != nullptr);
- assert(temp != noreg, "expecting a register");
- int32_t offset = 0;
- mv(temp, dest, offset);
- jalr(x1, temp, offset);
- }
-
- inline void ret() {
- jalr(x0, x1, 0);
- }
#ifdef ASSERT
// Template short-hand support to clean-up after a failed call to trampoline
diff --git a/src/hotspot/cpu/riscv/matcher_riscv.hpp b/src/hotspot/cpu/riscv/matcher_riscv.hpp
index 32665f5922cf5..6e2b97b820214 100644
--- a/src/hotspot/cpu/riscv/matcher_riscv.hpp
+++ b/src/hotspot/cpu/riscv/matcher_riscv.hpp
@@ -128,8 +128,8 @@
static const bool int_in_long = true;
// Does the CPU supports vector variable shift instructions?
- static constexpr bool supports_vector_variable_shifts(void) {
- return false;
+ static bool supports_vector_variable_shifts(void) {
+ return UseRVV;
}
// Does target support predicated operation emulation.
@@ -138,18 +138,18 @@
}
// Does the CPU supports vector variable rotate instructions?
- static constexpr bool supports_vector_variable_rotates(void) {
- return false;
+ static bool supports_vector_variable_rotates(void) {
+ return UseZvbb;
}
// Does the CPU supports vector constant rotate instructions?
- static constexpr bool supports_vector_constant_rotates(int shift) {
- return false;
+ static bool supports_vector_constant_rotates(int shift) {
+ return UseZvbb;
}
// Does the CPU supports vector unsigned comparison instructions?
- static constexpr bool supports_vector_comparison_unsigned(int vlen, BasicType bt) {
- return false;
+ static bool supports_vector_comparison_unsigned(int vlen, BasicType bt) {
+ return UseRVV;
}
// Some microarchitectures have mask registers used on vectors
diff --git a/src/hotspot/cpu/riscv/nativeInst_riscv.cpp b/src/hotspot/cpu/riscv/nativeInst_riscv.cpp
index c4048f66e0d3e..dd54329e521d5 100644
--- a/src/hotspot/cpu/riscv/nativeInst_riscv.cpp
+++ b/src/hotspot/cpu/riscv/nativeInst_riscv.cpp
@@ -84,7 +84,7 @@ bool NativeInstruction::is_load_pc_relative_at(address instr) {
check_load_pc_relative_data_dependency(instr);
}
-bool NativeInstruction::is_movptr_at(address instr) {
+bool NativeInstruction::is_movptr1_at(address instr) {
return is_lui_at(instr) && // Lui
is_addi_at(instr + instruction_size) && // Addi
is_slli_shift_at(instr + instruction_size * 2, 11) && // Slli Rd, Rs, 11
@@ -93,7 +93,18 @@ bool NativeInstruction::is_movptr_at(address instr) {
(is_addi_at(instr + instruction_size * 5) ||
is_jalr_at(instr + instruction_size * 5) ||
is_load_at(instr + instruction_size * 5)) && // Addi/Jalr/Load
- check_movptr_data_dependency(instr);
+ check_movptr1_data_dependency(instr);
+}
+
+bool NativeInstruction::is_movptr2_at(address instr) {
+ return is_lui_at(instr) && // lui
+ is_lui_at(instr + instruction_size) && // lui
+ is_slli_shift_at(instr + instruction_size * 2, 18) && // slli Rd, Rs, 18
+ is_add_at(instr + instruction_size * 3) &&
+ (is_addi_at(instr + instruction_size * 4) ||
+ is_jalr_at(instr + instruction_size * 4) ||
+ is_load_at(instr + instruction_size * 4)) && // Addi/Jalr/Load
+ check_movptr2_data_dependency(instr);
}
bool NativeInstruction::is_li16u_at(address instr) {
@@ -201,10 +212,11 @@ void NativeCall::insert(address code_pos, address entry) { Unimplemented(); }
//-------------------------------------------------------------------
void NativeMovConstReg::verify() {
- if (!(nativeInstruction_at(instruction_address())->is_movptr() ||
- is_auipc_at(instruction_address()))) {
- fatal("should be MOVPTR or AUIPC");
+ NativeInstruction* ni = nativeInstruction_at(instruction_address());
+ if (ni->is_movptr() || ni->is_auipc()) {
+ return;
}
+ fatal("should be MOVPTR or AUIPC");
}
intptr_t NativeMovConstReg::data() const {
@@ -223,7 +235,7 @@ void NativeMovConstReg::set_data(intptr_t x) {
} else {
// Store x into the instruction stream.
MacroAssembler::pd_patch_instruction_size(instruction_address(), (address)x);
- ICache::invalidate_range(instruction_address(), movptr_instruction_size);
+ ICache::invalidate_range(instruction_address(), movptr1_instruction_size /* > movptr2_instruction_size */ );
}
// Find and replace the oop/metadata corresponding to this
@@ -393,14 +405,16 @@ void NativeJump::patch_verified_entry(address entry, address verified_entry, add
ICache::invalidate_range(verified_entry, instruction_size);
}
+//-------------------------------------------------------------------
+
void NativeGeneralJump::insert_unconditional(address code_pos, address entry) {
CodeBuffer cb(code_pos, instruction_size);
MacroAssembler a(&cb);
Assembler::IncompressibleRegion ir(&a); // Fixed length: see NativeGeneralJump::get_instruction_size()
int32_t offset = 0;
- a.movptr(t0, entry, offset); // lui, addi, slli, addi, slli
- a.jalr(x0, t0, offset); // jalr
+ a.movptr(t0, entry, offset, t1); // lui, lui, slli, add
+ a.jr(t0, offset); // jalr
ICache::invalidate_range(code_pos, instruction_size);
}
@@ -410,6 +424,7 @@ void NativeGeneralJump::replace_mt_safe(address instr_addr, address code_buffer)
ShouldNotCallThis();
}
+//-------------------------------------------------------------------
address NativeCallTrampolineStub::destination(nmethod *nm) const {
return ptr_at(data_offset);
diff --git a/src/hotspot/cpu/riscv/nativeInst_riscv.hpp b/src/hotspot/cpu/riscv/nativeInst_riscv.hpp
index 48bbb2b3b181d..dcdf525aafb9e 100644
--- a/src/hotspot/cpu/riscv/nativeInst_riscv.hpp
+++ b/src/hotspot/cpu/riscv/nativeInst_riscv.hpp
@@ -64,7 +64,11 @@ class NativeInstruction {
}
bool is_jal() const { return is_jal_at(addr_at(0)); }
- bool is_movptr() const { return is_movptr_at(addr_at(0)); }
+ bool is_movptr() const { return is_movptr1_at(addr_at(0)) ||
+ is_movptr2_at(addr_at(0)); }
+ bool is_movptr1() const { return is_movptr1_at(addr_at(0)); }
+ bool is_movptr2() const { return is_movptr2_at(addr_at(0)); }
+ bool is_auipc() const { return is_auipc_at(addr_at(0)); }
bool is_call() const { return is_call_at(addr_at(0)); }
bool is_jump() const { return is_jump_at(addr_at(0)); }
@@ -76,9 +80,10 @@ class NativeInstruction {
static bool is_float_load_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0000111; }
static bool is_auipc_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0010111; }
static bool is_jump_at(address instr) { assert_cond(instr != nullptr); return is_branch_at(instr) || is_jal_at(instr) || is_jalr_at(instr); }
+ static bool is_add_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0110011 && extract_funct3(instr) == 0b000; }
static bool is_addi_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0010011 && extract_funct3(instr) == 0b000; }
static bool is_addiw_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0011011 && extract_funct3(instr) == 0b000; }
- static bool is_addiw_to_zr_at(address instr) { assert_cond(instr != nullptr); return is_addiw_at(instr) && extract_rd(instr) == zr; }
+ static bool is_addiw_to_zr_at(address instr){ assert_cond(instr != nullptr); return is_addiw_at(instr) && extract_rd(instr) == zr; }
static bool is_lui_at(address instr) { assert_cond(instr != nullptr); return extract_opcode(instr) == 0b0110111; }
static bool is_lui_to_zr_at(address instr) { assert_cond(instr != nullptr); return is_lui_at(instr) && extract_rd(instr) == zr; }
@@ -109,7 +114,7 @@ class NativeInstruction {
// addi
// slli
// addi/jalr/load
- static bool check_movptr_data_dependency(address instr) {
+ static bool check_movptr1_data_dependency(address instr) {
address lui = instr;
address addi1 = lui + instruction_size;
address slli1 = addi1 + instruction_size;
@@ -127,6 +132,26 @@ class NativeInstruction {
extract_rs1(last_instr) == extract_rd(slli2);
}
+ // the instruction sequence of movptr2 is as below:
+ // lui
+ // lui
+ // slli
+ // add
+ // addi/jalr/load
+ static bool check_movptr2_data_dependency(address instr) {
+ address lui1 = instr;
+ address lui2 = lui1 + instruction_size;
+ address slli = lui2 + instruction_size;
+ address add = slli + instruction_size;
+ address last_instr = add + instruction_size;
+ return extract_rd(add) == extract_rd(lui2) &&
+ extract_rs1(add) == extract_rd(lui2) &&
+ extract_rs2(add) == extract_rd(slli) &&
+ extract_rs1(slli) == extract_rd(lui1) &&
+ extract_rd(slli) == extract_rd(lui1) &&
+ extract_rs1(last_instr) == extract_rd(add);
+ }
+
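For intuition, the dependency check above implies the following field split for a 48-bit constant: bits 47:30 go through the first lui and the slli-by-18, bits 29:12 through the second lui, and bits 11:0 ride in the trailing addi/jalr/load. A hedged arithmetic sketch that ignores the sign-extension compensation the real encoder must perform:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint64_t addr    = 0x00007f1234567123ull;    // any 48-bit constant
  uint64_t upper18 = (addr >> 30) & 0x3ffff;   // lui tmp, ...; slli tmp, tmp, 18
  uint64_t mid18   = (addr >> 12) & 0x3ffff;   // lui Rd, ...
  uint64_t low12   =  addr        & 0xfff;     // addi/jalr/load offset
  // add Rd, Rd, tmp combines the two lui results before the final addi:
  uint64_t rebuilt = (upper18 << 30) + (mid18 << 12) + low12;
  assert(rebuilt == addr);
  return 0;
}
```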
// the instruction sequence of li64 is as below:
// lui
// addi
@@ -204,7 +229,8 @@ class NativeInstruction {
extract_rs1(load) == extract_rd(load);
}
- static bool is_movptr_at(address instr);
+ static bool is_movptr1_at(address instr);
+ static bool is_movptr2_at(address instr);
static bool is_li16u_at(address instr);
static bool is_li32_at(address instr);
static bool is_li64_at(address instr);
@@ -351,26 +377,33 @@ inline NativeCall* nativeCall_before(address return_address) {
class NativeMovConstReg: public NativeInstruction {
public:
enum RISCV_specific_constants {
- movptr_instruction_size = 6 * NativeInstruction::instruction_size, // lui, addi, slli, addi, slli, addi. See movptr().
- load_pc_relative_instruction_size = 2 * NativeInstruction::instruction_size, // auipc, ld
- instruction_offset = 0,
- displacement_offset = 0
+ movptr1_instruction_size = 6 * NativeInstruction::instruction_size, // lui, addi, slli, addi, slli, addi. See movptr1().
+ movptr2_instruction_size = 5 * NativeInstruction::instruction_size, // lui, lui, slli, add, addi. See movptr2().
+ load_pc_relative_instruction_size = 2 * NativeInstruction::instruction_size // auipc, ld
};
- address instruction_address() const { return addr_at(instruction_offset); }
+ address instruction_address() const { return addr_at(0); }
address next_instruction_address() const {
// if the instruction at 5 * instruction_size is addi,
// it means a lui + addi + slli + addi + slli + addi instruction sequence,
// and the next instruction address should be addr_at(6 * instruction_size).
// However, when the instruction at 5 * instruction_size isn't addi,
// the next instruction address should be addr_at(5 * instruction_size)
- if (nativeInstruction_at(instruction_address())->is_movptr()) {
- if (is_addi_at(addr_at(movptr_instruction_size - NativeInstruction::instruction_size))) {
+ if (is_movptr1_at(instruction_address())) {
+ if (is_addi_at(addr_at(movptr1_instruction_size - NativeInstruction::instruction_size))) {
// Assume: lui, addi, slli, addi, slli, addi
- return addr_at(movptr_instruction_size);
+ return addr_at(movptr1_instruction_size);
} else {
// Assume: lui, addi, slli, addi, slli
- return addr_at(movptr_instruction_size - NativeInstruction::instruction_size);
+ return addr_at(movptr1_instruction_size - NativeInstruction::instruction_size);
+ }
+ } else if (is_movptr2_at(instruction_address())) {
+ if (is_addi_at(addr_at(movptr2_instruction_size - NativeInstruction::instruction_size))) {
+ // Assume: lui, lui, slli, add, addi
+ return addr_at(movptr2_instruction_size);
+ } else {
+ // Assume: lui, lui, slli, add
+ return addr_at(movptr2_instruction_size - NativeInstruction::instruction_size);
}
} else if (is_load_pc_relative_at(instruction_address())) {
// Assume: auipc, ld
@@ -383,12 +416,6 @@ class NativeMovConstReg: public NativeInstruction {
intptr_t data() const;
void set_data(intptr_t x);
- void flush() {
- if (!maybe_cpool_ref(instruction_address())) {
- ICache::invalidate_range(instruction_address(), movptr_instruction_size);
- }
- }
-
void verify();
void print();
@@ -399,14 +426,14 @@ class NativeMovConstReg: public NativeInstruction {
inline NativeMovConstReg* nativeMovConstReg_at(address addr) {
assert_cond(addr != nullptr);
- NativeMovConstReg* test = (NativeMovConstReg*)(addr - NativeMovConstReg::instruction_offset);
+ NativeMovConstReg* test = (NativeMovConstReg*)(addr);
DEBUG_ONLY(test->verify());
return test;
}
inline NativeMovConstReg* nativeMovConstReg_before(address addr) {
assert_cond(addr != nullptr);
- NativeMovConstReg* test = (NativeMovConstReg*)(addr - NativeMovConstReg::instruction_size - NativeMovConstReg::instruction_offset);
+ NativeMovConstReg* test = (NativeMovConstReg*)(addr - NativeMovConstReg::instruction_size);
DEBUG_ONLY(test->verify());
return test;
}
@@ -484,10 +511,7 @@ inline NativeJump* nativeJump_at(address addr) {
class NativeGeneralJump: public NativeJump {
public:
enum RISCV_specific_constants {
- instruction_size = 6 * NativeInstruction::instruction_size, // lui, addi, slli, addi, slli, jalr
- instruction_offset = 0,
- data_offset = 0,
- next_instruction_offset = 6 * NativeInstruction::instruction_size // lui, addi, slli, addi, slli, jalr
+ instruction_size = 5 * NativeInstruction::instruction_size, // lui, lui, slli, add, jalr
};
address jump_destination() const;
diff --git a/src/hotspot/cpu/riscv/relocInfo_riscv.cpp b/src/hotspot/cpu/riscv/relocInfo_riscv.cpp
index 0e50d0b0796ca..b3cdb93a979d8 100644
--- a/src/hotspot/cpu/riscv/relocInfo_riscv.cpp
+++ b/src/hotspot/cpu/riscv/relocInfo_riscv.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1998, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1998, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2014, Red Hat Inc. All rights reserved.
* Copyright (c) 2020, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
@@ -31,7 +31,7 @@
#include "oops/oop.inline.hpp"
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
+void Relocation::pd_set_data_value(address x, bool verify_only) {
if (verify_only) {
return;
}
diff --git a/src/hotspot/cpu/riscv/riscv.ad b/src/hotspot/cpu/riscv/riscv.ad
index f9d66f158fb2b..8b28ec88efddf 100644
--- a/src/hotspot/cpu/riscv/riscv.ad
+++ b/src/hotspot/cpu/riscv/riscv.ad
@@ -649,10 +649,12 @@ reg_class non_allocatable_reg(
R23, R23_H // java thread
);
+// Class for all non-special integer registers
reg_class no_special_reg32 %{
return _NO_SPECIAL_REG32_mask;
%}
+// Class for all non-special long integer registers
reg_class no_special_reg %{
return _NO_SPECIAL_REG_mask;
%}
@@ -661,10 +663,16 @@ reg_class ptr_reg %{
return _PTR_REG_mask;
%}
+// Class for all non-special pointer registers
reg_class no_special_ptr_reg %{
return _NO_SPECIAL_PTR_REG_mask;
%}
+// Class for all non-special pointer registers (excluding fp)
+reg_class no_special_no_fp_ptr_reg %{
+ return _NO_SPECIAL_NO_FP_PTR_REG_mask;
+%}
+
// Class for 64 bit register r10
reg_class r10_reg(
R10, R10_H
@@ -1037,6 +1045,7 @@ extern RegMask _PTR_REG_mask;
extern RegMask _NO_SPECIAL_REG32_mask;
extern RegMask _NO_SPECIAL_REG_mask;
extern RegMask _NO_SPECIAL_PTR_REG_mask;
+extern RegMask _NO_SPECIAL_NO_FP_PTR_REG_mask;
class CallStubImpl {
@@ -1099,6 +1108,7 @@ RegMask _PTR_REG_mask;
RegMask _NO_SPECIAL_REG32_mask;
RegMask _NO_SPECIAL_REG_mask;
RegMask _NO_SPECIAL_PTR_REG_mask;
+RegMask _NO_SPECIAL_NO_FP_PTR_REG_mask;
void reg_mask_init() {
@@ -1133,6 +1143,9 @@ void reg_mask_init() {
_NO_SPECIAL_REG_mask.Remove(OptoReg::as_OptoReg(x8->as_VMReg()));
_NO_SPECIAL_PTR_REG_mask.Remove(OptoReg::as_OptoReg(x8->as_VMReg()));
}
+
+ _NO_SPECIAL_NO_FP_PTR_REG_mask = _NO_SPECIAL_PTR_REG_mask;
+ _NO_SPECIAL_NO_FP_PTR_REG_mask.Remove(OptoReg::as_OptoReg(x8->as_VMReg()));
}
void PhaseOutput::pd_perform_mach_node_analysis() {
@@ -1231,7 +1244,7 @@ int MachCallStaticJavaNode::ret_addr_offset()
int MachCallDynamicJavaNode::ret_addr_offset()
{
- return 7 * NativeInstruction::instruction_size; // movptr, jal
+ return NativeMovConstReg::movptr2_instruction_size + NativeInstruction::instruction_size; // movptr2, jal
}
int MachCallRuntimeNode::ret_addr_offset() {
@@ -1272,12 +1285,11 @@ int CallStaticJavaDirectNode::compute_padding(int current_offset) const
// ensure that it does not span a cache line so that it can be patched.
int CallDynamicJavaDirectNode::compute_padding(int current_offset) const
{
- // skip the movptr in MacroAssembler::ic_call():
- // lui + addi + slli + addi + slli + addi
- // Though movptr() has already 4-byte aligned with or without RVC,
+ // skip the movptr2 in MacroAssembler::ic_call():
+ // lui, lui, slli, add, addi
+ // Though movptr2() is already 4-byte aligned with or without RVC,
// we need to guard against further changes by explicitly calculating the size.
- const int movptr_size = 6 * NativeInstruction::instruction_size;
- current_offset += movptr_size;
+ current_offset += NativeMovConstReg::movptr2_instruction_size;
// to make sure the address of the jal is 4-byte aligned.
return align_up(current_offset, alignment_required()) - current_offset;
}
@@ -2137,11 +2149,44 @@ bool size_fits_all_mem_uses(AddPNode* addp, int shift) {
return true;
}
+// Binary src (Replicate scalar/immediate)
+static bool is_vector_scalar_bitwise_pattern(Node* n, Node* m) {
+ if (n == nullptr || m == nullptr) {
+ return false;
+ }
+
+ if (m->Opcode() != Op_Replicate) {
+ return false;
+ }
+
+ switch (n->Opcode()) {
+ case Op_AndV:
+ case Op_OrV:
+ case Op_XorV: {
+ return true;
+ }
+ default:
+ return false;
+ }
+}
+
+// (XorV src (Replicate m1))
+// (XorVMask src (MaskAll m1))
+static bool is_vector_bitwise_not_pattern(Node* n, Node* m) {
+ if (n != nullptr && m != nullptr) {
+ return (n->Opcode() == Op_XorV || n->Opcode() == Op_XorVMask) &&
+ VectorNode::is_all_ones_vector(m);
+ }
+ return false;
+}
+
// Should the Matcher clone input 'm' of node 'n'?
bool Matcher::pd_clone_node(Node* n, Node* m, Matcher::MStack& mstack) {
assert_cond(m != nullptr);
- if (is_vshift_con_pattern(n, m)) { // ShiftV src (ShiftCntV con)
- mstack.push(m, Visit); // m = ShiftCntV
+ if (is_vshift_con_pattern(n, m) || // ShiftV src (ShiftCntV con)
+ is_vector_bitwise_not_pattern(n, m) ||
+ is_vector_scalar_bitwise_pattern(n, m)) {
+ mstack.push(m, Visit);
return true;
}
return false;
@@ -3175,6 +3220,18 @@ operand iRegPNoSp()
interface(REG_INTER);
%}
+// This operand is not allowed to use fp even if
+// fp is not used to hold the frame pointer.
+operand iRegPNoSpNoFp()
+%{
+ constraint(ALLOC_IN_RC(no_special_no_fp_ptr_reg));
+ match(RegP);
+ match(iRegPNoSp);
+ op_cost(0);
+ format %{ %}
+ interface(REG_INTER);
+%}
+
operand iRegP_R10()
%{
constraint(ALLOC_IN_RC(r10_reg));
@@ -9956,7 +10013,7 @@ instruct CallDynamicJavaDirect(method meth, rFlagsReg cr)
effect(USE meth, KILL cr);
- ins_cost(BRANCH_COST + ALU_COST * 6);
+ ins_cost(BRANCH_COST + ALU_COST * 5);
format %{ "CALL,dynamic $meth\t#@CallDynamicJavaDirect" %}
@@ -10506,7 +10563,9 @@ instruct cmpFastUnlockLightweight(rFlagsReg cr, iRegP object, iRegP_R10 box, iRe
// Also known as an 'interprocedural jump'.
// Target of jump will eventually return to caller.
// TailJump below removes the return address.
-instruct TailCalljmpInd(iRegPNoSp jump_target, inline_cache_RegP method_oop)
+// Don't use fp for 'jump_target': the MachEpilogNode emitted just above
+// the TailCall has already reset fp to the caller's state.
+instruct TailCalljmpInd(iRegPNoSpNoFp jump_target, inline_cache_RegP method_oop)
%{
match(TailCall jump_target method_oop);
@@ -10519,7 +10578,7 @@ instruct TailCalljmpInd(iRegPNoSp jump_target, inline_cache_RegP method_oop)
ins_pipe(pipe_class_call);
%}
-instruct TailjmpInd(iRegPNoSp jump_target, iRegP_R10 ex_oop)
+instruct TailjmpInd(iRegPNoSpNoFp jump_target, iRegP_R10 ex_oop)
%{
match(TailJump jump_target ex_oop);
diff --git a/src/hotspot/cpu/riscv/riscv_v.ad b/src/hotspot/cpu/riscv/riscv_v.ad
index 7869d6156e590..ae25263ef5ba8 100644
--- a/src/hotspot/cpu/riscv/riscv_v.ad
+++ b/src/hotspot/cpu/riscv/riscv_v.ad
@@ -73,6 +73,14 @@ source %{
return false;
}
break;
+ case Op_CountTrailingZerosV:
+ case Op_CountLeadingZerosV:
+ case Op_PopCountVL:
+ case Op_PopCountVI:
+ case Op_ReverseBytesV:
+ case Op_RotateLeftV:
+ case Op_RotateRightV:
+ return UseZvbb;
case Op_LoadVectorGather:
case Op_LoadVectorGatherMasked:
if (is_subword_type(bt)) {
@@ -477,6 +485,130 @@ instruct vand_masked(vReg dst_src1, vReg src2, vRegMask_V0 v0) %{
ins_pipe(pipe_slow);
%}
+// vector-immediate and (unpredicated)
+
+instruct vand_immI(vReg dst_src, immI5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (AndV dst_src (Replicate con)));
+ format %{ "vand_immI $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vand_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vand_immL(vReg dst_src, immL5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (AndV dst_src (Replicate con)));
+ format %{ "vand_immL $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vand_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar and (unpredicated)
+
+instruct vand_regI(vReg dst_src, iRegIorL2I src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (AndV dst_src (Replicate src)));
+ format %{ "vand_regI $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vand_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vand_regL(vReg dst_src, iRegL src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (AndV dst_src (Replicate src)));
+ format %{ "vand_regL $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vand_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-immediate and (predicated)
+
+instruct vand_immI_masked(vReg dst_src, immI5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (AndV (Binary dst_src (Replicate con)) v0));
+ format %{ "vand_immI_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vand_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vand_immL_masked(vReg dst_src, immL5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (AndV (Binary dst_src (Replicate con)) v0));
+ format %{ "vand_immL_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vand_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar and (predicated)
+
+instruct vand_regI_masked(vReg dst_src, iRegIorL2I src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (AndV (Binary dst_src (Replicate src)) v0));
+ format %{ "vand_regI_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vand_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vand_regL_masked(vReg dst_src, iRegL src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (AndV (Binary dst_src (Replicate src)) v0));
+ format %{ "vand_regL_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vand_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
// vector or
instruct vor(vReg dst, vReg src1, vReg src2) %{
@@ -509,6 +641,130 @@ instruct vor_masked(vReg dst_src1, vReg src2, vRegMask_V0 v0) %{
ins_pipe(pipe_slow);
%}
+// vector-immediate or (unpredicated)
+
+instruct vor_immI(vReg dst_src, immI5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (OrV dst_src (Replicate con)));
+ format %{ "vor_immI $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vor_immL(vReg dst_src, immL5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (OrV dst_src (Replicate con)));
+ format %{ "vor_immL $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar or (unpredicated)
+
+instruct vor_regI(vReg dst_src, iRegIorL2I src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (OrV dst_src (Replicate src)));
+ format %{ "vor_regI $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vor_regL(vReg dst_src, iRegL src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (OrV dst_src (Replicate src)));
+ format %{ "vor_regL $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-immediate or (predicated)
+
+instruct vor_immI_masked(vReg dst_src, immI5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (OrV (Binary dst_src (Replicate con)) v0));
+ format %{ "vor_immI_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vor_immL_masked(vReg dst_src, immL5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (OrV (Binary dst_src (Replicate con)) v0));
+ format %{ "vor_immL_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar or (predicated)
+
+instruct vor_regI_masked(vReg dst_src, iRegIorL2I src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (OrV (Binary dst_src (Replicate src)) v0));
+ format %{ "vor_regI_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vor_regL_masked(vReg dst_src, iRegL src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (OrV (Binary dst_src (Replicate src)) v0));
+ format %{ "vor_regL_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
// vector xor
instruct vxor(vReg dst, vReg src1, vReg src2) %{
@@ -541,6 +797,194 @@ instruct vxor_masked(vReg dst_src1, vReg src2, vRegMask_V0 v0) %{
ins_pipe(pipe_slow);
%}
+// vector-immediate xor (unpredicated)
+
+instruct vxor_immI(vReg dst_src, immI5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (XorV dst_src (Replicate con)));
+ format %{ "vxor_immI $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vxor_immL(vReg dst_src, immL5 con) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (XorV dst_src (Replicate con)));
+ format %{ "vxor_immL $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar xor (unpredicated)
+
+instruct vxor_regI(vReg dst_src, iRegIorL2I src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (XorV dst_src (Replicate src)));
+ format %{ "vxor_regI $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vxor_regL(vReg dst_src, iRegL src) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (XorV dst_src (Replicate src)));
+ format %{ "vxor_regL $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-immediate xor (predicated)
+
+instruct vxor_immI_masked(vReg dst_src, immI5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (XorV (Binary dst_src (Replicate con)) v0));
+ format %{ "vxor_immI_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vxor_immL_masked(vReg dst_src, immL5 con, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (XorV (Binary dst_src (Replicate con)) v0));
+ format %{ "vxor_immL_masked $dst_src, $dst_src, $con" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ $con$$constant, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// vector-scalar xor (predicated)
+
+instruct vxor_regI_masked(vReg dst_src, iRegIorL2I src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (XorV (Binary dst_src (Replicate src)) v0));
+ format %{ "vxor_regI_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vxor_regL_masked(vReg dst_src, iRegL src, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (XorV (Binary dst_src (Replicate src)) v0));
+ format %{ "vxor_regL_masked $dst_src, $dst_src, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vx(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ as_Register($src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// ------------------------------ Vector not -----------------------------------
+
+// vector not
+
+instruct vnotI(vReg dst, vReg src, immI_M1 m1) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst (XorV src (Replicate m1)));
+ format %{ "vnotI $dst, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst$$reg),
+ as_VectorRegister($src$$reg),
+ -1);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vnotL(vReg dst, vReg src, immL_M1 m1) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst (XorV src (Replicate m1)));
+ format %{ "vnotL $dst, $src" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst$$reg),
+ as_VectorRegister($src$$reg),
+ -1);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
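These vnot instructs lean on the identity ~x == x ^ -1, which lets a bitwise NOT arriving as (XorV src (Replicate -1)) be emitted as a single vxor.vi with immediate -1. A one-line sanity check:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  int64_t x = 0x0123456789abcdefLL;
  assert(~x == (x ^ -1LL));  // NOT is XOR with all-ones, at every width
  return 0;
}
```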
+// vector not - predicated
+
+instruct vnotI_masked(vReg dst_src, immI_M1 m1, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_INT ||
+ Matcher::vector_element_basic_type(n) == T_BYTE ||
+ Matcher::vector_element_basic_type(n) == T_SHORT);
+ match(Set dst_src (XorV (Binary dst_src (Replicate m1)) v0));
+ format %{ "vnotI_masked $dst_src, $dst_src, $v0" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ -1, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vnotL_masked(vReg dst_src, immI_M1 m1, vRegMask_V0 v0) %{
+ predicate(Matcher::vector_element_basic_type(n) == T_LONG);
+ match(Set dst_src (XorV (Binary dst_src (Replicate m1)) v0));
+ format %{ "vnotL_masked $dst_src, $dst_src, $v0" %}
+ ins_encode %{
+ __ vsetvli_helper(T_LONG, Matcher::vector_length(this));
+ __ vxor_vi(as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($dst_src$$reg),
+ -1, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
// vector float div
instruct vdiv_fp(vReg dst, vReg src1, vReg src2) %{
@@ -1776,6 +2220,32 @@ instruct replicateD(vReg dst, fRegD src) %{
%}
// vector shift
+//
+// The following shift instructs are shared by auto-vectorization (SLP, superword.cpp) and the Vector API.
+//
+// Shift behaviour in vectorization is defined by the Java language spec, which includes:
+// 1. "If the promoted type of the left-hand operand is int, then only the five lowest-order bits of
+// the right-hand operand are used as the shift distance. It is as if the right-hand operand were
+// subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x1f (0b11111).
+// The shift distance actually used is therefore always in the range 0 to 31, inclusive."
+// 2. Similarly, for long, "with the mask value 0x3f (0b111111)".
+// See https://docs.oracle.com/javase/specs/jls/se21/html/jls-15.html#jls-15.19 for details.
+//
+// Shift behaviour in the Vector API is defined as, e.g. for ASHR, "a>>(n&(ESIZE*8-1))".
+// This behaviour is the same as that of the shift instructions in the RISC-V vector extension.
+// See https://docs.oracle.com/en/java/javase/21/docs/api/jdk.incubator.vector/jdk/incubator/vector/VectorOperators.html#ASHR
+// and https://github.com/riscv/riscv-v-spec/blob/master/v-spec.adoc#116-vector-single-width-shift-instructions for details.
+//
+// Despite the difference between these two behaviours, the same byte and short shift instructs are
+// still shared between vectorization and the Vector API. This works because both front ends pre-mask
+// the shift value before it reaches the shared code:
+// 1. during vectorization, the shift value is masked with "(BitsPerInt - 1)" or "(BitsPerLong - 1)"
+// 2. in the Vector API, the shift value is masked with SHIFT_MASK (e.g. for ByteVector it is "Byte.SIZE - 1")
+//
+// Without this pre-processing of the shift value in vectorization and the Vector API respectively,
+// e.g. for a byte shift value of 16, the intrinsic behaviour would differ and the two could not share
+// the same instruct here, as vectorization requires x >> 16 while the Vector API requires x >> (16 & 7).
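A scalar sketch of the divergence just described, using a byte shift count of 16; C++ stands in for the Java semantics and the values are illustrative:

```cpp
#include <cassert>

int main() {
  int x = -64;                 // promoted value of a Java byte element
  int jls = x >> (16 & 0x1f);  // JLS: count masked with 0x1f -> shift by 16
  int api = x >> (16 & 7);     // Vector API byte ASHR -> shift by 0
  assert(jls == -1 && api == -64);
  // Each front end masks the count before it reaches the shared instructs,
  // so the vector instruction never sees the raw 16.
  return 0;
}
```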
instruct vasrB(vReg dst, vReg src, vReg shift, vRegMask_V0 v0) %{
match(Set dst (RShiftVB src shift));
@@ -2589,6 +3059,200 @@ instruct vshiftcnt(vReg dst, iRegIorL2I cnt) %{
ins_pipe(pipe_slow);
%}
+// --------------------------------- Vector Rotation ----------------------------------
+//
+// The following rotate instructs are shared by auto-vectorization (SLP, superword.cpp) and the Vector API.
+//
+// Rotate behaviour in vectorization is defined by the Java API, which includes:
+// 1. Integer.rotateRight, Integer.rotateLeft.
+// `rotation by any multiple of 32 is a no-op, so all but the last five bits of the rotation distance can be ignored`.
+// 2. Long.rotateRight, Long.rotateLeft.
+// `rotation by any multiple of 64 is a no-op, so all but the last six bits of the rotation distance can be ignored`.
+//
+// Rotate behaviour in the Vector API is defined as below, e.g.
+// 1. For Byte ROR, `a ROR b` is: (byte)(((((byte)a) & 0xFF) >>> (b & 7)) | ((((byte)a) & 0xFF) << (8 - (b & 7))))
+// 2. For Short ROR, `a ROR b` is: (short)(((((short)a) & 0xFFFF) >>> (b & 15)) | ((((short)a) & 0xFFFF) << (16 - (b & 15))))
+// 3. For Integer ROR, `a ROR b` is: Integer.rotateRight(a, ((int)b))
+// 4. For Long ROR, `a ROR b` is: Long.rotateRight(a, ((int)b))
+//
+// The behaviour of vectorization and the Vector API is thus the same for Long and Integer, except that the
+// Vector API also supports Byte and Short rotation; the intrinsics can still be shared between the two.
+//
+// NOTE: As vror.vi encodes a 6-bit immediate rotate amount, unlike other vector-immediate instructions,
+// the implementation of vector rotation for long and the other types can be unified.
+
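The immediate variants below turn a left rotation into a right one via rotl(x, c) == rotr(x, bits - c); that is why vrotate_left_imm computes con = bits - con and still emits vror_vi. A standalone check for 32-bit lanes:

```cpp
#include <cassert>
#include <cstdint>

static uint32_t rotr(uint32_t x, unsigned c) {
  return (x >> (c & 31)) | (x << ((32 - c) & 31));
}
static uint32_t rotl(uint32_t x, unsigned c) {
  return (x << (c & 31)) | (x >> ((32 - c) & 31));
}

int main() {
  uint32_t x = 0x12345678;
  for (unsigned c = 1; c < 32; c++) {
    assert(rotl(x, c) == rotr(x, 32 - c));  // the con = bits - con rewrite
  }
  return 0;  // c == 0 is filtered out before emission, so it never reaches vror.vi
}
```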
+// Rotate right
+
+instruct vrotate_right(vReg dst, vReg src, vReg shift) %{
+ match(Set dst (RotateRightV src shift));
+ format %{ "vrotate_right $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vv(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg),
+ as_VectorRegister($shift$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_right_reg(vReg dst, vReg src, iRegIorL2I shift) %{
+ match(Set dst (RotateRightV src (Replicate shift)));
+ format %{ "vrotate_right_reg $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vx(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg),
+ as_Register($shift$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_right_imm(vReg dst, vReg src, immI shift) %{
+ match(Set dst (RotateRightV src shift));
+ format %{ "vrotate_right_imm $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint32_t bits = type2aelembytes(bt) * 8;
+ uint32_t con = (unsigned)$shift$$constant & (bits - 1);
+ if (con == 0) {
+ return;
+ }
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vi(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg), con);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// Rotate right - masked
+
+instruct vrotate_right_masked(vReg dst_src, vReg shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateRightV (Binary dst_src shift) v0));
+ format %{ "vrotate_right_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vv(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($shift$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_right_reg_masked(vReg dst_src, iRegIorL2I shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateRightV (Binary dst_src (Replicate shift)) v0));
+ format %{ "vrotate_right_reg_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vx(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ as_Register($shift$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_right_imm_masked(vReg dst_src, immI shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateRightV (Binary dst_src shift) v0));
+ format %{ "vrotate_right_imm_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint32_t bits = type2aelembytes(bt) * 8;
+ uint32_t con = (unsigned)$shift$$constant & (bits - 1);
+ if (con == 0) {
+ return;
+ }
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vror_vi(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ con, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// Rotate left
+
+instruct vrotate_left(vReg dst, vReg src, vReg shift) %{
+ match(Set dst (RotateLeftV src shift));
+ format %{ "vrotate_left $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vrol_vv(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg),
+ as_VectorRegister($shift$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_left_reg(vReg dst, vReg src, iRegIorL2I shift) %{
+ match(Set dst (RotateLeftV src (Replicate shift)));
+ format %{ "vrotate_left_reg $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vrol_vx(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg),
+ as_Register($shift$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_left_imm(vReg dst, vReg src, immI shift) %{
+ match(Set dst (RotateLeftV src shift));
+ format %{ "vrotate_left_imm $dst, $src, $shift\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint32_t bits = type2aelembytes(bt) * 8;
+ uint32_t con = (unsigned)$shift$$constant & (bits - 1);
+ if (con == 0) {
+ return;
+ }
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ con = bits - con;
+ __ vror_vi(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg), con);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// Rotate left - masked
+
+instruct vrotate_left_masked(vReg dst_src, vReg shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateLeftV (Binary dst_src shift) v0));
+ format %{ "vrotate_left_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vrol_vv(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ as_VectorRegister($shift$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_left_reg_masked(vReg dst_src, iRegIorL2I shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateLeftV (Binary dst_src (Replicate shift)) v0));
+ format %{ "vrotate_left_reg_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ __ vrol_vx(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ as_Register($shift$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vrotate_left_imm_masked(vReg dst_src, immI shift, vRegMask_V0 v0) %{
+ match(Set dst_src (RotateLeftV (Binary dst_src shift) v0));
+ format %{ "vrotate_left_imm_masked $dst_src, $dst_src, $shift, v0.t\t" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint32_t bits = type2aelembytes(bt) * 8;
+ uint32_t con = (unsigned)$shift$$constant & (bits - 1);
+ if (con == 0) {
+ return;
+ }
+ __ vsetvli_helper(bt, Matcher::vector_length(this));
+ con = bits - con;
+ __ vror_vi(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg),
+ con, Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
// vector sqrt
instruct vsqrt_fp(vReg dst, vReg src) %{
@@ -3753,6 +4417,32 @@ instruct vsignum_reg(vReg dst, vReg zero, vReg one, vRegMask_V0 v0) %{
ins_pipe(pipe_slow);
%}
+// -------------------------------- Reverse Bytes Vector Operations ------------------------
+
+instruct vreverse_bytes_masked(vReg dst_src, vRegMask_V0 v0) %{
+ match(Set dst_src (ReverseBytesV dst_src v0));
+ format %{ "vreverse_bytes_masked $dst_src, $dst_src, v0" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vrev8_v(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vreverse_bytes(vReg dst, vReg src) %{
+ match(Set dst (ReverseBytesV src));
+ format %{ "vreverse_bytes $dst, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vrev8_v(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
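vrev8.v reverses the bytes within each element lane; on one 32-bit lane it reduces to a scalar byte swap. A minimal model, illustrative only:

```cpp
#include <cassert>
#include <cstdint>

int main() {
  uint32_t x = 0x11223344, r = 0;
  for (int i = 0; i < 4; i++) {
    r = (r << 8) | ((x >> (8 * i)) & 0xff);  // emit bytes low-to-high
  }
  assert(r == 0x44332211);  // per-element effect of vrev8.v at SEW=32
  return 0;
}
```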
// ---------------- Convert Half Floating to Floating Vector Operations ----------------
// half precision -> single
@@ -3784,6 +4474,93 @@ instruct vconvF2HF(vReg dst, vReg src, vReg vtmp, vRegMask_V0 v0, iRegINoSp tmp)
ins_pipe(pipe_slow);
%}
+
+// ------------------------------ Popcount vector ------------------------------
+
+instruct vpopcount_masked(vReg dst_src, vRegMask_V0 v0) %{
+ match(Set dst_src (PopCountVI dst_src v0));
+ match(Set dst_src (PopCountVL dst_src v0));
+ ins_cost(VEC_COST);
+ format %{ "vcpop_v $dst_src, $dst_src, $v0\t# vcpop_v with mask" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vcpop_v(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vpopcount(vReg dst, vReg src) %{
+ match(Set dst (PopCountVI src));
+ match(Set dst (PopCountVL src));
+ ins_cost(VEC_COST);
+ format %{ "vcpop_v $dst, $src\t# vcpop_v without mask" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vcpop_v(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// ------------------------------ CountLeadingZerosV --------------------------
+
+instruct vcountLeadingZeros_masked(vReg dst_src, vRegMask_V0 v0) %{
+ match(Set dst_src (CountLeadingZerosV dst_src v0));
+ ins_cost(VEC_COST);
+ format %{ "vcount_leading_zeros_masked $dst_src, $dst_src, v0" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vclz_v(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vcountLeadingZeros(vReg dst, vReg src) %{
+ match(Set dst (CountLeadingZerosV src));
+ ins_cost(VEC_COST);
+ format %{ "vcount_leading_zeros $dst, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vclz_v(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+// ------------------------------ CountTrailingZerosV --------------------------
+
+instruct vcountTrailingZeros_masked(vReg dst_src, vRegMask_V0 v0) %{
+ match(Set dst_src (CountTrailingZerosV dst_src v0));
+ ins_cost(VEC_COST);
+ format %{ "vcount_trailing_zeros_masked $dst_src, $dst_src, v0" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vctz_v(as_VectorRegister($dst_src$$reg), as_VectorRegister($dst_src$$reg), Assembler::v0_t);
+ %}
+ ins_pipe(pipe_slow);
+%}
+
+instruct vcountTrailingZeros(vReg dst, vReg src) %{
+ match(Set dst (CountTrailingZerosV src));
+ ins_cost(VEC_COST);
+ format %{ "vcount_trailing_zeros $dst, $src" %}
+ ins_encode %{
+ BasicType bt = Matcher::vector_element_basic_type(this);
+ uint vlen = Matcher::vector_length(this);
+ __ vsetvli_helper(bt, vlen);
+ __ vctz_v(as_VectorRegister($dst$$reg), as_VectorRegister($src$$reg));
+ %}
+ ins_pipe(pipe_slow);
+%}
+
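For reference, vclz.v/vctz.v count per element just like the scalar definitions, yielding SEW (here 32) for a zero input. A small model of the semantics, not the JIT code:

```cpp
#include <cassert>
#include <cstdint>

static int clz32(uint32_t x) {
  int n = 0;
  for (uint32_t m = 0x80000000u; m != 0 && (x & m) == 0; m >>= 1) n++;
  return n;  // 32 when x == 0
}
static int ctz32(uint32_t x) {
  int n = 0;
  for (uint32_t m = 1; m != 0 && (x & m) == 0; m <<= 1) n++;
  return n;  // 32 when x == 0
}

int main() {
  assert(clz32(0x00010000u) == 15);
  assert(ctz32(0x00010000u) == 16);
  assert(clz32(0) == 32 && ctz32(0) == 32);
  return 0;
}
```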
// ------------------------------ Vector Load Gather ---------------------------
instruct gather_load(vReg dst, indirect mem, vReg idx) %{
diff --git a/src/hotspot/cpu/riscv/stubGenerator_riscv.cpp b/src/hotspot/cpu/riscv/stubGenerator_riscv.cpp
index 582538a18b496..c292f671325da 100644
--- a/src/hotspot/cpu/riscv/stubGenerator_riscv.cpp
+++ b/src/hotspot/cpu/riscv/stubGenerator_riscv.cpp
@@ -652,7 +652,7 @@ class StubGenerator: public StubCodeGenerator {
assert(frame::arg_reg_save_area_bytes == 0, "not expecting frame reg save area");
#endif
BLOCK_COMMENT("call MacroAssembler::debug");
- __ call(CAST_FROM_FN_PTR(address, MacroAssembler::debug64));
+ __ rt_call(CAST_FROM_FN_PTR(address, MacroAssembler::debug64));
__ ebreak();
return start;
@@ -2840,7 +2840,6 @@ class StubGenerator: public StubCodeGenerator {
* c_rarg2 - y address
* c_rarg3 - y length
* c_rarg4 - z address
- * c_rarg5 - z length
*/
address generate_multiplyToLen()
{
@@ -2853,8 +2852,8 @@ class StubGenerator: public StubCodeGenerator {
const Register y = x12;
const Register ylen = x13;
const Register z = x14;
- const Register zlen = x15;
+ const Register tmp0 = x15;
const Register tmp1 = x16;
const Register tmp2 = x17;
const Register tmp3 = x7;
@@ -2865,7 +2864,7 @@ class StubGenerator: public StubCodeGenerator {
BLOCK_COMMENT("Entry:");
__ enter(); // required for proper stackwalking of RuntimeStub frame
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
__ leave(); // required for proper stackwalking of RuntimeStub frame
__ ret();
@@ -2881,10 +2880,10 @@ class StubGenerator: public StubCodeGenerator {
const Register x = x10;
const Register xlen = x11;
const Register z = x12;
- const Register zlen = x13;
const Register y = x14; // == x
const Register ylen = x15; // == xlen
+ const Register tmp0 = x13; // zlen, unused
const Register tmp1 = x16;
const Register tmp2 = x17;
const Register tmp3 = x7;
@@ -2897,7 +2896,7 @@ class StubGenerator: public StubCodeGenerator {
__ enter();
__ mv(y, x);
__ mv(ylen, xlen);
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7);
__ leave();
__ ret();
@@ -5450,7 +5449,7 @@ static const int64_t right_3_bits = right_n_bits(3);
}
__ mv(c_rarg0, xthread);
BLOCK_COMMENT("call runtime_entry");
- __ call(runtime_entry);
+ __ rt_call(runtime_entry);
// Generate oop map
OopMap* map = new OopMap(framesize, 0);
diff --git a/src/hotspot/cpu/riscv/templateInterpreterGenerator_riscv.cpp b/src/hotspot/cpu/riscv/templateInterpreterGenerator_riscv.cpp
index 07aad47173c29..769e4dc5ccc78 100644
--- a/src/hotspot/cpu/riscv/templateInterpreterGenerator_riscv.cpp
+++ b/src/hotspot/cpu/riscv/templateInterpreterGenerator_riscv.cpp
@@ -1206,7 +1206,7 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
// hand.
//
__ mv(c_rarg0, xthread);
- __ call(CAST_FROM_FN_PTR(address, JavaThread::check_special_condition_for_native_trans));
+ __ rt_call(CAST_FROM_FN_PTR(address, JavaThread::check_special_condition_for_native_trans));
__ get_method(xmethod);
__ reinit_heapbase();
__ bind(Continue);
@@ -1255,7 +1255,7 @@ address TemplateInterpreterGenerator::generate_native_entry(bool synchronized) {
__ push_call_clobbered_registers();
__ mv(c_rarg0, xthread);
- __ call(CAST_FROM_FN_PTR(address, SharedRuntime::reguard_yellow_pages));
+ __ rt_call(CAST_FROM_FN_PTR(address, SharedRuntime::reguard_yellow_pages));
__ pop_call_clobbered_registers();
__ bind(no_reguard);
}
@@ -1815,7 +1815,7 @@ void TemplateInterpreterGenerator::trace_bytecode(Template* t) {
// the tosca in-state for the given template.
assert(Interpreter::trace_code(t->tos_in()) != nullptr, "entry must have been generated");
- __ jal(Interpreter::trace_code(t->tos_in()));
+ __ call(Interpreter::trace_code(t->tos_in()));
__ reinit_heapbase();
}
diff --git a/src/hotspot/cpu/riscv/upcallLinker_riscv.cpp b/src/hotspot/cpu/riscv/upcallLinker_riscv.cpp
index 8a6557dde93a1..383f332f8fd94 100644
--- a/src/hotspot/cpu/riscv/upcallLinker_riscv.cpp
+++ b/src/hotspot/cpu/riscv/upcallLinker_riscv.cpp
@@ -223,7 +223,7 @@ address UpcallLinker::make_upcall_stub(jobject receiver, Method* entry,
__ block_comment("{ on_entry");
__ la(c_rarg0, Address(sp, frame_data_offset));
- __ movptr(c_rarg1, (intptr_t) receiver);
+ __ movptr(c_rarg1, (address) receiver);
__ rt_call(CAST_FROM_FN_PTR(address, UpcallLinker::on_entry));
__ mv(xthread, x10);
__ reinit_heapbase();
diff --git a/src/hotspot/cpu/riscv/vm_version_riscv.cpp b/src/hotspot/cpu/riscv/vm_version_riscv.cpp
index 848ffe709d882..995e8b722485d 100644
--- a/src/hotspot/cpu/riscv/vm_version_riscv.cpp
+++ b/src/hotspot/cpu/riscv/vm_version_riscv.cpp
@@ -245,6 +245,12 @@ void VM_Version::initialize() {
FLAG_SET_DEFAULT(UseChaCha20Intrinsics, false);
}
+ // UseZvbb (depends on RVV).
+ if (UseZvbb && !UseRVV) {
+ FLAG_SET_DEFAULT(UseZvbb, false);
+ warning("Cannot enable UseZvbb on cpu without RVV support.");
+ }
+
// SHA's
if (FLAG_IS_DEFAULT(UseSHA)) {
FLAG_SET_DEFAULT(UseSHA, true);
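
Zvbb is the RISC-V vector basic bit-manipulation extension: it provides vctz.v, which backs the CountTrailingZerosV rules added in riscv_v.ad above, and it layers on the base vector extension, hence the UseRVV guard here. A minimal sketch of how a matcher rule could gate on the flag (the real gating may instead live in Matcher::match_rule_supported; this predicate is an assumption, not part of the hunk):

    // riscv_v.ad, hypothetical guard on the vctz_v-based rules:
    // predicate(UseZvbb);
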
diff --git a/src/hotspot/cpu/riscv/vm_version_riscv.hpp b/src/hotspot/cpu/riscv/vm_version_riscv.hpp
index d26b16941dbfe..f3a834b72379c 100644
--- a/src/hotspot/cpu/riscv/vm_version_riscv.hpp
+++ b/src/hotspot/cpu/riscv/vm_version_riscv.hpp
@@ -161,6 +161,7 @@ class VM_Version : public Abstract_VM_Version {
decl(ext_Ztso , "Ztso" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZtso)) \
decl(ext_Zihintpause , "Zihintpause" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZihintpause)) \
decl(ext_Zacas , "Zacas" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZacas)) \
+ decl(ext_Zvbb , "Zvbb" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZvbb)) \
decl(ext_Zvfh , "Zvfh" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZvfh)) \
decl(ext_Zvkn , "Zvkn" , RV_NO_FLAG_BIT, true , UPDATE_DEFAULT(UseZvkn)) \
decl(mvendorid , "VendorId" , RV_NO_FLAG_BIT, false, NO_UPDATE_DEFAULT) \
diff --git a/src/hotspot/cpu/s390/assembler_s390.hpp b/src/hotspot/cpu/s390/assembler_s390.hpp
index 91cc7e611bfd1..cf80d164fafe6 100644
--- a/src/hotspot/cpu/s390/assembler_s390.hpp
+++ b/src/hotspot/cpu/s390/assembler_s390.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2016, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2023 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -352,10 +352,6 @@ class AddressLiteral {
relocInfo::relocType rtype() const { return _rspec.type(); }
const RelocationHolder& rspec() const { return _rspec; }
-
- RelocationHolder rspec(int offset) const {
- return offset == 0 ? _rspec : _rspec.plus(offset);
- }
};
// Convenience classes
diff --git a/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp b/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp
index 503440a5fcc01..a5e62169a9350 100644
--- a/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/c1_LIRAssembler_s390.cpp
@@ -2274,7 +2274,9 @@ void LIR_Assembler::emit_arraycopy(LIR_OpArrayCopy* op) {
address entry = StubRoutines::select_arraycopy_function(basic_type, aligned, disjoint, name, false);
__ call_VM_leaf(entry);
- __ bind(*stub->continuation());
+ if (stub != nullptr) {
+ __ bind(*stub->continuation());
+ }
}
void LIR_Assembler::shift_op(LIR_Code code, LIR_Opr left, LIR_Opr count, LIR_Opr dest, LIR_Opr tmp) {
@@ -2385,7 +2387,8 @@ void LIR_Assembler::emit_alloc_array(LIR_OpAllocArray* op) {
arrayOopDesc::base_offset_in_bytes(op->type()),
type2aelembytes(op->type()),
op->klass()->as_register(),
- *op->stub()->entry());
+ *op->stub()->entry(),
+ op->zero_array());
}
__ bind(*op->stub()->continuation());
}
diff --git a/src/hotspot/cpu/s390/c1_LIRGenerator_s390.cpp b/src/hotspot/cpu/s390/c1_LIRGenerator_s390.cpp
index 8df135974259e..619f0f7174f01 100644
--- a/src/hotspot/cpu/s390/c1_LIRGenerator_s390.cpp
+++ b/src/hotspot/cpu/s390/c1_LIRGenerator_s390.cpp
@@ -1,6 +1,6 @@
/*
* Copyright (c) 2016, 2024, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016, 2017 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -710,7 +710,13 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
frame_map()->c_calling_convention(&sig);
// Make all state_for calls early since they can emit code.
- CodeEmitInfo* info = state_for (x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem src(x->argument_at(0), this);
LIRItem src_pos(x->argument_at(1), this);
@@ -735,6 +741,9 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
int flags;
ciArrayKlass* expected_type;
arraycopy_helper(x, &flags, &expected_type);
+ if (x->check_flag(Instruction::OmitChecksFlag)) {
+ flags = 0;
+ }
__ arraycopy(src.result(), src_pos.result(), dst.result(), dst_pos.result(),
length.result(), tmp, expected_type, flags, info); // does add_safepoint
@@ -769,7 +778,13 @@ void LIRGenerator::do_NewInstance(NewInstance* x) {
}
void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
- CodeEmitInfo* info = state_for (x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem length(x->length(), this);
length.load_item();
@@ -786,7 +801,8 @@ void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
__ metadata2reg(ciTypeArrayKlass::make(elem_type)->constant_encoding(), klass_reg);
CodeStub* slow_path = new NewTypeArrayStub(klass_reg, len, reg, info);
- __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type, klass_reg, slow_path);
+ __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type,
+ klass_reg, slow_path, x->zero_array());
LIR_Opr result = rlock_result(x);
__ move(reg, result);
diff --git a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
index 58bdcee5d5f8f..c35b092329781 100644
--- a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.cpp
@@ -1,6 +1,6 @@
/*
* Copyright (c) 2016, 2024, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016, 2023 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -164,7 +164,7 @@ void C1_MacroAssembler::try_allocate(
Register obj, // result: Pointer to object after successful allocation.
Register var_size_in_bytes, // Object size in bytes if unknown at compile time; invalid otherwise.
int con_size_in_bytes, // Object size in bytes if known at compile time.
- Register t1, // Temp register: Must be global register for incr_allocated_bytes.
+ Register t1, // Temp register.
Label& slow_case // Continuation point if fast allocation fails.
) {
if (UseTLAB) {
@@ -191,7 +191,6 @@ void C1_MacroAssembler::initialize_header(Register obj, Register klass, Register
}
void C1_MacroAssembler::initialize_body(Register objectFields, Register len_in_bytes, Register Rzero) {
- Label done;
assert_different_registers(objectFields, len_in_bytes, Rzero);
// Initialize object fields.
@@ -203,7 +202,6 @@ void C1_MacroAssembler::initialize_body(Register objectFields, Register len_in_b
// Use Rzero as src length, then mvcle will copy nothing
// and fill the object with the padding value 0.
move_long_ext(objectFields, as_Register(Rzero->encoding()-1), 0);
- bind(done);
}
void C1_MacroAssembler::allocate_object(
@@ -274,7 +272,8 @@ void C1_MacroAssembler::allocate_array(
int base_offset_in_bytes, // elements offset in bytes
int elt_size, // element size in bytes
Register klass, // object klass
- Label& slow_case // Continuation point if fast allocation fails.
+ Label& slow_case, // Continuation point if fast allocation fails.
+ bool zero_array // zero the allocated array or not
) {
assert_different_registers(obj, len, t1, t2, klass);
@@ -305,15 +304,17 @@ void C1_MacroAssembler::allocate_array(
initialize_header(obj, klass, len, noreg, t1);
// Clear rest of allocated space.
- Label done;
- Register object_fields = t1;
- Register Rzero = Z_R1_scratch;
- z_aghi(arr_size, -base_offset_in_bytes);
- z_bre(done); // Jump if size of fields is zero.
- z_la(object_fields, base_offset_in_bytes, obj);
- z_xgr(Rzero, Rzero);
- initialize_body(object_fields, arr_size, Rzero);
- bind(done);
+ if (zero_array) {
+ Label done;
+ Register object_fields = t1;
+ Register Rzero = Z_R1_scratch;
+ z_aghi(arr_size, -base_offset_in_bytes);
+ z_bre(done); // Jump if size of fields is zero.
+ z_la(object_fields, base_offset_in_bytes, obj);
+ z_xgr(Rzero, Rzero);
+ initialize_body(object_fields, arr_size, Rzero);
+ bind(done);
+ }
// Dtrace support is unimplemented.
// if (CURRENT_ENV->dtrace_alloc_probes()) {
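
The new zero_array flag lets C1 elide initialize_body() when every element of the fresh array is known to be overwritten immediately; the LIRGenerator hunks above derive it from the IR node. A call-site sketch, with names taken from those hunks:

    // Skip zeroing when the allocation is immediately followed by a full
    // initialization (e.g. an arraycopy-based clone):
    __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type,
                      klass_reg, slow_path, /* zero_array */ x->zero_array());
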
diff --git a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.hpp b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.hpp
index c77258509e1a5..e93eff43bd4af 100644
--- a/src/hotspot/cpu/s390/c1_MacroAssembler_s390.hpp
+++ b/src/hotspot/cpu/s390/c1_MacroAssembler_s390.hpp
@@ -1,6 +1,6 @@
/*
* Copyright (c) 2016, 2024, Oracle and/or its affiliates. All rights reserved.
- * Copyright (c) 2016, 2023 SAP SE. All rights reserved.
+ * Copyright (c) 2016, 2024 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -89,7 +89,8 @@
int base_offset_in_bytes, // elements offset in bytes
int elt_size, // element size in bytes
Register klass, // object klass
- Label& slow_case // Continuation point if fast allocation fails.
+ Label& slow_case, // Continuation point if fast allocation fails.
+ bool zero_array // zero the allocated array or not
);
// Invalidates registers in this window.
diff --git a/src/hotspot/cpu/s390/gc/g1/g1BarrierSetAssembler_s390.cpp b/src/hotspot/cpu/s390/gc/g1/g1BarrierSetAssembler_s390.cpp
index 8ce9305a865e5..37631298920ca 100644
--- a/src/hotspot/cpu/s390/gc/g1/g1BarrierSetAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/gc/g1/g1BarrierSetAssembler_s390.cpp
@@ -282,7 +282,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm, Decorato
__ z_lgr(Rtmp1, Rstore_addr);
__ z_xgr(Rtmp1, Rnew_val);
}
- __ z_srag(Rtmp1, Rtmp1, HeapRegion::LogOfHRGrainBytes);
+ __ z_srag(Rtmp1, Rtmp1, G1HeapRegion::LogOfHRGrainBytes);
__ z_bre(filtered);
// Crosses regions, storing null?
diff --git a/src/hotspot/cpu/s390/macroAssembler_s390.cpp b/src/hotspot/cpu/s390/macroAssembler_s390.cpp
index 224153f0e84b0..ef5216a12ba13 100644
--- a/src/hotspot/cpu/s390/macroAssembler_s390.cpp
+++ b/src/hotspot/cpu/s390/macroAssembler_s390.cpp
@@ -5281,9 +5281,6 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen,
z_stmg(Z_R7, Z_R13, _z_abi(gpr7), Z_SP);
- // In openJdk, we store the argument as 32-bit value to slot.
- Address zlen(Z_SP, _z_abi(remaining_cargs)); // Int in long on big endian.
-
const Register idx = tmp1;
const Register kdx = tmp2;
const Register xstart = tmp3;
@@ -5308,7 +5305,7 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen,
//
lgr_if_needed(idx, ylen); // idx = ylen
- z_llgf(kdx, zlen); // C2 does not respect int to long conversion for stub calls, thus load zero-extended.
+ z_agrk(kdx, xlen, ylen); // kdx = xlen + ylen
clear_reg(carry); // carry = 0
Label L_done;
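
Dropping zlen is sound because the result length is implied by the inputs: x < B^xlen and y < B^ylen give x*y < B^(xlen+ylen) for limb base B, so the product always fits in xlen + ylen limbs and the kdx = xlen + ylen above is exact. A small model of that bound (little-endian 32-bit limbs, unlike the big-endian Java int arrays; illustration only):

    void mul_limbs(const uint32_t* x, int xlen, const uint32_t* y, int ylen,
                   uint32_t* z /* xlen + ylen limbs, pre-zeroed */) {
      for (int i = 0; i < xlen; i++) {
        uint64_t carry = 0;
        for (int j = 0; j < ylen; j++) {
          uint64_t t = (uint64_t)x[i] * y[j] + z[i + j] + carry;
          z[i + j] = (uint32_t)t;
          carry = t >> 32;
        }
        z[i + ylen] = (uint32_t)carry;  // always lands inside z
      }
    }
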
diff --git a/src/hotspot/cpu/s390/relocInfo_s390.cpp b/src/hotspot/cpu/s390/relocInfo_s390.cpp
index 747ae9c535d06..9e352be20a3a2 100644
--- a/src/hotspot/cpu/s390/relocInfo_s390.cpp
+++ b/src/hotspot/cpu/s390/relocInfo_s390.cpp
@@ -30,27 +30,25 @@
#include "oops/oop.inline.hpp"
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
- // we don't support splitting of relocations, so o must be zero:
- assert(o == 0, "tried to split relocations");
+void Relocation::pd_set_data_value(address x, bool verify_only) {
if (!verify_only) {
switch (format()) {
case relocInfo::uncompressed_format:
- nativeMovConstReg_at(addr())->set_data_plain(((intptr_t)x) + o, code());
+ nativeMovConstReg_at(addr())->set_data_plain(((intptr_t)x), code());
break;
case relocInfo::compressed_format:
if (type() == relocInfo::metadata_type)
- nativeMovConstReg_at(addr())->set_narrow_klass(((intptr_t)x) + o);
+ nativeMovConstReg_at(addr())->set_narrow_klass(((intptr_t)x));
else if (type() == relocInfo::oop_type)
- nativeMovConstReg_at(addr())->set_narrow_oop(((intptr_t)x) + o);
+ nativeMovConstReg_at(addr())->set_narrow_oop(((intptr_t)x));
else
guarantee(false, "bad relocInfo type for relocInfo::narrow_oop_format");
break;
case relocInfo::pcrel_addr_format: // patch target location
- nativeMovConstReg_at(addr())->set_pcrel_addr(((intptr_t)x) + o, code());
+ nativeMovConstReg_at(addr())->set_pcrel_addr(((intptr_t)x), code());
break;
case relocInfo::pcrel_data_format: // patch data at target location
- nativeMovConstReg_at(addr())->set_pcrel_data(((intptr_t)x) + o, code());
+ nativeMovConstReg_at(addr())->set_pcrel_data(((intptr_t)x), code());
break;
default:
assert(false, "not a valid relocInfo format");
diff --git a/src/hotspot/cpu/s390/stubGenerator_s390.cpp b/src/hotspot/cpu/s390/stubGenerator_s390.cpp
index db686158f2107..dfd20730b8476 100644
--- a/src/hotspot/cpu/s390/stubGenerator_s390.cpp
+++ b/src/hotspot/cpu/s390/stubGenerator_s390.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2016, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2016, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2016, 2023 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -2981,7 +2981,6 @@ class StubGenerator: public StubCodeGenerator {
// Z_ARG3 - y address
// Z_ARG4 - y length
// Z_ARG5 - z address
- // 160[Z_SP] - z length
address generate_multiplyToLen() {
__ align(CodeEntryAlignment);
StubCodeMark mark(this, "StubRoutines", "multiplyToLen");
@@ -2993,8 +2992,6 @@ class StubGenerator: public StubCodeGenerator {
const Register y = Z_ARG3;
const Register ylen = Z_ARG4;
const Register z = Z_ARG5;
- // zlen is passed on the stack:
- // Address zlen(Z_SP, _z_abi(remaining_cargs));
// Next registers will be saved on stack in multiply_to_len().
const Register tmp1 = Z_tmp_1;
diff --git a/src/hotspot/cpu/x86/assembler_x86.cpp b/src/hotspot/cpu/x86/assembler_x86.cpp
index bcf4d5ea13b60..b02cca92cd484 100644
--- a/src/hotspot/cpu/x86/assembler_x86.cpp
+++ b/src/hotspot/cpu/x86/assembler_x86.cpp
@@ -76,7 +76,8 @@ static const unsigned char tuple_table[Assembler::EVEX_ETUP + 1][Assembler::AVX_
2, 4, 8, // EVEX_OVM(0)
16, 16, 16, // EVEX_M128(0)
8, 32, 64, // EVEX_DUP(0)
- 0, 0, 0 // EVEX_NTUP
+ 1, 1, 1, // EVEX_NOSCALE(0)
+ 0, 0, 0 // EVEX_ETUP
};
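
These rows feed EVEX disp8*N compression: an 8-bit displacement is implicitly scaled by the tuple size N looked up here, so a displacement can use the short form only if it is a multiple of N with the quotient in signed-byte range. The new EVEX_NOSCALE rows are {1, 1, 1} because N = 1 means no scaling. The rule, as a sketch:

    // disp8*N encodability test (N comes from tuple_table):
    static bool fits_compressed_disp8(int disp, int N) {
      return (disp % N) == 0 && (disp / N) >= -128 && (disp / N) <= 127;
    }
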
AddressLiteral::AddressLiteral(address target, relocInfo::relocType rtype) {
@@ -294,11 +295,7 @@ void Assembler::emit_data(jint data, RelocationHolder const& rspec, int format)
}
static int encode(Register r) {
- int enc = r->encoding();
- if (enc >= 8) {
- enc -= 8;
- }
- return enc;
+ return r->encoding() & 7;
}
void Assembler::emit_arith_b(int op1, int op2, Register dst, int imm8) {
@@ -438,6 +435,9 @@ bool Assembler::query_compressed_disp_byte(int disp, bool is_evex_inst, int vect
case EVEX_DUP:
break;
+ case EVEX_NOSCALE:
+ break;
+
default:
assert(0, "no valid evex tuple_table entry");
break;
@@ -525,6 +525,9 @@ bool Assembler::emit_compressed_disp_byte(int &disp) {
case EVEX_DUP:
break;
+ case EVEX_NOSCALE:
+ break;
+
default:
assert(0, "no valid evex tuple_table entry");
break;
@@ -546,6 +549,24 @@ bool Assembler::emit_compressed_disp_byte(int &disp) {
return is8bit(disp);
}
+bool Assembler::needs_rex2(Register reg1, Register reg2, Register reg3) {
+ bool rex2 = (reg1->is_valid() && reg1->encoding() >= 16) ||
+ (reg2->is_valid() && reg2->encoding() >= 16) ||
+ (reg3->is_valid() && reg3->encoding() >= 16);
+ assert(!rex2 || UseAPX, "extended gpr use requires UseAPX");
+ return rex2;
+}
+
+bool Assembler::needs_eevex(Register reg1, Register reg2, Register reg3) {
+ return needs_rex2(reg1, reg2, reg3);
+}
+
+bool Assembler::needs_eevex(int enc1, int enc2, int enc3) {
+ bool eevex = enc1 >= 16 || enc2 >= 16 || enc3 >= 16;
+ assert(!eevex || UseAPX, "extended gpr use requires UseAPX");
+ return eevex;
+}
+
static bool is_valid_encoding(int reg_enc) {
return reg_enc >= 0;
}
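
Background for these helpers: Intel APX doubles the GPR file, and encodings 16..31 (r16-r31) do not fit the single extension bit per register field that a classic REX prefix provides, so such operands need the two-byte REX2 prefix (first byte 0xD5 per the APX spec, matching the new REX2 cases in locate_operand below) or an extended-EVEX encoding. A simplified model of the tiering:

    // Which prefix class a GPR encoding needs (sketch):
    //   0..7 none, 8..15 REX, 16..31 REX2 or extended EVEX (APX only)
    static bool gpr_needs_rex2(int enc) { return enc >= 16; }
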
@@ -616,9 +637,9 @@ void Assembler::emit_operand_helper(int reg_enc, int base_enc, int index_enc,
if (is_valid_encoding(index_enc)) {
assert(scale != Address::no_scale, "inconsistent address");
// [base + index*scale + disp]
- if (disp == 0 && no_relocation &&
- base_enc != rbp->encoding() LP64_ONLY(&& base_enc != r13->encoding())) {
+ if (disp == 0 && no_relocation && ((base_enc & 0x7) != 5)) {
// [base + index*scale]
+ // !(rbp | r13 | r21 | r29)
// [00 reg 100][ss index base]
emit_modrm_sib(0b00, reg_enc, 0b100,
scale, index_enc, base_enc);
@@ -635,7 +656,8 @@ void Assembler::emit_operand_helper(int reg_enc, int base_enc, int index_enc,
scale, index_enc, base_enc);
emit_data(disp, rspec, disp32_operand);
}
- } else if (base_enc == rsp->encoding() LP64_ONLY(|| base_enc == r12->encoding())) {
+ } else if ((base_enc & 0x7) == 4) {
+ // rsp | r12 | r20 | r28
// [rsp + disp]
if (disp == 0 && no_relocation) {
// [rsp]
@@ -657,10 +679,11 @@ void Assembler::emit_operand_helper(int reg_enc, int base_enc, int index_enc,
}
} else {
// [base + disp]
- assert(base_enc != rsp->encoding() LP64_ONLY(&& base_enc != r12->encoding()), "illegal addressing mode");
- if (disp == 0 && no_relocation &&
- base_enc != rbp->encoding() LP64_ONLY(&& base_enc != r13->encoding())) {
+ // rsp | r12 | r20 | r28 ((base_enc & 0x7) == 4) were handled above
+ assert(((base_enc & 0x7) != 4), "illegal addressing mode");
+ if (disp == 0 && no_relocation && ((base_enc & 0x7) != 5)) {
// [base]
+ // !(rbp | r13 | r21 | r29)
// [00 reg base]
emit_modrm(0, reg_enc, base_enc);
} else if (emit_compressed_disp_byte(disp) && no_relocation) {
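
The (base_enc & 0x7) tests generalize the old register-identity checks to APX: only the low 3 bits of a register number land in the ModRM base field, a base field of 0b100 always selects a SIB byte (rsp, r12, and now r20/r28), and a base field of 0b101 under mod=00 means disp32 with no base, so rbp/r13/r21/r29 must always carry an explicit displacement, at minimum a zero disp8. As two helpers:

    static bool base_selects_sib(int base_enc)   { return (base_enc & 0x7) == 4; }
    static bool base_requires_disp(int base_enc) { return (base_enc & 0x7) == 5; }
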
@@ -817,6 +840,13 @@ address Assembler::locate_operand(address inst, WhichOperand which) {
NOT_LP64(assert(false, "64bit prefixes"));
goto again_after_prefix;
+ case REX2:
+ NOT_LP64(assert(false, "64bit prefixes"));
+ if ((0xFF & *ip++) & REXBIT_W) {
+ is_64bit = true;
+ }
+ goto again_after_prefix;
+
case REX_W:
case REX_WB:
case REX_WX:
@@ -866,6 +896,14 @@ address Assembler::locate_operand(address inst, WhichOperand which) {
case REX_WRXB:
NOT_LP64(assert(false, "64bit prefix found"));
goto again_after_size_prefix2;
+
+ case REX2:
+ NOT_LP64(assert(false, "64bit prefix found"));
+ if ((0xFF & *ip++) & REXBIT_W) {
+ is_64bit = true;
+ }
+ goto again_after_size_prefix2;
+
case 0x8B: // movw r, a
case 0x89: // movw a, r
debug_only(has_disp32 = true);
@@ -1155,6 +1193,7 @@ address Assembler::locate_operand(address inst, WhichOperand which) {
case REX_WRB:
case REX_WRX:
case REX_WRXB:
+ case REX2:
NOT_LP64(assert(false, "found 64bit prefix"));
ip++;
default:
@@ -1281,6 +1320,33 @@ void Assembler::emit_operand(XMMRegister reg, Address adr, int post_addr_length)
}
}
+void Assembler::emit_opcode_prefix_and_encoding(int byte1, int byte2, int ocp_and_encoding, int byte3) {
+ int opcode_prefix = (ocp_and_encoding & 0xFF00) >> 8;
+ if (opcode_prefix != 0) {
+ emit_int32(opcode_prefix, (unsigned char)byte1, byte2 | (ocp_and_encoding & 0xFF), byte3);
+ } else {
+ emit_int24((unsigned char)byte1, byte2 | (ocp_and_encoding & 0xFF), byte3);
+ }
+}
+
+void Assembler::emit_opcode_prefix_and_encoding(int byte1, int byte2, int ocp_and_encoding) {
+ int opcode_prefix = (ocp_and_encoding & 0xFF00) >> 8;
+ if (opcode_prefix != 0) {
+ emit_int24(opcode_prefix, (unsigned char)byte1, byte2 | (ocp_and_encoding & 0xFF));
+ } else {
+ emit_int16((unsigned char)byte1, byte2 | (ocp_and_encoding & 0xFF));
+ }
+}
+
+void Assembler::emit_opcode_prefix_and_encoding(int byte1, int ocp_and_encoding) {
+ int opcode_prefix = (ocp_and_encoding & 0xFF00) >> 8;
+ if (opcode_prefix != 0) {
+ emit_int16(opcode_prefix, (unsigned char)byte1 | (ocp_and_encoding & 0xFF));
+ } else {
+ emit_int8((unsigned char)byte1 | (ocp_and_encoding & 0xFF));
+ }
+}
+
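
The packing these helpers decode (inferred from the masks above, so treat the layout as an assumption): prefix_and_encode(..., is_map1) returns the ModRM register bits in the low byte and, when a legacy/REX prefix was emitted, the 0x0F map-1 escape in the second byte; under REX2 the map is carried by the prefix itself, the high byte stays zero, and no escape byte is emitted. For example:

    int ocp_and_encoding = (0x0F << 8) | 0x3;  // legacy path, reg bits 0b011
    int opcode_prefix = (ocp_and_encoding & 0xFF00) >> 8;  // 0x0F, emitted first
    int modrm_bits    = ocp_and_encoding & 0xFF;           // OR'ed into 0xC0
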
// Now the Assembler instructions (identical for 32/64 bits)
void Assembler::adcl(Address dst, int32_t imm32) {
@@ -1603,50 +1669,48 @@ void Assembler::andl(Register dst, Register src) {
void Assembler::andnl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF2, (0xC0 | encode));
}
void Assembler::andnl(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF2);
emit_operand(dst, src2, 0);
}
void Assembler::bsfl(Register dst, Register src) {
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F,
- (unsigned char)0xBC,
- 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBC, 0xC0, encode);
}
void Assembler::bsrl(Register dst, Register src) {
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F,
- (unsigned char)0xBD,
- 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBD, 0xC0, encode);
}
void Assembler::bswapl(Register reg) { // bswap
- int encode = prefix_and_encode(reg->encoding());
- emit_int16(0x0F, (0xC8 | encode));
+ int encode = prefix_and_encode(reg->encoding(), false, true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xC8, encode);
}
void Assembler::blsil(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rbx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rbx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3, (0xC0 | encode));
}
void Assembler::blsil(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, dst->encoding(), rbx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rbx, src, 0);
@@ -1654,8 +1718,8 @@ void Assembler::blsil(Register dst, Address src) {
void Assembler::blsmskl(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rdx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rdx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3,
0xC0 | encode);
}
@@ -1663,7 +1727,8 @@ void Assembler::blsmskl(Register dst, Register src) {
void Assembler::blsmskl(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, dst->encoding(), rdx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rdx, src, 0);
@@ -1671,15 +1736,16 @@ void Assembler::blsmskl(Register dst, Address src) {
void Assembler::blsrl(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rcx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rcx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3, (0xC0 | encode));
}
void Assembler::blsrl(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, dst->encoding(), rcx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rcx, src, 0);
@@ -1743,18 +1809,15 @@ void Assembler::cld() {
void Assembler::cmovl(Condition cc, Register dst, Register src) {
NOT_LP64(guarantee(VM_Version::supports_cmov(), "illegal instruction"));
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F,
- 0x40 | cc,
- 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding(0x40 | cc, 0xC0, encode);
}
-
void Assembler::cmovl(Condition cc, Register dst, Address src) {
InstructionMark im(this);
NOT_LP64(guarantee(VM_Version::supports_cmov(), "illegal instruction"));
- prefix(src, dst);
- emit_int16(0x0F, (0x40 | cc));
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((0x40 | cc));
emit_operand(dst, src, 0);
}
@@ -1840,16 +1903,16 @@ void Assembler::cmpw(Address dst, Register reg) {
// The ZF is set if the compared values were equal, and cleared otherwise.
void Assembler::cmpxchgl(Register reg, Address adr) { // cmpxchg
InstructionMark im(this);
- prefix(adr, reg);
- emit_int16(0x0F, (unsigned char)0xB1);
+ prefix(adr, reg, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xB1);
emit_operand(reg, adr, 0);
}
void Assembler::cmpxchgw(Register reg, Address adr) { // cmpxchg
InstructionMark im(this);
size_prefix();
- prefix(adr, reg);
- emit_int16(0x0F, (unsigned char)0xB1);
+ prefix(adr, reg, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xB1);
emit_operand(reg, adr, 0);
}
@@ -1858,8 +1921,8 @@ void Assembler::cmpxchgw(Register reg, Address adr) { // cmpxchg
// The ZF is set if the compared values were equal, and cleared otherwise.
void Assembler::cmpxchgb(Register reg, Address adr) { // cmpxchg
InstructionMark im(this);
- prefix(adr, reg, true);
- emit_int16(0x0F, (unsigned char)0xB0);
+ prefix(adr, reg, true, true /* is_map1 */);
+ emit_int8((unsigned char)0xB0);
emit_operand(reg, adr, 0);
}
@@ -1917,69 +1980,83 @@ void Assembler::cpuid() {
// F2 REX.W 0F 38 F1 / r CRC32 r64, r / m64 RM Valid N.E. Accumulate CRC32 on r / m64. v
void Assembler::crc32(Register crc, Register v, int8_t sizeInBytes) {
assert(VM_Version::supports_sse4_2(), "");
- int8_t w = 0x01;
- Prefix p = Prefix_EMPTY;
+ if (needs_eevex(crc, v)) {
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ sizeInBytes == 8, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(crc->encoding(), 0, v->encoding(), sizeInBytes == 2 ? VEX_SIMD_66 : VEX_SIMD_NONE, VEX_OPCODE_0F_3C, &attributes, true);
+ emit_int16(sizeInBytes == 1 ? (unsigned char)0xF0 : (unsigned char)0xF1, (0xC0 | encode));
+ } else {
+ int8_t w = 0x01;
+ Prefix p = Prefix_EMPTY;
- emit_int8((unsigned char)0xF2);
- switch (sizeInBytes) {
- case 1:
- w = 0;
- break;
- case 2:
- case 4:
- break;
- LP64_ONLY(case 8:)
- // This instruction is not valid in 32 bits
- // Note:
- // http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-instruction-set-reference-manual-325383.pdf
- //
- // Page B - 72 Vol. 2C says
- // qwreg2 to qwreg 1111 0010 : 0100 1R0B : 0000 1111 : 0011 1000 : 1111 0000 : 11 qwreg1 qwreg2
- // mem64 to qwreg 1111 0010 : 0100 1R0B : 0000 1111 : 0011 1000 : 1111 0000 : mod qwreg r / m
- // F0!!!
- // while 3 - 208 Vol. 2A
- // F2 REX.W 0F 38 F1 / r CRC32 r64, r / m64 RM Valid N.E.Accumulate CRC32 on r / m64.
- //
- // the 0 on a last bit is reserved for a different flavor of this instruction :
- // F2 REX.W 0F 38 F0 / r CRC32 r64, r / m8 RM Valid N.E.Accumulate CRC32 on r / m8.
- p = REX_W;
- break;
- default:
- assert(0, "Unsupported value for a sizeInBytes argument");
- break;
+ emit_int8((unsigned char)0xF2);
+ switch (sizeInBytes) {
+ case 1:
+ w = 0;
+ break;
+ case 2:
+ case 4:
+ break;
+ LP64_ONLY(case 8:)
+ // This instruction is not valid in 32 bits
+ // Note:
+ // http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-instruction-set-reference-manual-325383.pdf
+ //
+ // Page B - 72 Vol. 2C says
+ // qwreg2 to qwreg 1111 0010 : 0100 1R0B : 0000 1111 : 0011 1000 : 1111 0000 : 11 qwreg1 qwreg2
+ // mem64 to qwreg 1111 0010 : 0100 1R0B : 0000 1111 : 0011 1000 : 1111 0000 : mod qwreg r / m
+ // F0!!!
+ // while 3 - 208 Vol. 2A
+ // F2 REX.W 0F 38 F1 / r CRC32 r64, r / m64 RM Valid N.E.Accumulate CRC32 on r / m64.
+ //
+ // the 0 on a last bit is reserved for a different flavor of this instruction :
+ // F2 REX.W 0F 38 F0 / r CRC32 r64, r / m8 RM Valid N.E.Accumulate CRC32 on r / m8.
+ p = REX_W;
+ break;
+ default:
+ assert(0, "Unsupported value for a sizeInBytes argument");
+ break;
+ }
+ LP64_ONLY(prefix(crc, v, p);)
+ emit_int32(0x0F,
+ 0x38,
+ 0xF0 | w,
+ 0xC0 | ((crc->encoding() & 0x7) << 3) | (v->encoding() & 7));
}
- LP64_ONLY(prefix(crc, v, p);)
- emit_int32(0x0F,
- 0x38,
- 0xF0 | w,
- 0xC0 | ((crc->encoding() & 0x7) << 3) | (v->encoding() & 7));
}
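
Both branches encode the same operation: the x86 CRC32 instruction accumulates CRC-32C (the Castagnoli polynomial), not the zlib CRC-32. A software model of one byte step, handy for cross-checking stub output (reflected polynomial 0x82F63B78):

    static uint32_t crc32c_step(uint32_t crc, uint8_t b) {
      crc ^= b;
      for (int i = 0; i < 8; i++) {
        crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
      }
      return crc;
    }
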
void Assembler::crc32(Register crc, Address adr, int8_t sizeInBytes) {
assert(VM_Version::supports_sse4_2(), "");
InstructionMark im(this);
- int8_t w = 0x01;
- Prefix p = Prefix_EMPTY;
+ if (needs_eevex(crc, adr.base(), adr.index())) {
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ sizeInBytes == 8, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
+ vex_prefix(adr, 0, crc->encoding(), sizeInBytes == 2 ? VEX_SIMD_66 : VEX_SIMD_NONE, VEX_OPCODE_0F_3C, &attributes);
+ emit_int8(sizeInBytes == 1 ? (unsigned char)0xF0 : (unsigned char)0xF1);
+ emit_operand(crc, adr, 0);
+ } else {
+ int8_t w = 0x01;
+ Prefix p = Prefix_EMPTY;
- emit_int8((uint8_t)0xF2);
- switch (sizeInBytes) {
- case 1:
- w = 0;
- break;
- case 2:
- case 4:
- break;
- LP64_ONLY(case 8:)
- // This instruction is not valid in 32 bits
- p = REX_W;
- break;
- default:
- assert(0, "Unsupported value for a sizeInBytes argument");
- break;
+ emit_int8((uint8_t)0xF2);
+ switch (sizeInBytes) {
+ case 1:
+ w = 0;
+ break;
+ case 2:
+ case 4:
+ break;
+ LP64_ONLY(case 8:)
+ // This instruction is not valid in 32 bits
+ p = REX_W;
+ break;
+ default:
+ assert(0, "Unsupported value for a sizeInBytes argument");
+ break;
+ }
+ LP64_ONLY(prefix(crc, adr, p);)
+ emit_int24(0x0F, 0x38, (0xF0 | w));
+ emit_operand(crc, adr, 0);
}
- LP64_ONLY(prefix(crc, adr, p);)
- emit_int24(0x0F, 0x38, (0xF0 | w));
- emit_operand(crc, adr, 0);
}
void Assembler::cvtdq2pd(XMMRegister dst, XMMRegister src) {
@@ -2081,7 +2158,7 @@ void Assembler::cvtsd2ss(XMMRegister dst, Address src) {
void Assembler::cvtsi2sdl(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x2A, (0xC0 | encode));
}
@@ -2098,7 +2175,7 @@ void Assembler::cvtsi2sdl(XMMRegister dst, Address src) {
void Assembler::cvtsi2ssl(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F3, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F3, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x2A, (0xC0 | encode));
}
@@ -2115,7 +2192,7 @@ void Assembler::cvtsi2ssl(XMMRegister dst, Address src) {
void Assembler::cvtsi2ssq(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F3, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F3, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x2A, (0xC0 | encode));
}
@@ -2418,10 +2495,8 @@ void Assembler::imull(Register src) {
}
void Assembler::imull(Register dst, Register src) {
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F,
- (unsigned char)0xAF,
- (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xAF, 0xC0, encode);
}
void Assembler::imull(Register dst, Address src, int32_t value) {
@@ -2450,8 +2525,8 @@ void Assembler::imull(Register dst, Register src, int value) {
void Assembler::imull(Register dst, Address src) {
InstructionMark im(this);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xAF);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xAF);
emit_operand(dst, src, 0);
}
@@ -2587,7 +2662,10 @@ void Assembler::jmpb_0(Label& L, const char* file, int line) {
}
void Assembler::ldmxcsr( Address src) {
- if (UseAVX > 0 ) {
+ // This instruction should be SSE encoded with the REX2 prefix when an
+ // extended GPR is present. To be consistent when UseAPX is enabled, use
+ // this encoding even when an extended GPR is not used.
+ if (UseAVX > 0 && !UseAPX) {
InstructionMark im(this);
InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
vex_prefix(src, 0, 0, VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
@@ -2596,8 +2674,8 @@ void Assembler::ldmxcsr( Address src) {
} else {
NOT_LP64(assert(VM_Version::supports_sse(), ""));
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, (unsigned char)0xAE);
+ prefix(src, true /* is_map1 */);
+ emit_int8((unsigned char)0xAE);
emit_operand(as_Register(2), src, 0);
}
}
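
The UseAPX special case exists because the AVX form (vldmxcsr) is VEX-encoded, and VEX has no bits to reach the APX extended GPRs r16-r31 in a memory operand; only a REX2-prefixed legacy/SSE encoding can. Taking the SSE path whenever UseAPX is on keeps the emitted form independent of which base/index registers the address happens to use. The resulting choice, in brief:

    // ldmxcsr encoding selection (model of the branch above):
    //   UseAPX         -> SSE 0F AE /2, REX2 added by prefix() if needed
    //   else if UseAVX -> VEX.0F AE /2 (vldmxcsr)
    //   else           -> SSE 0F AE /2
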
@@ -2624,16 +2702,16 @@ void Assembler::size_prefix() {
void Assembler::lzcntl(Register dst, Register src) {
assert(VM_Version::supports_lzcnt(), "encoding is treated as BSR");
emit_int8((unsigned char)0xF3);
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBD, (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBD, 0xC0, encode);
}
void Assembler::lzcntl(Register dst, Address src) {
assert(VM_Version::supports_lzcnt(), "encoding is treated as BSR");
InstructionMark im(this);
emit_int8((unsigned char)0xF3);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xBD);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xBD);
emit_operand(dst, src, 0);
}
@@ -2725,28 +2803,28 @@ void Assembler::kmovbl(KRegister dst, KRegister src) {
void Assembler::kmovbl(KRegister dst, Register src) {
assert(VM_Version::supports_avx512dq(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16((unsigned char)0x92, (0xC0 | encode));
}
void Assembler::kmovbl(Register dst, KRegister src) {
assert(VM_Version::supports_avx512dq(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
emit_int16((unsigned char)0x93, (0xC0 | encode));
}
void Assembler::kmovwl(KRegister dst, Register src) {
assert(VM_Version::supports_evex(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes, true);
emit_int16((unsigned char)0x92, (0xC0 | encode));
}
void Assembler::kmovwl(Register dst, KRegister src) {
assert(VM_Version::supports_evex(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
emit_int16((unsigned char)0x93, (0xC0 | encode));
}
@@ -2754,7 +2832,8 @@ void Assembler::kmovwl(Register dst, KRegister src) {
void Assembler::kmovwl(KRegister dst, Address src) {
assert(VM_Version::supports_evex(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, 0, dst->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
emit_int8((unsigned char)0x90);
emit_operand(dst, src, 0);
@@ -2763,7 +2842,8 @@ void Assembler::kmovwl(KRegister dst, Address src) {
void Assembler::kmovwl(Address dst, KRegister src) {
assert(VM_Version::supports_evex(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(dst, 0, src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
emit_int8((unsigned char)0x91);
emit_operand(src, dst, 0);
@@ -2778,14 +2858,14 @@ void Assembler::kmovwl(KRegister dst, KRegister src) {
void Assembler::kmovdl(KRegister dst, Register src) {
assert(VM_Version::supports_avx512bw(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes, true);
emit_int16((unsigned char)0x92, (0xC0 | encode));
}
void Assembler::kmovdl(Register dst, KRegister src) {
assert(VM_Version::supports_avx512bw(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
emit_int16((unsigned char)0x93, (0xC0 | encode));
}
@@ -2800,7 +2880,8 @@ void Assembler::kmovql(KRegister dst, KRegister src) {
void Assembler::kmovql(KRegister dst, Address src) {
assert(VM_Version::supports_avx512bw(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, 0, dst->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
emit_int8((unsigned char)0x90);
emit_operand(dst, src, 0);
@@ -2809,7 +2890,8 @@ void Assembler::kmovql(KRegister dst, Address src) {
void Assembler::kmovql(Address dst, KRegister src) {
assert(VM_Version::supports_avx512bw(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(dst, 0, src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F, &attributes);
emit_int8((unsigned char)0x91);
emit_operand(src, dst, 0);
@@ -2817,14 +2899,14 @@ void Assembler::kmovql(Address dst, KRegister src) {
void Assembler::kmovql(KRegister dst, Register src) {
assert(VM_Version::supports_avx512bw(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes, true);
emit_int16((unsigned char)0x92, (0xC0 | encode));
}
void Assembler::kmovql(Register dst, KRegister src) {
assert(VM_Version::supports_avx512bw(), "");
- InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
emit_int16((unsigned char)0x93, (0xC0 | encode));
}
@@ -3105,7 +3187,7 @@ void Assembler::movb(Address dst, Register src) {
void Assembler::movdl(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x6E, (0xC0 | encode));
}
@@ -3113,7 +3195,7 @@ void Assembler::movdl(Register dst, XMMRegister src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
// swap src/dst to get correct prefix
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x7E, (0xC0 | encode));
}
@@ -3582,28 +3664,28 @@ void Assembler::movq(Register dst, XMMRegister src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
// swap src/dst to get correct prefix
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x7E, (0xC0 | encode));
}
void Assembler::movq(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x6E, (0xC0 | encode));
}
void Assembler::movsbl(Register dst, Address src) { // movsxb
InstructionMark im(this);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xBE);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xBE);
emit_operand(dst, src, 0);
}
void Assembler::movsbl(Register dst, Register src) { // movsxb
NOT_LP64(assert(src->has_byte_register(), "must have byte register"));
- int encode = prefix_and_encode(dst->encoding(), false, src->encoding(), true);
- emit_int24(0x0F, (unsigned char)0xBE, (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), false, src->encoding(), true, true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBE, 0xC0, encode);
}
void Assembler::movsd(XMMRegister dst, XMMRegister src) {
@@ -3675,14 +3757,14 @@ void Assembler::movss(Address dst, XMMRegister src) {
void Assembler::movswl(Register dst, Address src) { // movsxw
InstructionMark im(this);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xBF);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xBF);
emit_operand(dst, src, 0);
}
void Assembler::movswl(Register dst, Register src) { // movsxw
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBF, (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBF, 0xC0, encode);
}
void Assembler::movups(XMMRegister dst, Address src) {
@@ -3753,27 +3835,27 @@ void Assembler::movw(Address dst, Register src) {
void Assembler::movzbl(Register dst, Address src) { // movzxb
InstructionMark im(this);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xB6);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xB6);
emit_operand(dst, src, 0);
}
void Assembler::movzbl(Register dst, Register src) { // movzxb
NOT_LP64(assert(src->has_byte_register(), "must have byte register"));
- int encode = prefix_and_encode(dst->encoding(), false, src->encoding(), true);
- emit_int24(0x0F, (unsigned char)0xB6, 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), false, src->encoding(), true, true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB6, 0xC0, encode);
}
void Assembler::movzwl(Register dst, Address src) { // movzxw
InstructionMark im(this);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xB7);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xB7);
emit_operand(dst, src, 0);
}
void Assembler::movzwl(Register dst, Register src) { // movzxw
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xB7, 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB7, 0xC0, encode);
}
void Assembler::mull(Address src) {
@@ -3836,7 +3918,7 @@ void Assembler::negl(Address dst) {
emit_operand(as_Register(3), dst, 0);
}
-void Assembler::nop(int i) {
+void Assembler::nop(uint i) {
#ifdef ASSERT
assert(i > 0, " ");
// The fancy nops aren't currently recognized by debuggers making it a
@@ -4416,6 +4498,7 @@ void Assembler::ud2() {
void Assembler::pcmpestri(XMMRegister dst, Address src, int imm8) {
assert(VM_Version::supports_sse4_2(), "");
+ assert(!needs_eevex(src.base(), src.index()), "does not support extended gprs");
InstructionMark im(this);
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
simd_prefix(dst, xnoreg, src, VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
@@ -4743,7 +4826,7 @@ void Assembler::vmovmskpd(Register dst, XMMRegister src, int vec_enc) {
void Assembler::pextrd(Register dst, XMMRegister src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x16, (0xC0 | encode), imm8);
}
@@ -4761,7 +4844,7 @@ void Assembler::pextrd(Address dst, XMMRegister src, int imm8) {
void Assembler::pextrq(Register dst, XMMRegister src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x16, (0xC0 | encode), imm8);
}
@@ -4797,7 +4880,7 @@ void Assembler::pextrw(Address dst, XMMRegister src, int imm8) {
void Assembler::pextrb(Register dst, XMMRegister src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x14, (0xC0 | encode), imm8);
}
@@ -4815,7 +4898,7 @@ void Assembler::pextrb(Address dst, XMMRegister src, int imm8) {
void Assembler::pinsrd(XMMRegister dst, Register src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x22, (0xC0 | encode), imm8);
}
@@ -4833,14 +4916,14 @@ void Assembler::pinsrd(XMMRegister dst, Address src, int imm8) {
void Assembler::vpinsrd(XMMRegister dst, XMMRegister nds, Register src, int imm8) {
assert(VM_Version::supports_avx(), "");
InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x22, (0xC0 | encode), imm8);
}
void Assembler::pinsrq(XMMRegister dst, Register src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x22, (0xC0 | encode), imm8);
}
@@ -4858,14 +4941,14 @@ void Assembler::pinsrq(XMMRegister dst, Address src, int imm8) {
void Assembler::vpinsrq(XMMRegister dst, XMMRegister nds, Register src, int imm8) {
assert(VM_Version::supports_avx(), "");
InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ _legacy_mode_dq, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x22, (0xC0 | encode), imm8);
}
void Assembler::pinsrw(XMMRegister dst, Register src, int imm8) {
assert(VM_Version::supports_sse2(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int24((unsigned char)0xC4, (0xC0 | encode), imm8);
}
@@ -4883,7 +4966,7 @@ void Assembler::pinsrw(XMMRegister dst, Address src, int imm8) {
void Assembler::vpinsrw(XMMRegister dst, XMMRegister nds, Register src, int imm8) {
assert(VM_Version::supports_avx(), "");
InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int24((unsigned char)0xC4, (0xC0 | encode), imm8);
}
@@ -4901,14 +4984,14 @@ void Assembler::pinsrb(XMMRegister dst, Address src, int imm8) {
void Assembler::pinsrb(XMMRegister dst, Register src, int imm8) {
assert(VM_Version::supports_sse4_1(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x20, (0xC0 | encode), imm8);
}
void Assembler::vpinsrb(XMMRegister dst, XMMRegister nds, Register src, int imm8) {
assert(VM_Version::supports_avx(), "");
InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), nds->encoding(), src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24(0x20, (0xC0 | encode), imm8);
}
@@ -5311,16 +5394,16 @@ void Assembler::popcntl(Register dst, Address src) {
assert(VM_Version::supports_popcnt(), "must support");
InstructionMark im(this);
emit_int8((unsigned char)0xF3);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xB8);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xB8);
emit_operand(dst, src, 0);
}
void Assembler::popcntl(Register dst, Register src) {
assert(VM_Version::supports_popcnt(), "must support");
emit_int8((unsigned char)0xF3);
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xB8, (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB8, 0xC0, encode);
}
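
The popcntl rewrite above is the pattern repeated throughout this patch: the literal 0x0F map-1 escape byte moves out of the emitters and into the prefix helpers via the new is_map1 flag, because a REX2 prefix selects opcode map 1 through its M0 payload bit instead of an escape byte. Below is a minimal standalone sketch of the resulting byte sequences, assuming the REX2 payload layout from the APX specification (bit 7 = M0, bit 4 = B4, bit 0 = B3); it is illustrative only, not HotSpot code.

#include <cstdint>
#include <vector>

// Encode "popcnt eax, <src>" for a source GPR with hardware encoding
// src_enc (0-31). Legacy registers keep F3 [REX] 0F B8 /r; extended
// registers (16-31) take F3 D5 <payload> B8 /r, where the 0x0F byte
// disappears because payload bit M0 already selects opcode map 1.
std::vector<uint8_t> popcnt_eax(unsigned src_enc) {
  std::vector<uint8_t> b{0xF3};              // POPCNT mandatory prefix
  if (src_enc >= 16) {                       // APX extended GPR
    uint8_t payload = 0x80;                  // M0 = 1 -> opcode map 1
    if (src_enc & 8)  payload |= 0x01;       // B3
    if (src_enc & 16) payload |= 0x10;       // B4
    b.push_back(0xD5);                       // REX2 escape
    b.push_back(payload);
    b.push_back(0xB8);                       // opcode, no 0x0F escape
  } else {
    if (src_enc & 8) b.push_back(0x41);      // classic REX.B
    b.push_back(0x0F);                       // map-1 escape byte
    b.push_back(0xB8);
  }
  b.push_back(0xC0 | (src_enc & 7));         // ModRM: reg = eax, rm = src
  return b;
}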
void Assembler::evpopcntb(XMMRegister dst, KRegister mask, XMMRegister src, bool merge, int vector_len) {
@@ -5392,48 +5475,48 @@ void Assembler::popl(Address dst) {
void Assembler::prefetchnta(Address src) {
NOT_LP64(assert(VM_Version::supports_sse(), "must support"));
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x18);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x18);
emit_operand(rax, src, 0); // 0, src
}
void Assembler::prefetchr(Address src) {
assert(VM_Version::supports_3dnow_prefetch(), "must support");
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x0D);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x0D);
emit_operand(rax, src, 0); // 0, src
}
void Assembler::prefetcht0(Address src) {
NOT_LP64(assert(VM_Version::supports_sse(), "must support"));
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x18);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x18);
emit_operand(rcx, src, 0); // 1, src
}
void Assembler::prefetcht1(Address src) {
NOT_LP64(assert(VM_Version::supports_sse(), "must support"));
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x18);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x18);
emit_operand(rdx, src, 0); // 2, src
}
void Assembler::prefetcht2(Address src) {
NOT_LP64(assert(VM_Version::supports_sse(), "must support"));
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x18);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x18);
emit_operand(rbx, src, 0); // 3, src
}
void Assembler::prefetchw(Address src) {
assert(VM_Version::supports_3dnow_prefetch(), "must support");
InstructionMark im(this);
- prefix(src);
- emit_int16(0x0F, 0x0D);
+ prefix(src, true /* is_map1 */);
+ emit_int8(0x0D);
emit_operand(rcx, src, 0); // 1, src
}
@@ -5441,6 +5524,12 @@ void Assembler::prefix(Prefix p) {
emit_int8(p);
}
+void Assembler::prefix16(int prefix) {
+ assert(UseAPX, "APX features not enabled");
+ emit_int8((prefix & 0xff00) >> 8);
+ emit_int8(prefix & 0xff);
+}
+
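
prefix16() fixes the convention the rest of the patch builds on: a REX2 prefix travels as a single 16-bit value with the 0xD5 escape in the high byte and the payload bits in the low byte (so WREX2 is 0xD500, which prefix_is_rex2() further down relies on). A minimal sketch of that split, with the constants restated here as assumptions:

#include <cassert>
#include <cstdint>
#include <vector>

constexpr int WREX2      = 0xD500;  // REX2 escape byte in the high half
constexpr int REX2BIT_M0 = 0x80;    // payload: opcode map 1 select
constexpr int REXBIT_B   = 0x01;    // payload: classic B bit

// The same big-endian split prefix16() performs.
void emit16(std::vector<uint8_t>& out, int prefix) {
  out.push_back((prefix & 0xFF00) >> 8);  // 0xD5 escape
  out.push_back(prefix & 0xFF);           // payload bits
}

int main() {
  std::vector<uint8_t> out;
  emit16(out, WREX2 | REX2BIT_M0 | REXBIT_B);
  assert(out[0] == 0xD5 && out[1] == 0x81);
  return 0;
}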
void Assembler::pshufb(XMMRegister dst, XMMRegister src) {
assert(VM_Version::supports_ssse3(), "");
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ true);
@@ -5648,6 +5737,7 @@ void Assembler::vpslldq(XMMRegister dst, XMMRegister src, int shift, int vector_
void Assembler::ptest(XMMRegister dst, Address src) {
assert(VM_Version::supports_sse4_1(), "");
assert((UseAVX > 0), "SSE mode requires address alignment 16 bytes");
+ assert(!needs_eevex(src.base(), src.index()), "does not support extended gprs");
InstructionMark im(this);
InstructionAttr attributes(AVX_128bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
simd_prefix(dst, xnoreg, src, VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
@@ -5665,6 +5755,7 @@ void Assembler::ptest(XMMRegister dst, XMMRegister src) {
void Assembler::vptest(XMMRegister dst, Address src) {
assert(VM_Version::supports_avx(), "");
+ assert(!needs_eevex(src.base(), src.index()), "does not support extended gprs");
InstructionMark im(this);
InstructionAttr attributes(AVX_256bit, /* rex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
assert(dst != xnoreg, "sanity");
@@ -6078,8 +6169,8 @@ void Assembler::sbbl(Register dst, Register src) {
void Assembler::setb(Condition cc, Register dst) {
assert(0 <= cc && cc < 16, "illegal cc");
- int encode = prefix_and_encode(dst->encoding(), true);
- emit_int24(0x0F, (unsigned char)0x90 | cc, (0xC0 | encode));
+ int encode = prefix_and_encode(dst->encoding(), true, true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0x90 | cc, 0xC0, encode);
}
void Assembler::palignr(XMMRegister dst, XMMRegister src, int imm8) {
@@ -6212,34 +6303,34 @@ void Assembler::shrl(Address dst, int imm8) {
void Assembler::shldl(Register dst, Register src) {
- int encode = prefix_and_encode(src->encoding(), dst->encoding());
- emit_int24(0x0F, (unsigned char)0xA5, (0xC0 | encode));
+ int encode = prefix_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xA5, 0xC0, encode);
}
void Assembler::shldl(Register dst, Register src, int8_t imm8) {
- int encode = prefix_and_encode(src->encoding(), dst->encoding());
- emit_int32(0x0F, (unsigned char)0xA4, (0xC0 | encode), imm8);
+ int encode = prefix_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xA4, 0xC0, encode, imm8);
}
void Assembler::shrdl(Register dst, Register src) {
- int encode = prefix_and_encode(src->encoding(), dst->encoding());
- emit_int24(0x0F, (unsigned char)0xAD, (0xC0 | encode));
+ int encode = prefix_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xAD, 0xC0, encode);
}
void Assembler::shrdl(Register dst, Register src, int8_t imm8) {
- int encode = prefix_and_encode(src->encoding(), dst->encoding());
- emit_int32(0x0F, (unsigned char)0xAC, (0xC0 | encode), imm8);
+ int encode = prefix_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xAC, 0xC0, encode, imm8);
}
#ifdef _LP64
void Assembler::shldq(Register dst, Register src, int8_t imm8) {
- int encode = prefixq_and_encode(src->encoding(), dst->encoding());
- emit_int32(0x0F, (unsigned char)0xA4, (0xC0 | encode), imm8);
+ int encode = prefixq_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xA4, 0xC0, encode, imm8);
}
void Assembler::shrdq(Register dst, Register src, int8_t imm8) {
- int encode = prefixq_and_encode(src->encoding(), dst->encoding());
- emit_int32(0x0F, (unsigned char)0xAC, (0xC0 | encode), imm8);
+ int encode = prefixq_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xAC, 0xC0, encode, imm8);
}
#endif
@@ -6305,8 +6396,11 @@ void Assembler::sqrtss(XMMRegister dst, Address src) {
emit_operand(dst, src, 0);
}
-void Assembler::stmxcsr( Address dst) {
- if (UseAVX > 0 ) {
+void Assembler::stmxcsr(Address dst) {
+ // This instruction should be SSE encoded with the REX2 prefix when an
+ // extended GPR is present. To be consistent when UseAPX is enabled, use
+ // this encoding even when an extended GPR is not used.
+ if (UseAVX > 0 && !UseAPX) {
assert(VM_Version::supports_avx(), "");
InstructionMark im(this);
InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
@@ -6316,8 +6410,8 @@ void Assembler::stmxcsr( Address dst) {
} else {
NOT_LP64(assert(VM_Version::supports_sse(), ""));
InstructionMark im(this);
- prefix(dst);
- emit_int16(0x0F, (unsigned char)0xAE);
+ prefix(dst, true /* is_map1 */);
+ emit_int8((unsigned char)0xAE);
emit_operand(as_Register(3), dst, 0);
}
}
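
The comment in stmxcsr() documents an encoding-stability choice: rather than mixing the AVX (vstmxcsr) and SSE forms depending on which registers an address happens to use, the SSE form is emitted whenever UseAPX is on, since only that form can carry a REX2 prefix. A sketch of the selection, with stand-in names:

// Illustrative only; the enumerators and flags are stand-ins.
enum class MxcsrForm { Avx_Vstmxcsr, Sse_Map1_0xAE };

MxcsrForm pick_stmxcsr_form(bool use_avx, bool use_apx) {
  // APX forces the SSE form so extended base/index registers can be
  // reached via REX2; otherwise prefer AVX when it is available.
  return (use_avx && !use_apx) ? MxcsrForm::Avx_Vstmxcsr
                               : MxcsrForm::Sse_Map1_0xAE;
}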
@@ -6457,34 +6551,32 @@ void Assembler::testl(Register dst, Address src) {
void Assembler::tzcntl(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "tzcnt instruction not supported");
emit_int8((unsigned char)0xF3);
- int encode = prefix_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F,
- (unsigned char)0xBC,
- 0xC0 | encode);
+ int encode = prefix_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBC, 0xC0, encode);
}
void Assembler::tzcntl(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "tzcnt instruction not supported");
InstructionMark im(this);
emit_int8((unsigned char)0xF3);
- prefix(src, dst);
- emit_int16(0x0F, (unsigned char)0xBC);
+ prefix(src, dst, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xBC);
emit_operand(dst, src, 0);
}
void Assembler::tzcntq(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "tzcnt instruction not supported");
emit_int8((unsigned char)0xF3);
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBC, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBC, 0xC0, encode);
}
void Assembler::tzcntq(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "tzcnt instruction not supported");
InstructionMark im(this);
emit_int8((unsigned char)0xF3);
- prefixq(src, dst);
- emit_int16(0x0F, (unsigned char)0xBC);
+ prefixq(src, dst, true /* is_map1 */);
+ emit_int8((unsigned char)0xBC);
emit_operand(dst, src, 0);
}
@@ -6530,23 +6622,23 @@ void Assembler::xabort(int8_t imm8) {
void Assembler::xaddb(Address dst, Register src) {
InstructionMark im(this);
- prefix(dst, src, true);
- emit_int16(0x0F, (unsigned char)0xC0);
+ prefix(dst, src, true, true /* is_map1 */);
+ emit_int8((unsigned char)0xC0);
emit_operand(src, dst, 0);
}
void Assembler::xaddw(Address dst, Register src) {
InstructionMark im(this);
emit_int8(0x66);
- prefix(dst, src);
- emit_int16(0x0F, (unsigned char)0xC1);
+ prefix(dst, src, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xC1);
emit_operand(src, dst, 0);
}
void Assembler::xaddl(Address dst, Register src) {
InstructionMark im(this);
- prefix(dst, src);
- emit_int16(0x0F, (unsigned char)0xC1);
+ prefix(dst, src, false, true /* is_map1 */);
+ emit_int8((unsigned char)0xC1);
emit_operand(src, dst, 0);
}
@@ -9116,7 +9208,7 @@ void Assembler::extractps(Register dst, XMMRegister src, uint8_t imm8) {
assert(VM_Version::supports_sse4_1(), "");
assert(imm8 <= 0x03, "imm8: %u", imm8);
InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F_3A, &attributes, true);
// imm8:
// 0x00 - extract from bits 31:0
// 0x01 - extract from bits 63:32
@@ -10886,7 +10978,7 @@ void Assembler::evpbroadcastb(XMMRegister dst, Register src, int vector_len) {
assert(VM_Version::supports_avx512bw(), "");
InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ true);
attributes.set_is_evex_instruction();
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16(0x7A, (0xC0 | encode));
}
@@ -10895,7 +10987,7 @@ void Assembler::evpbroadcastw(XMMRegister dst, Register src, int vector_len) {
assert(VM_Version::supports_avx512bw(), "");
InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ _legacy_mode_bw, /* no_mask_reg */ true, /* uses_vl */ true);
attributes.set_is_evex_instruction();
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16(0x7B, (0xC0 | encode));
}
@@ -10904,7 +10996,7 @@ void Assembler::evpbroadcastd(XMMRegister dst, Register src, int vector_len) {
assert(VM_Version::supports_evex(), "");
InstructionAttr attributes(vector_len, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
attributes.set_is_evex_instruction();
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16(0x7C, (0xC0 | encode));
}
@@ -10913,7 +11005,7 @@ void Assembler::evpbroadcastq(XMMRegister dst, Register src, int vector_len) {
assert(VM_Version::supports_evex(), "");
InstructionAttr attributes(vector_len, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
attributes.set_is_evex_instruction();
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16(0x7C, (0xC0 | encode));
}
@@ -11673,7 +11765,8 @@ void Assembler::vex_prefix(bool vex_r, bool vex_b, bool vex_x, int nds_enc, VexS
}
// This is a 4 byte encoding
-void Assembler::evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_r, bool evex_v, int nds_enc, VexSimdPrefix pre, VexOpcode opc){
+void Assembler::evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_r, bool eevex_b, bool evex_v,
+ bool eevex_x, int nds_enc, VexSimdPrefix pre, VexOpcode opc) {
// EVEX 0x62 prefix
// byte1 = EVEX_4bytes;
@@ -11682,18 +11775,18 @@ void Assembler::evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_r, boo
// EVEX.b is not currently used for broadcast of single element or data rounding modes
_attributes->set_evex_encoding(evex_encoding);
- // P0: byte 2, initialized to RXBR`00mm
+ // P0: byte 2, initialized to RXBR'0mmm
// instead of not'd
int byte2 = (vex_r ? VEX_R : 0) | (vex_x ? VEX_X : 0) | (vex_b ? VEX_B : 0) | (evex_r ? EVEX_Rb : 0);
byte2 = (~byte2) & 0xF0;
+ byte2 |= eevex_b ? EEVEX_B : 0;
// confine opc opcode extensions in mmm bits to lower three bits
- // of form {0F, 0F_38, 0F_3A}
+ // of form {0F, 0F_38, 0F_3A, 0F_3C}
byte2 |= opc;
// P1: byte 3 as WvvvvX'pp (X' = inverted EEVEX.X4; it was the constant 1 before APX)
int byte3 = ((~nds_enc) & 0xf) << 3;
- // p[10] is always 1
- byte3 |= EVEX_F;
+ byte3 |= (eevex_x ? 0 : EEVEX_X);
byte3 |= (vex_w & 1) << 7;
// confine pre opcode extensions in pp bits to lower two bits
// of form {66, F3, F2}
@@ -11720,6 +11813,10 @@ void Assembler::evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_r, boo
}
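
Two of the new evex_prefix() parameters land directly in the payload bytes: eevex_b becomes the B4 bit of P0 (stored as-is, unlike the R/X/B/R' nibble, which is inverted), and eevex_x replaces the formerly constant p[10] bit of P1 with an inverted X4. A compilable sketch of both bytes follows; the constant values are restated assumptions that mirror the patch.

#include <cassert>

constexpr int VEX_R = 0x80, VEX_X = 0x40, VEX_B = 0x20, EVEX_Rb = 0x10;
constexpr int EEVEX_B = 0x08;  // P0 bit 3: B4, stored non-inverted
constexpr int EEVEX_X = 0x04;  // P1 bit 2: X4, stored inverted

int evex_p0(bool r, bool x, bool b, bool r4, bool b4, int mmm) {
  int p0 = (r ? VEX_R : 0) | (x ? VEX_X : 0) | (b ? VEX_B : 0) | (r4 ? EVEX_Rb : 0);
  p0 = (~p0) & 0xF0;           // R/X/B/R' are encoded inverted
  if (b4) p0 |= EEVEX_B;       // B4 is not
  return p0 | (mmm & 0x07);    // opcode map: 0F, 0F_38, 0F_3A, or APX map 4
}

int evex_p1(bool w, int nds_enc, bool x4, int pp) {
  int p1 = ((~nds_enc) & 0xF) << 3;  // vvvv, inverted
  p1 |= x4 ? 0 : EEVEX_X;            // X4, inverted (pre-APX: always 1)
  p1 |= (w ? 1 : 0) << 7;
  return p1 | (pp & 0x03);
}

int main() {
  // With no extended index register the X4 bit reads 1, byte-for-byte
  // what the removed "byte3 |= EVEX_F" used to produce.
  assert(evex_p1(false, 0, /*x4*/ false, 0) == ((0xF << 3) | EEVEX_X));
  return 0;
}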
void Assembler::vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix pre, VexOpcode opc, InstructionAttr *attributes) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ assert(UseAPX, "APX features not enabled");
+ }
+ bool is_extended = adr.base_needs_rex2() || adr.index_needs_rex2() || nds_enc >= 16 || xreg_enc >= 16;
bool vex_r = (xreg_enc & 8) == 8;
bool vex_b = adr.base_needs_rex();
bool vex_x;
@@ -11729,13 +11826,12 @@ void Assembler::vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix
vex_x = adr.index_needs_rex();
}
set_attributes(attributes);
-
// For EVEX instruction (which is not marked as pure EVEX instruction) check and see if this instruction
// is allowed in legacy mode and has resources which will fit in it.
// Pure EVEX instructions will have is_evex_instruction set in their definition.
if (!attributes->is_legacy_mode()) {
if (UseAVX > 2 && !attributes->is_evex_instruction() && !is_managed()) {
- if ((attributes->get_vector_len() != AVX_512bit) && (nds_enc < 16) && (xreg_enc < 16)) {
+ if ((attributes->get_vector_len() != AVX_512bit) && !is_extended) {
attributes->set_is_legacy_mode();
}
}
@@ -11746,7 +11842,7 @@ void Assembler::vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix
(attributes->get_vector_len() == AVX_512bit) ||
(!_legacy_mode_vl) ||
(attributes->is_legacy_mode())),"XMM register should be 0-15");
- assert(((nds_enc < 16 && xreg_enc < 16) || (!attributes->is_legacy_mode())),"XMM register should be 0-15");
+ assert((!is_extended || (!attributes->is_legacy_mode())),"XMM register should be 0-15");
}
clear_managed();
@@ -11760,8 +11856,10 @@ void Assembler::vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix
} else {
evex_v = (nds_enc >= 16);
}
+ bool eevex_x = adr.index_needs_rex2();
+ bool eevex_b = adr.base_needs_rex2();
attributes->set_is_evex_instruction();
- evex_prefix(vex_r, vex_b, vex_x, evex_r, evex_v, nds_enc, pre, opc);
+ evex_prefix(vex_r, vex_b, vex_x, evex_r, eevex_b, evex_v, eevex_x, nds_enc, pre, opc);
} else {
if (UseAVX > 2 && attributes->is_rex_vex_w_reverted()) {
attributes->set_rex_vex_w(false);
@@ -11770,7 +11868,11 @@ void Assembler::vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix
}
}
-int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexSimdPrefix pre, VexOpcode opc, InstructionAttr *attributes) {
+int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexSimdPrefix pre, VexOpcode opc, InstructionAttr *attributes, bool src_is_gpr) {
+ if (src_is_gpr && src_enc >= 16) {
+ assert(UseAPX, "APX features not enabled");
+ }
+ bool is_extended = dst_enc >= 16 || nds_enc >= 16 || src_enc >= 16;
bool vex_r = (dst_enc & 8) == 8;
bool vex_b = (src_enc & 8) == 8;
bool vex_x = false;
@@ -11782,7 +11884,7 @@ int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexS
if (!attributes->is_legacy_mode()) {
if (UseAVX > 2 && !attributes->is_evex_instruction() && !is_managed()) {
if ((!attributes->uses_vl() || (attributes->get_vector_len() != AVX_512bit)) &&
- (dst_enc < 16) && (nds_enc < 16) && (src_enc < 16)) {
+ !is_extended) {
attributes->set_is_legacy_mode();
}
}
@@ -11799,7 +11901,7 @@ int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexS
(!_legacy_mode_vl) ||
(attributes->is_legacy_mode())),"XMM register should be 0-15");
// Instruction with legacy_mode true should have dst, nds and src < 16
- assert(((dst_enc < 16 && nds_enc < 16 && src_enc < 16) || (!attributes->is_legacy_mode())),"XMM register should be 0-15");
+ assert(((!is_extended) || (!attributes->is_legacy_mode())),"XMM register should be 0-15");
}
clear_managed();
@@ -11807,10 +11909,11 @@ int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexS
{
bool evex_r = (dst_enc >= 16);
bool evex_v = (nds_enc >= 16);
+ bool evex_b = (src_enc >= 16) && src_is_gpr;
// can use vex_x as bank extender on rm encoding
- vex_x = (src_enc >= 16);
+ vex_x = (src_enc >= 16) && !src_is_gpr;
attributes->set_is_evex_instruction();
- evex_prefix(vex_r, vex_b, vex_x, evex_r, evex_v, nds_enc, pre, opc);
+ evex_prefix(vex_r, vex_b, vex_x, evex_r, evex_b, evex_v, false /*eevex_x*/, nds_enc, pre, opc);
} else {
if (UseAVX > 2 && attributes->is_rex_vex_w_reverted()) {
attributes->set_rex_vex_w(false);
@@ -11822,7 +11925,6 @@ int Assembler::vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc, VexS
return (((dst_enc & 7) << 3) | (src_enc & 7));
}
-
void Assembler::simd_prefix(XMMRegister xreg, XMMRegister nds, Address adr, VexSimdPrefix pre,
VexOpcode opc, InstructionAttr *attributes) {
if (UseAVX > 0) {
@@ -11836,12 +11938,12 @@ void Assembler::simd_prefix(XMMRegister xreg, XMMRegister nds, Address adr, VexS
}
int Assembler::simd_prefix_and_encode(XMMRegister dst, XMMRegister nds, XMMRegister src, VexSimdPrefix pre,
- VexOpcode opc, InstructionAttr *attributes) {
+ VexOpcode opc, InstructionAttr *attributes, bool src_is_gpr) {
int dst_enc = dst->encoding();
int src_enc = src->encoding();
if (UseAVX > 0) {
int nds_enc = nds->is_valid() ? nds->encoding() : 0;
- return vex_prefix_and_encode(dst_enc, nds_enc, src_enc, pre, opc, attributes);
+ return vex_prefix_and_encode(dst_enc, nds_enc, src_enc, pre, opc, attributes, src_is_gpr);
} else {
assert((nds == dst) || (nds == src) || (nds == xnoreg), "wrong sse encoding");
return rex_prefix_and_encode(dst_enc, src_enc, pre, opc, attributes->is_rex_vex_w());
@@ -12336,50 +12438,51 @@ void Assembler::evpblendmq (XMMRegister dst, KRegister mask, XMMRegister nds, XM
void Assembler::bzhiq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::bzhil(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::pextl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::pdepl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::pextq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::pdepq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF5, (0xC0 | encode));
}
void Assembler::pextl(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF5);
emit_operand(dst, src2, 0);
@@ -12388,7 +12491,8 @@ void Assembler::pextl(Register dst, Register src1, Address src2) {
void Assembler::pdepl(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF5);
emit_operand(dst, src2, 0);
@@ -12397,7 +12501,8 @@ void Assembler::pdepl(Register dst, Register src1, Address src2) {
void Assembler::pextq(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF5);
emit_operand(dst, src2, 0);
@@ -12406,7 +12511,8 @@ void Assembler::pextq(Register dst, Register src1, Address src2) {
void Assembler::pdepq(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF5);
emit_operand(dst, src2, 0);
@@ -12414,15 +12520,16 @@ void Assembler::pdepq(Register dst, Register src1, Address src2) {
void Assembler::sarxl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::sarxl(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
@@ -12430,15 +12537,16 @@ void Assembler::sarxl(Register dst, Address src1, Register src2) {
void Assembler::sarxq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::sarxq(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
@@ -12446,15 +12554,16 @@ void Assembler::sarxq(Register dst, Address src1, Register src2) {
void Assembler::shlxl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::shlxl(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
@@ -12462,15 +12571,16 @@ void Assembler::shlxl(Register dst, Address src1, Register src2) {
void Assembler::shlxq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::shlxq(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
@@ -12478,15 +12588,16 @@ void Assembler::shlxq(Register dst, Address src1, Register src2) {
void Assembler::shrxl(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::shrxl(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
@@ -12494,15 +12605,16 @@ void Assembler::shrxl(Register dst, Address src1, Register src2) {
void Assembler::shrxq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
- int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ int encode = vex_prefix_and_encode(dst->encoding(), src2->encoding(), src1->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF7, (0xC0 | encode));
}
void Assembler::shrxq(Register dst, Address src1, Register src2) {
assert(VM_Version::supports_bmi2(), "");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ true);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ true);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src1, src2->encoding(), dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF7);
emit_operand(dst, src1, 0);
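
All of the BMI/BMI2 rewrites in this stretch follow one recipe: flip legacy_mode from true to false so the instruction may be re-encoded as (extended) EVEX when an operand needs the APX register file, and give the memory forms explicit EVEX address attributes. A deliberately simplified sketch of the decision this feeds (mirroring the vex_prefix_and_encode() logic above, with stand-in names):

// Simplified; the real check also considers UseAVX, the vector length,
// and instructions pinned to EVEX.
bool encode_as_evex(bool attr_legacy_mode, bool is_extended) {
  if (attr_legacy_mode) return false;  // stays VEX, asserts no r16-r31
  return is_extended;                  // promote to EVEX only when needed
}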
@@ -12806,13 +12918,48 @@ void Assembler::emit_data64(jlong data,
emit_int64(data);
}
+int Assembler::get_base_prefix_bits(int enc) {
+ int bits = 0;
+ if (enc & 16) bits |= REX2BIT_B4;
+ if (enc & 8) bits |= REXBIT_B;
+ return bits;
+}
+
+int Assembler::get_index_prefix_bits(int enc) {
+ int bits = 0;
+ if (enc & 16) bits |= REX2BIT_X4;
+ if (enc & 8) bits |= REXBIT_X;
+ return bits;
+}
+
+int Assembler::get_base_prefix_bits(Register base) {
+ return base->is_valid() ? get_base_prefix_bits(base->encoding()) : 0;
+}
+
+int Assembler::get_index_prefix_bits(Register index) {
+ return index->is_valid() ? get_index_prefix_bits(index->encoding()) : 0;
+}
+
+int Assembler::get_reg_prefix_bits(int enc) {
+ int bits = 0;
+ if (enc & 16) bits |= REX2BIT_R4;
+ if (enc & 8) bits |= REXBIT_R;
+ return bits;
+}
+
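+
The helpers above carry the whole REX2 scheme: bit 3 of a register encoding maps to the operand's classic REX bit, bit 4 to the new REX2 "4th" extension bit. A quick self-check, with the constant values restated as assumptions:

#include <cassert>

constexpr int REXBIT_B = 0x01, REX2BIT_B4 = 0x10;

int base_prefix_bits(int enc) {
  int bits = 0;
  if (enc & 16) bits |= REX2BIT_B4;
  if (enc & 8)  bits |= REXBIT_B;
  return bits;
}

int main() {
  assert(base_prefix_bits(3)  == 0);                        // rbx: no bits
  assert(base_prefix_bits(9)  == REXBIT_B);                 // r9: REX era
  assert(base_prefix_bits(17) == REX2BIT_B4);               // r17: APX only
  assert(base_prefix_bits(24) == (REX2BIT_B4 | REXBIT_B));  // r24
  return 0;
}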
void Assembler::prefix(Register reg) {
- if (reg->encoding() >= 8) {
+ if (reg->encoding() >= 16) {
+ prefix16(WREX2 | get_base_prefix_bits(reg->encoding()));
+ } else if (reg->encoding() >= 8) {
prefix(REX_B);
}
}
void Assembler::prefix(Register dst, Register src, Prefix p) {
+ if ((p & WREX2) || src->encoding() >= 16 || dst->encoding() >= 16) {
+ prefix_rex2(dst, src);
+ return;
+ }
if (src->encoding() >= 8) {
p = (Prefix)(p | REX_B);
}
@@ -12825,7 +12972,17 @@ void Assembler::prefix(Register dst, Register src, Prefix p) {
}
}
+void Assembler::prefix_rex2(Register dst, Register src) {
+ int bits = 0;
+ bits |= get_base_prefix_bits(src->encoding());
+ bits |= get_reg_prefix_bits(dst->encoding());
+ prefix16(WREX2 | bits);
+}
+
void Assembler::prefix(Register dst, Address adr, Prefix p) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2() || dst->encoding() >= 16) {
+ prefix_rex2(dst, adr);
+ return;
+ }
if (adr.base_needs_rex()) {
if (adr.index_needs_rex()) {
assert(false, "prefix(Register dst, Address adr, Prefix p) does not support handling of an X");
@@ -12846,7 +13003,19 @@ void Assembler::prefix(Register dst, Address adr, Prefix p) {
}
}
-void Assembler::prefix(Address adr) {
+void Assembler::prefix_rex2(Register dst, Address adr) {
+ assert(!adr.index_needs_rex2(), "prefix(Register dst, Address adr) does not support handling of an X");
+ int bits = 0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_reg_prefix_bits(dst->encoding());
+ prefix16(WREX2 | bits);
+}
+
+void Assembler::prefix(Address adr, bool is_map1) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ prefix_rex2(adr, is_map1);
+ return;
+ }
if (adr.base_needs_rex()) {
if (adr.index_needs_rex()) {
prefix(REX_XB);
@@ -12858,9 +13027,21 @@ void Assembler::prefix(Address adr) {
prefix(REX_X);
}
}
+ if (is_map1) emit_int8(0x0F);
+}
+
+void Assembler::prefix_rex2(Address adr, bool is_map1) {
+ int bits = is_map1 ? REX2BIT_M0 : 0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ prefix16(WREX2 | bits);
}
-void Assembler::prefix(Address adr, Register reg, bool byteinst) {
+void Assembler::prefix(Address adr, Register reg, bool byteinst, bool is_map1) {
+ if (reg->encoding() >= 16 || adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ prefix_rex2(adr, reg, byteinst, is_map1);
+ return;
+ }
if (reg->encoding() < 8) {
if (adr.base_needs_rex()) {
if (adr.index_needs_rex()) {
@@ -12890,9 +13071,22 @@ void Assembler::prefix(Address adr, Register reg, bool byteinst) {
}
}
}
+ if (is_map1) emit_int8(0x0F);
+}
+
+void Assembler::prefix_rex2(Address adr, Register reg, bool byteinst, bool is_map1) {
+ int bits = is_map1 ? REX2BIT_M0 : 0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ bits |= get_reg_prefix_bits(reg->encoding());
+ prefix16(WREX2 | bits);
}
void Assembler::prefix(Address adr, XMMRegister reg) {
+ if (reg->encoding() >= 16 || adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ prefix_rex2(adr, reg);
+ return;
+ }
if (reg->encoding() < 8) {
if (adr.base_needs_rex()) {
if (adr.index_needs_rex()) {
@@ -12922,17 +13116,37 @@ void Assembler::prefix(Address adr, XMMRegister reg) {
}
}
-int Assembler::prefix_and_encode(int reg_enc, bool byteinst) {
+void Assembler::prefix_rex2(Address adr, XMMRegister src) {
+ int bits = 0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ bits |= get_reg_prefix_bits(src->encoding());
+ prefix16(WREX2 | bits);
+}
+
+int Assembler::prefix_and_encode(int reg_enc, bool byteinst, bool is_map1) {
+ if (reg_enc >= 16) {
+ return prefix_and_encode_rex2(reg_enc, is_map1);
+ }
if (reg_enc >= 8) {
prefix(REX_B);
reg_enc -= 8;
} else if (byteinst && reg_enc >= 4) {
prefix(REX);
}
- return reg_enc;
+ int opc_prefix = is_map1 ? 0x0F00 : 0;
+ return opc_prefix | reg_enc;
}
-int Assembler::prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte) {
+int Assembler::prefix_and_encode_rex2(int reg_enc, bool is_map1) {
+ prefix16(WREX2 | (is_map1 ? REX2BIT_M0 : 0) | get_base_prefix_bits(reg_enc));
+ return reg_enc & 0x7;
+}
+
+int Assembler::prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte, bool is_map1) {
+ if (src_enc >= 16 || dst_enc >= 16) {
+ return prefix_and_encode_rex2(dst_enc, src_enc, is_map1 ? REX2BIT_M0 : 0);
+ }
if (dst_enc < 8) {
if (src_enc >= 8) {
prefix(REX_B);
@@ -12949,16 +13163,46 @@ int Assembler::prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, boo
}
dst_enc -= 8;
}
+ int opcode_prefix = is_map1 ? 0x0F00 : 0;
+ return opcode_prefix | (dst_enc << 3 | src_enc);
+}
+
+int Assembler::prefix_and_encode_rex2(int dst_enc, int src_enc, int init_bits) {
+ int bits = init_bits;
+ bits |= get_reg_prefix_bits(dst_enc);
+ bits |= get_base_prefix_bits(src_enc);
+ dst_enc &= 0x7;
+ src_enc &= 0x7;
+ prefix16(WREX2 | bits);
return dst_enc << 3 | src_enc;
}
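
Once every extension bit has moved into the REX2 payload, the ModRM byte only ever holds the low three bits of each register encoding, which is why prefix_and_encode_rex2() can mask with 0x7 and return a plain dst << 3 | src. A sketch under the same assumed constants:

#include <cassert>
#include <cstdint>

constexpr int REXBIT_B = 0x01, REXBIT_R = 0x04;
constexpr int REX2BIT_B4 = 0x10, REX2BIT_R4 = 0x40;

struct Enc { uint8_t payload_bits; uint8_t modrm_bits; };

Enc encode_rex2(int dst_enc, int src_enc) {
  uint8_t bits = 0;
  if (dst_enc & 8)  bits |= REXBIT_R;    // reg-field extensions
  if (dst_enc & 16) bits |= REX2BIT_R4;
  if (src_enc & 8)  bits |= REXBIT_B;    // rm-field extensions
  if (src_enc & 16) bits |= REX2BIT_B4;
  return { bits, uint8_t(((dst_enc & 7) << 3) | (src_enc & 7)) };
}

int main() {
  Enc e = encode_rex2(/*dst*/ 26, /*src*/ 9);
  assert(e.payload_bits == (REX2BIT_R4 | REXBIT_R | REXBIT_B));
  assert(e.modrm_bits == 0x11);  // only the low 3 bits of 26 and 9 remain
  return 0;
}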
-int8_t Assembler::get_prefixq(Address adr) {
+bool Assembler::prefix_is_rex2(int prefix) {
+ return (prefix & 0xFF00) == WREX2;
+}
+
+int Assembler::get_prefixq(Address adr, bool is_map1) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ return get_prefixq_rex2(adr, is_map1);
+ }
int8_t prfx = get_prefixq(adr, rax);
assert(REX_W <= prfx && prfx <= REX_WXB, "must be");
- return prfx;
+ return is_map1 ? (((int16_t)prfx) << 8) | 0x0F : (int16_t)prfx;
+}
+
+int Assembler::get_prefixq_rex2(Address adr, bool is_map1) {
+ assert(UseAPX, "APX features not enabled");
+ int bits = REXBIT_W;
+ if (is_map1) bits |= REX2BIT_M0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ return WREX2 | bits;
}
-int8_t Assembler::get_prefixq(Address adr, Register src) {
+int Assembler::get_prefixq(Address adr, Register src, bool is_map1) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2() || src->encoding() >= 16) {
+ return get_prefixq_rex2(adr, src, is_map1);
+ }
int8_t prfx = (int8_t)(REX_W +
((int)adr.base_needs_rex()) +
((int)adr.index_needs_rex() << 1) +
@@ -12994,18 +13238,42 @@ int8_t Assembler::get_prefixq(Address adr, Register src) {
}
}
#endif
- return prfx;
+ return is_map1 ? (((int16_t)prfx) << 8) | 0x0F : (int16_t)prfx;
+}
+
+int Assembler::get_prefixq_rex2(Address adr, Register src, bool is_map1) {
+ assert(UseAPX, "APX features not enabled");
+ int bits = REXBIT_W;
+ if (is_map1) bits |= REX2BIT_M0;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ bits |= get_reg_prefix_bits(src->encoding());
+ return WREX2 | bits;
}
void Assembler::prefixq(Address adr) {
- emit_int8(get_prefixq(adr));
+ if (adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ prefix16(get_prefixq_rex2(adr));
+ } else {
+ emit_int8(get_prefixq(adr));
+ }
}
-void Assembler::prefixq(Address adr, Register src) {
- emit_int8(get_prefixq(adr, src));
+void Assembler::prefixq(Address adr, Register src, bool is_map1) {
+ if (adr.base_needs_rex2() || adr.index_needs_rex2() || src->encoding() >= 16) {
+ prefix16(get_prefixq_rex2(adr, src, is_map1));
+ } else {
+ emit_int8(get_prefixq(adr, src));
+ if (is_map1) emit_int8(0x0F);
+ }
}
+
void Assembler::prefixq(Address adr, XMMRegister src) {
+ if (src->encoding() >= 16 || adr.base_needs_rex2() || adr.index_needs_rex2()) {
+ prefixq_rex2(adr, src);
+ return;
+ }
if (src->encoding() < 8) {
if (adr.base_needs_rex()) {
if (adr.index_needs_rex()) {
@@ -13037,17 +13305,38 @@ void Assembler::prefixq(Address adr, XMMRegister src) {
}
}
-int Assembler::prefixq_and_encode(int reg_enc) {
+void Assembler::prefixq_rex2(Address adr, XMMRegister src) {
+ int bits = REXBIT_W;
+ bits |= get_base_prefix_bits(adr.base());
+ bits |= get_index_prefix_bits(adr.index());
+ bits |= get_reg_prefix_bits(src->encoding());
+ prefix16(WREX2 | bits);
+}
+
+int Assembler::prefixq_and_encode(int reg_enc, bool is_map1) {
+ if (reg_enc >= 16) {
+ return prefixq_and_encode_rex2(reg_enc, is_map1);
+ }
if (reg_enc < 8) {
prefix(REX_W);
} else {
prefix(REX_WB);
reg_enc -= 8;
}
- return reg_enc;
+ int opcode_prefix = is_map1 ? 0x0F00 : 0;
+ return opcode_prefix | reg_enc;
}
-int Assembler::prefixq_and_encode(int dst_enc, int src_enc) {
+
+int Assembler::prefixq_and_encode_rex2(int reg_enc, bool is_map1) {
+ prefix16(WREX2 | REXBIT_W | (is_map1 ? REX2BIT_M0 : 0) | get_base_prefix_bits(reg_enc));
+ return reg_enc & 0x7;
+}
+
+int Assembler::prefixq_and_encode(int dst_enc, int src_enc, bool is_map1) {
+ if (dst_enc >= 16 || src_enc >= 16) {
+ return prefixq_and_encode_rex2(dst_enc, src_enc, is_map1);
+ }
if (dst_enc < 8) {
if (src_enc < 8) {
prefix(REX_W);
@@ -13064,7 +13353,22 @@ int Assembler::prefixq_and_encode(int dst_enc, int src_enc) {
}
dst_enc -= 8;
}
- return dst_enc << 3 | src_enc;
+ int opcode_prefix = is_map1 ? 0x0F00 : 0;
+ return opcode_prefix | (dst_enc << 3 | src_enc);
+}
+
+int Assembler::prefixq_and_encode_rex2(int dst_enc, int src_enc, bool is_map1) {
+ int init_bits = REXBIT_W | (is_map1 ? REX2BIT_M0 : 0);
+ return prefix_and_encode_rex2(dst_enc, src_enc, init_bits);
+}
+
+void Assembler::emit_prefix_and_int8(int prefix, int b1) {
+ if ((prefix & 0xFF00) == 0) {
+ emit_int16(prefix, b1);
+ } else {
+ assert((prefix & 0xFF00) != WREX2 || UseAPX, "APX features not enabled");
+ emit_int24((prefix & 0xFF00) >> 8, prefix & 0x00FF, b1);
+ }
}
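
emit_prefix_and_int8() is the consumer of the widened get_prefixq() return value: a bare REX prefix stays one byte, while a REX-plus-0x0F pair or a 0xD5xx REX2 value is emitted high byte first. A sketch of the same flattening, outside HotSpot:

#include <cstdint>
#include <vector>

void emit_prefix_and_opcode(std::vector<uint8_t>& out, int prefix, uint8_t opc) {
  if ((prefix & 0xFF00) == 0) {
    out.push_back(uint8_t(prefix));                  // single REX byte
  } else {
    out.push_back(uint8_t((prefix & 0xFF00) >> 8));  // REX, or 0xD5 for REX2
    out.push_back(uint8_t(prefix & 0x00FF));         // 0x0F escape or payload
  }
  out.push_back(opc);
}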
void Assembler::adcq(Register dst, int32_t imm32) {
@@ -13074,7 +13378,7 @@ void Assembler::adcq(Register dst, int32_t imm32) {
void Assembler::adcq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x13);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x13);
emit_operand(dst, src, 0);
}
@@ -13091,7 +13395,7 @@ void Assembler::addq(Address dst, int32_t imm32) {
void Assembler::addq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), 0x01);
+ emit_prefix_and_int8(get_prefixq(dst, src), 0x01);
emit_operand(src, dst, 0);
}
@@ -13102,7 +13406,7 @@ void Assembler::addq(Register dst, int32_t imm32) {
void Assembler::addq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x03);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x03);
emit_operand(dst, src, 0);
}
@@ -13113,24 +13417,35 @@ void Assembler::addq(Register dst, Register src) {
void Assembler::adcxq(Register dst, Register src) {
//assert(VM_Version::supports_adx(), "adx instructions not supported");
- emit_int8(0x66);
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int32(0x0F,
- 0x38,
- (unsigned char)0xF6,
- (0xC0 | encode));
+ if (needs_rex2(dst, src)) {
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_66, VEX_OPCODE_0F_3C, &attributes, true);
+ emit_int16((unsigned char)0x66, (0xC0 | encode));
+ } else {
+ emit_int8(0x66);
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding());
+ emit_int32(0x0F,
+ 0x38,
+ (unsigned char)0xF6,
+ (0xC0 | encode));
+ }
}
void Assembler::adoxq(Register dst, Register src) {
//assert(VM_Version::supports_adx(), "adx instructions not supported");
- emit_int8((unsigned char)0xF3);
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int32(0x0F,
- 0x38,
- (unsigned char)0xF6,
- (0xC0 | encode));
+ if (needs_rex2(dst, src)) {
+ InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F3, VEX_OPCODE_0F_3C, &attributes, true);
+ emit_int16((unsigned char)0x66, (0xC0 | encode));
+ } else {
+ emit_int8((unsigned char)0xF3);
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding());
+ emit_int32(0x0F,
+ 0x38,
+ (unsigned char)0xF6,
+ (0xC0 | encode));
+ }
}
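
adcxq()/adoxq() show the other strategy used in this patch: REX2 can only reach opcode maps 0 and 1, so map-0F38 instructions like ADCX/ADOX cannot simply gain a REX2 prefix. With an extended GPR they instead switch to the APX-promoted EVEX map-4 form (opcode 0x66, with the 66 vs F3 SIMD-prefix bits telling the two apart). The dispatch shape, sketched with stand-in names:

enum class AdxForm { LegacyMap0F38, EvexMap4 };

inline bool uses_extended_gpr(int dst_enc, int src_enc) {
  return dst_enc >= 16 || src_enc >= 16;  // r16-r31 need APX encodings
}

AdxForm pick_adx_form(int dst_enc, int src_enc) {
  // REX2 cannot express a map-0F38 opcode, so extended operands force
  // the promoted EVEX form; everything else keeps the shorter legacy one.
  return uses_extended_gpr(dst_enc, src_enc) ? AdxForm::EvexMap4
                                             : AdxForm::LegacyMap0F38;
}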
-
void Assembler::andq(Address dst, int32_t imm32) {
InstructionMark im(this);
prefixq(dst);
@@ -13144,7 +13459,7 @@ void Assembler::andq(Register dst, int32_t imm32) {
void Assembler::andq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x23);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x23);
emit_operand(dst, src, 0);
}
@@ -13155,52 +13470,54 @@ void Assembler::andq(Register dst, Register src) {
void Assembler::andq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), 0x21);
+ emit_prefix_and_int8(get_prefixq(dst, src), 0x21);
emit_operand(src, dst, 0);
}
void Assembler::andnq(Register dst, Register src1, Register src2) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), src1->encoding(), src2->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF2, (0xC0 | encode));
}
void Assembler::andnq(Register dst, Register src1, Address src2) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src2, src1->encoding(), dst->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF2);
emit_operand(dst, src2, 0);
}
void Assembler::bsfq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBC, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBC, 0xC0, encode);
}
void Assembler::bsrq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBD, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBD, 0xC0, encode);
}
void Assembler::bswapq(Register reg) {
- int encode = prefixq_and_encode(reg->encoding());
- emit_int16(0x0F, (0xC8 | encode));
+ int encode = prefixq_and_encode(reg->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xC8, encode);
}
void Assembler::blsiq(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rbx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rbx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3, (0xC0 | encode));
}
void Assembler::blsiq(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src, dst->encoding(), rbx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rbx, src, 0);
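The recurring edit in these BMI hunks — `legacy_mode` flipped to `false`, a trailing `true` (the new `src_is_gpr` parameter) on `vex_prefix_and_encode`, and `EVEX_NOSCALE` address attributes on the memory forms — all serves one purpose: a VEX prefix can only name registers 0..15, so an APX extended GPR forces promotion to the extended EVEX form, which `legacy_mode = true` used to forbid. A minimal sketch of the assumed promotion test, mirroring the `needs_eevex` declaration added in the header diff further down (not HotSpot's actual code):

```cpp
#include <cstdio>

// Sketch: does any operand of a VEX-encoded GPR instruction require the
// extended EVEX form? Register encodings 16..31 are the APX extended GPRs.
static bool needs_eevex(int dst_enc, int nds_enc, int src_enc, bool src_is_gpr) {
  return dst_enc >= 16 || nds_enc >= 16 || (src_is_gpr && src_enc >= 16);
}

int main() {
  printf("%d\n", needs_eevex(3, 0, 17, /* src_is_gpr */ true)); // 1: r17 -> EEVEX
  printf("%d\n", needs_eevex(3, 0, 5,  /* src_is_gpr */ true)); // 0: VEX suffices
  return 0;
}
```

The memory forms additionally call `set_address_attributes` because an EVEX encoding compresses 8-bit displacements as disp8*N, so the emitter presumably needs the tuple type and input size to pick N.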
@@ -13208,15 +13525,16 @@ void Assembler::blsiq(Register dst, Address src) {
void Assembler::blsmskq(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rdx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rdx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3, (0xC0 | encode));
}
void Assembler::blsmskq(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src, dst->encoding(), rdx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rdx, src, 0);
@@ -13224,15 +13542,16 @@ void Assembler::blsmskq(Register dst, Address src) {
void Assembler::blsrq(Register dst, Register src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(rcx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(rcx->encoding(), dst->encoding(), src->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF3, (0xC0 | encode));
}
void Assembler::blsrq(Register dst, Address src) {
assert(VM_Version::supports_bmi1(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src, dst->encoding(), rcx->encoding(), VEX_SIMD_NONE, VEX_OPCODE_0F_38, &attributes);
emit_int8((unsigned char)0xF3);
emit_operand(rcx, src, 0);
@@ -13248,8 +13567,8 @@ void Assembler::cdqe() {
void Assembler::clflush(Address adr) {
assert(VM_Version::supports_clflush(), "should do");
- prefix(adr);
- emit_int16(0x0F, (unsigned char)0xAE);
+ prefix(adr, true /* is_map1 */);
+ emit_int8((unsigned char)0xAE);
emit_operand(rdi, adr, 0);
}
@@ -13261,9 +13580,9 @@ void Assembler::clflushopt(Address adr) {
assert(adr.disp() == 0, "displacement should be 0");
// instruction prefix is 0x66
emit_int8(0x66);
- prefix(adr);
+ prefix(adr, true /* is_map1 */);
// opcode family is 0x0F 0xAE
- emit_int16(0x0F, (unsigned char)0xAE);
+ emit_int8((unsigned char)0xAE);
// extended opcode byte is 7 == rdi
emit_operand(rdi, adr, 0);
}
@@ -13276,21 +13595,22 @@ void Assembler::clwb(Address adr) {
assert(adr.disp() == 0, "displacement should be 0");
// instruction prefix is 0x66
emit_int8(0x66);
- prefix(adr);
+ prefix(adr, true /* is_map1 */);
// opcode family is 0x0f 0xAE
- emit_int16(0x0F, (unsigned char)0xAE);
+ emit_int8((unsigned char)0xAE);
// extended opcode byte is 6 == rsi
emit_operand(rsi, adr, 0);
}
void Assembler::cmovq(Condition cc, Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (0x40 | cc), (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((0x40 | cc), 0xC0, encode);
}
void Assembler::cmovq(Condition cc, Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst), 0x0F, (0x40 | cc));
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (0x40 | cc));
emit_operand(dst, src, 0);
}
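`get_prefixq` previously returned a single `int8_t` REX byte that callers folded into `emit_int16`/`emit_int24`; it now returns an `int` wide enough to also carry a two-byte REX2 pair (note `WREX2 = REX2 << 8` in the header diff below), which the new `emit_prefix_and_int8` unpacks into the byte stream. A plausible shape for that helper, as a sketch only:

```cpp
#include <vector>

// Sketch (not the real body): 'prefix' holds either one REX byte or a
// two-byte REX2 pair packed into sixteen bits, per the WREX2 convention.
void emit_prefix_and_int8(std::vector<unsigned char>& out, int prefix, int b1) {
  if ((prefix & 0xFF00) != 0) {
    out.push_back((unsigned char)(prefix >> 8));  // the 0xD5 byte of a REX2 pair
  }
  out.push_back((unsigned char)(prefix & 0xFF));  // REX byte, or REX2 payload
  out.push_back((unsigned char)b1);               // the opcode itself
}
```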
@@ -13307,7 +13627,7 @@ void Assembler::cmpq(Register dst, int32_t imm32) {
void Assembler::cmpq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), 0x39);
+ emit_prefix_and_int8(get_prefixq(dst, src), 0x39);
emit_operand(src, dst, 0);
}
@@ -13318,20 +13638,21 @@ void Assembler::cmpq(Register dst, Register src) {
void Assembler::cmpq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x3B);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x3B);
emit_operand(dst, src, 0);
}
void Assembler::cmpxchgq(Register reg, Address adr) {
InstructionMark im(this);
- emit_int24(get_prefixq(adr, reg), 0x0F, (unsigned char)0xB1);
+ int prefix = get_prefixq(adr, reg, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xB1);
emit_operand(reg, adr, 0);
}
void Assembler::cvtsi2sdq(XMMRegister dst, Register src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, dst, as_XMMRegister(src->encoding()), VEX_SIMD_F2, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x2A, (0xC0 | encode));
}
@@ -13360,7 +13681,9 @@ void Assembler::cvttsd2siq(Register dst, Address src) {
// F2 REX.W 0F 2C /r
// CVTTSD2SI r64, xmm1/m64
InstructionMark im(this);
- emit_int32((unsigned char)0xF2, REX_W, 0x0F, 0x2C);
+ emit_int8((unsigned char)0xF2);
+ prefixq(src, dst, true /* is_map1 */);
+ emit_int8((unsigned char)0x2C);
emit_operand(dst, src, 0);
}
@@ -13402,28 +13725,32 @@ void Assembler::decq(Register dst) {
void Assembler::decq(Address dst) {
// Don't use it directly. Use MacroAssembler::decrementq() instead.
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xFF);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xFF);
emit_operand(rcx, dst, 0);
}
+// can't use REX2
void Assembler::fxrstor(Address src) {
InstructionMark im(this);
emit_int24(get_prefixq(src), 0x0F, (unsigned char)0xAE);
emit_operand(as_Register(1), src, 0);
}
+// can't use REX2
void Assembler::xrstor(Address src) {
InstructionMark im(this);
emit_int24(get_prefixq(src), 0x0F, (unsigned char)0xAE);
emit_operand(as_Register(5), src, 0);
}
+// can't use REX2
void Assembler::fxsave(Address dst) {
InstructionMark im(this);
emit_int24(get_prefixq(dst), 0x0F, (unsigned char)0xAE);
emit_operand(as_Register(0), dst, 0);
}
+// can't use REX2
void Assembler::xsave(Address dst) {
InstructionMark im(this);
emit_int24(get_prefixq(dst), 0x0F, (unsigned char)0xAE);
@@ -13441,8 +13768,8 @@ void Assembler::divq(Register src) {
}
void Assembler::imulq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xAF, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xAF, 0xC0, encode);
}
void Assembler::imulq(Register src) {
@@ -13476,7 +13803,8 @@ void Assembler::imulq(Register dst, Register src, int value) {
void Assembler::imulq(Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst), 0x0F, (unsigned char)0xAF);
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xAF);
emit_operand(dst, src, 0);
}
@@ -13497,7 +13825,7 @@ void Assembler::incq(Register dst) {
void Assembler::incq(Address dst) {
// Don't use it directly. Use MacroAssembler::incrementq() instead.
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xFF);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xFF);
emit_operand(rax, dst, 0);
}
@@ -13507,7 +13835,7 @@ void Assembler::lea(Register dst, Address src) {
void Assembler::leaq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), (unsigned char)0x8D);
+ emit_prefix_and_int8(get_prefixq(src, dst), (unsigned char)0x8D);
emit_operand(dst, src, 0);
}
@@ -13565,16 +13893,16 @@ void Assembler::cmp_narrow_oop(Address src1, int32_t imm32, RelocationHolder con
void Assembler::lzcntq(Register dst, Register src) {
assert(VM_Version::supports_lzcnt(), "encoding is treated as BSR");
emit_int8((unsigned char)0xF3);
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBD, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBD, 0xC0, encode);
}
void Assembler::lzcntq(Register dst, Address src) {
assert(VM_Version::supports_lzcnt(), "encoding is treated as BSR");
InstructionMark im(this);
emit_int8((unsigned char)0xF3);
- prefixq(src, dst);
- emit_int16(0x0F, (unsigned char)0xBD);
+ prefixq(src, dst, true /* is_map1 */);
+ emit_int8((unsigned char)0xBD);
emit_operand(dst, src, 0);
}
@@ -13582,7 +13910,7 @@ void Assembler::movdq(XMMRegister dst, Register src) {
// table D-1 says MMX/SSE2
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(dst, xnoreg, as_XMMRegister(src->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x6E, (0xC0 | encode));
}
@@ -13591,7 +13919,7 @@ void Assembler::movdq(Register dst, XMMRegister src) {
NOT_LP64(assert(VM_Version::supports_sse2(), ""));
InstructionAttr attributes(AVX_128bit, /* rex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
// swap src/dst to get correct prefix
- int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes);
+ int encode = simd_prefix_and_encode(src, xnoreg, as_XMMRegister(dst->encoding()), VEX_SIMD_66, VEX_OPCODE_0F, &attributes, true);
emit_int16(0x7E,
(0xC0 | encode));
}
@@ -13604,19 +13932,19 @@ void Assembler::movq(Register dst, Register src) {
void Assembler::movq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), (unsigned char)0x8B);
+ emit_prefix_and_int8(get_prefixq(src, dst), (unsigned char)0x8B);
emit_operand(dst, src, 0);
}
void Assembler::movq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), (unsigned char)0x89);
+ emit_prefix_and_int8(get_prefixq(dst, src), (unsigned char)0x89);
emit_operand(src, dst, 0);
}
void Assembler::movq(Address dst, int32_t imm32) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xC7);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xC7);
emit_operand(as_Register(0), dst, 4);
emit_int32(imm32);
}
@@ -13629,28 +13957,27 @@ void Assembler::movq(Register dst, int32_t imm32) {
void Assembler::movsbq(Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst),
- 0x0F,
- (unsigned char)0xBE);
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xBE);
emit_operand(dst, src, 0);
}
void Assembler::movsbq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBE, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBE, 0xC0, encode);
}
void Assembler::movslq(Address dst, int32_t imm32) {
assert(is_simm32(imm32), "lost bits");
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xC7);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xC7);
emit_operand(rax, dst, 4);
emit_int32(imm32);
}
void Assembler::movslq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x63);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x63);
emit_operand(dst, src, 0);
}
@@ -13661,46 +13988,43 @@ void Assembler::movslq(Register dst, Register src) {
void Assembler::movswq(Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst),
- 0x0F,
- (unsigned char)0xBF);
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xBF);
emit_operand(dst, src, 0);
}
void Assembler::movswq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xBF, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBF, 0xC0, encode);
}
void Assembler::movzbq(Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst),
- 0x0F,
- (unsigned char)0xB6);
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xB6);
emit_operand(dst, src, 0);
}
void Assembler::movzbq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xB6, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB6, 0xC0, encode);
}
void Assembler::movzwq(Register dst, Address src) {
InstructionMark im(this);
- emit_int24(get_prefixq(src, dst),
- 0x0F,
- (unsigned char)0xB7);
+ int prefix = get_prefixq(src, dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xB7);
emit_operand(dst, src, 0);
}
void Assembler::movzwq(Register dst, Register src) {
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xB7, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB7, 0xC0, encode);
}
void Assembler::mulq(Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src), (unsigned char)0xF7);
+ emit_prefix_and_int8(get_prefixq(src), (unsigned char)0xF7);
emit_operand(rsp, src, 0);
}
@@ -13711,8 +14035,8 @@ void Assembler::mulq(Register src) {
void Assembler::mulxq(Register dst1, Register dst2, Register src) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst1->encoding(), dst2->encoding(), src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst1->encoding(), dst2->encoding(), src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_38, &attributes, true);
emit_int16((unsigned char)0xF6, (0xC0 | encode));
}
@@ -13723,7 +14047,7 @@ void Assembler::negq(Register dst) {
void Assembler::negq(Address dst) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xF7);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xF7);
emit_operand(as_Register(3), dst, 0);
}
@@ -13733,24 +14057,22 @@ void Assembler::notq(Register dst) {
}
void Assembler::btq(Register dst, Register src) {
- int encode = prefixq_and_encode(src->encoding(), dst->encoding());
- emit_int24(0x0F, (unsigned char)0xA3, (encode | 0xC0));
+ int encode = prefixq_and_encode(src->encoding(), dst->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xA3, 0xC0, encode);
}
void Assembler::btq(Register src, int imm8) {
assert(isByte(imm8), "not a byte");
- int encode = prefixq_and_encode(src->encoding());
- emit_int16(0x0f, 0xba);
- emit_int8(0xe0|encode);
+ int encode = prefixq_and_encode(src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xBA, 0xE0, encode);
emit_int8(imm8);
}
void Assembler::btsq(Address dst, int imm8) {
assert(isByte(imm8), "not a byte");
InstructionMark im(this);
- emit_int24(get_prefixq(dst),
- 0x0F,
- (unsigned char)0xBA);
+ int prefix = get_prefixq(dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xBA);
emit_operand(rbp /* 5 */, dst, 1);
emit_int8(imm8);
}
@@ -13758,9 +14080,8 @@ void Assembler::btsq(Address dst, int imm8) {
void Assembler::btrq(Address dst, int imm8) {
assert(isByte(imm8), "not a byte");
InstructionMark im(this);
- emit_int24(get_prefixq(dst),
- 0x0F,
- (unsigned char)0xBA);
+ int prefix = get_prefixq(dst, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xBA);
emit_operand(rsi /* 6 */, dst, 1);
emit_int8(imm8);
}
@@ -13773,7 +14094,7 @@ void Assembler::orq(Address dst, int32_t imm32) {
void Assembler::orq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), (unsigned char)0x09);
+ emit_prefix_and_int8(get_prefixq(dst, src), (unsigned char)0x09);
emit_operand(src, dst, 0);
}
@@ -13789,7 +14110,7 @@ void Assembler::orq_imm32(Register dst, int32_t imm32) {
void Assembler::orq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x0B);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x0B);
emit_operand(dst, src, 0);
}
@@ -13801,28 +14122,27 @@ void Assembler::orq(Register dst, Register src) {
void Assembler::popcntq(Register dst, Address src) {
assert(VM_Version::supports_popcnt(), "must support");
InstructionMark im(this);
- emit_int32((unsigned char)0xF3,
- get_prefixq(src, dst),
- 0x0F,
- (unsigned char)0xB8);
+ emit_int8((unsigned char)0xF3);
+ emit_prefix_and_int8(get_prefixq(src, dst, true /* is_map1 */), (unsigned char) 0xB8);
emit_operand(dst, src, 0);
}
void Assembler::popcntq(Register dst, Register src) {
assert(VM_Version::supports_popcnt(), "must support");
emit_int8((unsigned char)0xF3);
- int encode = prefixq_and_encode(dst->encoding(), src->encoding());
- emit_int24(0x0F, (unsigned char)0xB8, (0xC0 | encode));
+ int encode = prefixq_and_encode(dst->encoding(), src->encoding(), true /* is_map1 */);
+ emit_opcode_prefix_and_encoding((unsigned char)0xB8, 0xC0, encode);
}
void Assembler::popq(Address dst) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0x8F);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0x8F);
emit_operand(rax, dst, 0);
}
void Assembler::popq(Register dst) {
- emit_int8((unsigned char)0x58 | dst->encoding());
+ int encode = prefix_and_encode(dst->encoding());
+ emit_int8((unsigned char)0x58 | encode);
}
// Precomputable: popa, pusha, vzeroupper
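`popq(Register)` used to OR the register id directly into the one-byte `0x58+rd` opcode, which cannot express ids above 7 (or 15 with a classic REX); going through `prefix_and_encode` lets `pop r16`..`pop r31` pick up a REX2 pair first. A worked byte-level example under the assumed REX2 layout:

```cpp
#include <cstdio>

// Sketch (assumed byte values): "pop r16" needs a REX2 pair in front of the
// classic 0x58+rd opcode. Bit 3 of the register id lands in the classic
// REX.B position (0x01), bit 4 in the new B4 position (0x10).
int main() {
  int enc = 16;                                  // hypothetical: r16
  int payload = ((enc >> 3) & 1)                 // B3, classic REX.B position
              | (((enc >> 4) & 1) ? 0x10 : 0);   // B4, new REX2 bit
  printf("D5 %02X %02X\n", payload, 0x58 | (enc & 7)); // -> D5 10 58
  return 0;
}
```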
@@ -13962,7 +14282,7 @@ void Assembler::vzeroall() {
void Assembler::pushq(Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src), (unsigned char)0xFF);
+ emit_prefix_and_int8(get_prefixq(src), (unsigned char)0xFF);
emit_operand(rsi, src, 0);
}
@@ -13988,15 +14308,16 @@ void Assembler::rcrq(Register dst, int imm8) {
void Assembler::rorxl(Register dst, Register src, int imm8) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24((unsigned char)0xF0, (0xC0 | encode), imm8);
}
void Assembler::rorxl(Register dst, Address src, int imm8) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ false, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_32bit);
vex_prefix(src, 0, dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes);
emit_int8((unsigned char)0xF0);
emit_operand(dst, src, 1);
@@ -14005,15 +14326,16 @@ void Assembler::rorxl(Register dst, Address src, int imm8) {
void Assembler::rorxq(Register dst, Register src, int imm8) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
- int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ int encode = vex_prefix_and_encode(dst->encoding(), 0, src->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes, true);
emit_int24((unsigned char)0xF0, (0xC0 | encode), imm8);
}
void Assembler::rorxq(Register dst, Address src, int imm8) {
assert(VM_Version::supports_bmi2(), "bit manipulation instructions not supported");
InstructionMark im(this);
- InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ true, /* no_mask_reg */ true, /* uses_vl */ false);
+ InstructionAttr attributes(AVX_128bit, /* vex_w */ true, /* legacy_mode */ false, /* no_mask_reg */ true, /* uses_vl */ false);
+ attributes.set_address_attributes(/* tuple_type */ EVEX_NOSCALE, /* input_size_in_bits */ EVEX_64bit);
vex_prefix(src, 0, dst->encoding(), VEX_SIMD_F2, VEX_OPCODE_0F_3A, &attributes);
emit_int8((unsigned char)0xF0);
emit_operand(dst, src, 1);
@@ -14025,11 +14347,11 @@ void Assembler::salq(Address dst, int imm8) {
InstructionMark im(this);
assert(isShiftCount(imm8 >> 1), "illegal shift count");
if (imm8 == 1) {
- emit_int16(get_prefixq(dst), (unsigned char)0xD1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD1);
emit_operand(as_Register(4), dst, 0);
}
else {
- emit_int16(get_prefixq(dst), (unsigned char)0xC1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xC1);
emit_operand(as_Register(4), dst, 1);
emit_int8(imm8);
}
@@ -14037,7 +14359,7 @@ void Assembler::salq(Address dst, int imm8) {
void Assembler::salq(Address dst) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xD3);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD3);
emit_operand(as_Register(4), dst, 0);
}
@@ -14060,11 +14382,11 @@ void Assembler::sarq(Address dst, int imm8) {
InstructionMark im(this);
assert(isShiftCount(imm8 >> 1), "illegal shift count");
if (imm8 == 1) {
- emit_int16(get_prefixq(dst), (unsigned char)0xD1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD1);
emit_operand(as_Register(7), dst, 0);
}
else {
- emit_int16(get_prefixq(dst), (unsigned char)0xC1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xC1);
emit_operand(as_Register(7), dst, 1);
emit_int8(imm8);
}
@@ -14072,7 +14394,7 @@ void Assembler::sarq(Address dst, int imm8) {
void Assembler::sarq(Address dst) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xD3);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD3);
emit_operand(as_Register(7), dst, 0);
}
@@ -14105,7 +14427,7 @@ void Assembler::sbbq(Register dst, int32_t imm32) {
void Assembler::sbbq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x1B);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x1B);
emit_operand(dst, src, 0);
}
@@ -14147,7 +14469,7 @@ void Assembler::shrq(Register dst) {
void Assembler::shrq(Address dst) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xD3);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD3);
emit_operand(as_Register(5), dst, 0);
}
@@ -14155,11 +14477,11 @@ void Assembler::shrq(Address dst, int imm8) {
InstructionMark im(this);
assert(isShiftCount(imm8 >> 1), "illegal shift count");
if (imm8 == 1) {
- emit_int16(get_prefixq(dst), (unsigned char)0xD1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xD1);
emit_operand(as_Register(5), dst, 0);
}
else {
- emit_int16(get_prefixq(dst), (unsigned char)0xC1);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xC1);
emit_operand(as_Register(5), dst, 1);
emit_int8(imm8);
}
@@ -14173,7 +14495,7 @@ void Assembler::subq(Address dst, int32_t imm32) {
void Assembler::subq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), 0x29);
+ emit_prefix_and_int8(get_prefixq(dst, src), 0x29);
emit_operand(src, dst, 0);
}
@@ -14190,7 +14512,7 @@ void Assembler::subq_imm32(Register dst, int32_t imm32) {
void Assembler::subq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x2B);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x2B);
emit_operand(dst, src, 0);
}
@@ -14201,7 +14523,7 @@ void Assembler::subq(Register dst, Register src) {
void Assembler::testq(Address dst, int32_t imm32) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst), (unsigned char)0xF7);
+ emit_prefix_and_int8(get_prefixq(dst), (unsigned char)0xF7);
emit_operand(as_Register(0), dst, 4);
emit_int32(imm32);
}
@@ -14229,19 +14551,20 @@ void Assembler::testq(Register dst, Register src) {
void Assembler::testq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), (unsigned char)0x85);
+ emit_prefix_and_int8(get_prefixq(src, dst), (unsigned char)0x85);
emit_operand(dst, src, 0);
}
void Assembler::xaddq(Address dst, Register src) {
InstructionMark im(this);
- emit_int24(get_prefixq(dst, src), 0x0F, (unsigned char)0xC1);
+ int prefix = get_prefixq(dst, src, true /* is_map1 */);
+ emit_prefix_and_int8(prefix, (unsigned char)0xC1);
emit_operand(src, dst, 0);
}
void Assembler::xchgq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), (unsigned char)0x87);
+ emit_prefix_and_int8(get_prefixq(src, dst), (unsigned char)0x87);
emit_operand(dst, src, 0);
}
@@ -14257,7 +14580,7 @@ void Assembler::xorq(Register dst, Register src) {
void Assembler::xorq(Register dst, Address src) {
InstructionMark im(this);
- emit_int16(get_prefixq(src, dst), 0x33);
+ emit_prefix_and_int8(get_prefixq(src, dst), 0x33);
emit_operand(dst, src, 0);
}
@@ -14274,7 +14597,7 @@ void Assembler::xorq(Address dst, int32_t imm32) {
void Assembler::xorq(Address dst, Register src) {
InstructionMark im(this);
- emit_int16(get_prefixq(dst, src), 0x31);
+ emit_prefix_and_int8(get_prefixq(dst, src), 0x31);
emit_operand(src, dst, 0);
}
diff --git a/src/hotspot/cpu/x86/assembler_x86.hpp b/src/hotspot/cpu/x86/assembler_x86.hpp
index 097bc9de62a39..41a639248286e 100644
--- a/src/hotspot/cpu/x86/assembler_x86.hpp
+++ b/src/hotspot/cpu/x86/assembler_x86.hpp
@@ -308,17 +308,29 @@ class Address {
private:
bool base_needs_rex() const {
- return _base->is_valid() && _base->encoding() >= 8;
+ return _base->is_valid() && ((_base->encoding() & 8) == 8);
+ }
+
+ bool base_needs_rex2() const {
+ return _base->is_valid() && _base->encoding() >= 16;
}
bool index_needs_rex() const {
- return _index->is_valid() &&_index->encoding() >= 8;
+ return _index->is_valid() && ((_index->encoding() & 8) == 8);
+ }
+
+ bool index_needs_rex2() const {
+ return _index->is_valid() && _index->encoding() >= 16;
}
bool xmmindex_needs_rex() const {
return _xmmindex->is_valid() && ((_xmmindex->encoding() & 8) == 8);
}
+ bool xmmindex_needs_rex2() const {
+ return _xmmindex->is_valid() && _xmmindex->encoding() >= 16;
+ }
+
relocInfo::relocType reloc() const { return _rspec.type(); }
friend class Assembler;
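The predicates now distinguish three register tiers. A compile-time restatement of the same checks (sketch):

```cpp
// Sketch of the three encoding tiers the predicates above distinguish:
//   enc 0..7   -> legacy registers, no extension bits
//   enc 8..15  -> bit 3 set, classic REX.B / .X / .R
//   enc 16..31 -> APX extended GPRs, reachable only via REX2 (or EVEX)
constexpr bool needs_rex (int enc) { return (enc & 0x08) != 0; }
constexpr bool needs_rex2(int enc) { return enc >= 16; }

static_assert(!needs_rex(3)  && !needs_rex2(3),  "rbx: no prefix bits");
static_assert( needs_rex(9)  && !needs_rex2(9),  "r9: classic REX");
static_assert(!needs_rex(16) &&  needs_rex2(16), "r16: REX2 required");
```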
@@ -508,12 +520,26 @@ class Assembler : public AbstractAssembler {
REX_WRX = 0x4E,
REX_WRXB = 0x4F,
+ REX2 = 0xd5,
+ WREX2 = REX2 << 8,
+
VEX_3bytes = 0xC4,
VEX_2bytes = 0xC5,
EVEX_4bytes = 0x62,
Prefix_EMPTY = 0x0
};
+ enum PrefixBits {
+ REXBIT_B = 0x01,
+ REXBIT_X = 0x02,
+ REXBIT_R = 0x04,
+ REXBIT_W = 0x08,
+ REX2BIT_B4 = 0x10,
+ REX2BIT_X4 = 0x20,
+ REX2BIT_R4 = 0x40,
+ REX2BIT_M0 = 0x80
+ };
+
enum VexPrefix {
VEX_B = 0x20,
VEX_X = 0x40,
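Read together, these enums imply that REX2 is a fixed two-byte prefix: `0xD5` (the legacy AAD opcode, invalid in 64-bit mode) followed by a payload laid out as `M0 R4 X4 B4 W R3 X3 B3`. A worked example under that assumed layout:

```cpp
#include <cstdio>

// Worked example (assumed layout): "incq r17" = map 0, opcode 0xFF /0,
// W = 1, register encoding 17 = 0b10001 (bit 3 -> B3, bit 4 -> B4).
int main() {
  int enc = 17;                              // hypothetical: r17
  int payload = 0x08                         /* REXBIT_W   */
              | ((enc & 0x08) ? 0x01 : 0)    /* REXBIT_B   */
              | ((enc & 0x10) ? 0x10 : 0);   /* REX2BIT_B4 */
  printf("D5 %02X FF %02X\n", payload, 0xC0 | (0 << 3) | (enc & 7)); // D5 18 FF C1
  return 0;
}
```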
@@ -525,10 +551,18 @@ class Assembler : public AbstractAssembler {
EVEX_F = 0x04,
EVEX_V = 0x08,
EVEX_Rb = 0x10,
+ EVEX_B = 0x20,
EVEX_X = 0x40,
EVEX_Z = 0x80
};
+ enum ExtEvexPrefix {
+ EEVEX_R = 0x10,
+ EEVEX_B = 0x08,
+ EEVEX_X = 0x04,
+ EEVEX_V = 0x08
+ };
+
enum EvexRoundPrefix {
EVEX_RNE = 0x0,
EVEX_RD = 0x1,
@@ -540,7 +574,7 @@ class Assembler : public AbstractAssembler {
VEX_SIMD_NONE = 0x0,
VEX_SIMD_66 = 0x1,
VEX_SIMD_F3 = 0x2,
- VEX_SIMD_F2 = 0x3
+ VEX_SIMD_F2 = 0x3,
};
enum VexOpcode {
@@ -548,6 +582,7 @@ class Assembler : public AbstractAssembler {
VEX_OPCODE_0F = 0x1,
VEX_OPCODE_0F_38 = 0x2,
VEX_OPCODE_0F_3A = 0x3,
+ VEX_OPCODE_0F_3C = 0x4,
VEX_OPCODE_MASK = 0x1F
};
@@ -572,7 +607,8 @@ class Assembler : public AbstractAssembler {
EVEX_OVM = 20,
EVEX_M128 = 21,
EVEX_DUP = 22,
- EVEX_ETUP = 23
+ EVEX_NOSCALE = 23,
+ EVEX_ETUP = 24
};
enum EvexInputSizeInBits {
@@ -686,33 +722,62 @@ class Assembler : public AbstractAssembler {
InstructionAttr *_attributes;
void set_attributes(InstructionAttr* attributes);
+ int get_base_prefix_bits(int enc);
+ int get_index_prefix_bits(int enc);
+ int get_base_prefix_bits(Register base);
+ int get_index_prefix_bits(Register index);
+ int get_reg_prefix_bits(int enc);
+
// 64bit prefixes
void prefix(Register reg);
void prefix(Register dst, Register src, Prefix p);
+ void prefix_rex2(Register dst, Register src);
void prefix(Register dst, Address adr, Prefix p);
-
- void prefix(Address adr);
- void prefix(Address adr, Register reg, bool byteinst = false);
+ void prefix_rex2(Register dst, Address adr);
+
+ // The is_map1 bool indicates an x86 map1 instruction which, when
+ // legacy encoded, uses a 0x0F opcode prefix. By specification, the
+ // opcode prefix is omitted when using rex2 encoding in support
+ // of APX extended GPRs.
+ void prefix(Address adr, bool is_map1 = false);
+ void prefix_rex2(Address adr, bool is_map1 = false);
+ void prefix(Address adr, Register reg, bool byteinst = false, bool is_map1 = false);
+ void prefix_rex2(Address adr, Register reg, bool byteinst = false, bool is_map1 = false);
void prefix(Address adr, XMMRegister reg);
+ void prefix_rex2(Address adr, XMMRegister reg);
- int prefix_and_encode(int reg_enc, bool byteinst = false);
- int prefix_and_encode(int dst_enc, int src_enc) {
- return prefix_and_encode(dst_enc, false, src_enc, false);
+ int prefix_and_encode(int reg_enc, bool byteinst = false, bool is_map1 = false);
+ int prefix_and_encode_rex2(int reg_enc, bool is_map1 = false);
+ int prefix_and_encode(int dst_enc, int src_enc, bool is_map1 = false) {
+ return prefix_and_encode(dst_enc, false, src_enc, false, is_map1);
}
- int prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte);
+ int prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte, bool is_map1 = false);
+ int prefix_and_encode_rex2(int dst_enc, int src_enc, int init_bits = 0);
// Some prefixq variants always emit exactly one prefix byte, so besides a
// prefix-emitting method we provide a method to get the prefix byte to emit,
// which can then be folded into a byte stream.
- int8_t get_prefixq(Address adr);
- int8_t get_prefixq(Address adr, Register reg);
+ int get_prefixq(Address adr, bool is_map1 = false);
+ int get_prefixq_rex2(Address adr, bool is_map1 = false);
+ int get_prefixq(Address adr, Register reg, bool is_map1 = false);
+ int get_prefixq_rex2(Address adr, Register reg, bool is_map1 = false);
void prefixq(Address adr);
- void prefixq(Address adr, Register reg);
+ void prefixq(Address adr, Register reg, bool is_map1 = false);
void prefixq(Address adr, XMMRegister reg);
+ void prefixq_rex2(Address adr, XMMRegister src);
- int prefixq_and_encode(int reg_enc);
- int prefixq_and_encode(int dst_enc, int src_enc);
+ bool prefix_is_rex2(int prefix);
+
+ int prefixq_and_encode(int reg_enc, bool is_map1 = false);
+ int prefixq_and_encode_rex2(int reg_enc, bool is_map1 = false);
+ int prefixq_and_encode(int dst_enc, int src_enc, bool is_map1 = false);
+ int prefixq_and_encode_rex2(int dst_enc, int src_enc, bool is_map1 = false);
+
+ bool needs_rex2(Register reg1, Register reg2 = noreg, Register reg3 = noreg);
+
+ bool needs_eevex(Register reg1, Register reg2 = noreg, Register reg3 = noreg);
+ bool needs_eevex(int enc1, int enc2 = -1, int enc3 = -1);
void rex_prefix(Address adr, XMMRegister xreg,
VexSimdPrefix pre, VexOpcode opc, bool rex_w);
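The comment above fixes the map-1 contract: a legacy encoding reaches opcode map 1 through the `0x0F` escape byte, while a REX2 prefix selects the map through its `REX2BIT_M0` payload bit and must therefore drop the escape. A hypothetical restatement (sketch, not HotSpot code):

```cpp
#include <vector>

// Sketch: the 0x0F escape selects opcode map 1 only under legacy encoding;
// REX2 already carries the map choice in its payload's M0 bit.
void emit_map1_opcode(std::vector<unsigned char>& out,
                      bool used_rex2, unsigned char opcode) {
  if (!used_rex2) {
    out.push_back(0x0F);   // legacy: escape byte selects opcode map 1
  }
  out.push_back(opcode);   // REX2: map already chosen by M0, no escape
}
```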
@@ -721,22 +786,21 @@ class Assembler : public AbstractAssembler {
void vex_prefix(bool vex_r, bool vex_b, bool vex_x, int nds_enc, VexSimdPrefix pre, VexOpcode opc);
- void evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_r, bool evex_v,
- int nds_enc, VexSimdPrefix pre, VexOpcode opc);
+ void evex_prefix(bool vex_r, bool vex_b, bool vex_x, bool evex_v, bool evex_r, bool evex_b,
+ bool eevex_x, int nds_enc, VexSimdPrefix pre, VexOpcode opc);
- void vex_prefix(Address adr, int nds_enc, int xreg_enc,
- VexSimdPrefix pre, VexOpcode opc,
+ void vex_prefix(Address adr, int nds_enc, int xreg_enc, VexSimdPrefix pre, VexOpcode opc,
InstructionAttr *attributes);
int vex_prefix_and_encode(int dst_enc, int nds_enc, int src_enc,
VexSimdPrefix pre, VexOpcode opc,
- InstructionAttr *attributes);
+ InstructionAttr *attributes, bool src_is_gpr = false);
void simd_prefix(XMMRegister xreg, XMMRegister nds, Address adr, VexSimdPrefix pre,
VexOpcode opc, InstructionAttr *attributes);
int simd_prefix_and_encode(XMMRegister dst, XMMRegister nds, XMMRegister src, VexSimdPrefix pre,
- VexOpcode opc, InstructionAttr *attributes);
+ VexOpcode opc, InstructionAttr *attributes, bool src_is_gpr = false);
// Helper functions for groups of instructions
void emit_arith_b(int op1, int op2, Register dst, int imm8);
@@ -821,6 +885,10 @@ class Assembler : public AbstractAssembler {
void emit_data64(jlong data, relocInfo::relocType rtype, int format = 0);
void emit_data64(jlong data, RelocationHolder const& rspec, int format = 0);
+ void emit_prefix_and_int8(int prefix, int b1);
+ void emit_opcode_prefix_and_encoding(int byte1, int ocp_and_encoding);
+ void emit_opcode_prefix_and_encoding(int byte1, int byte2, int ocp_and_encoding);
+ void emit_opcode_prefix_and_encoding(int byte1, int byte2, int ocp_and_encoding, int byte3);
bool always_reachable(AddressLiteral adr) NOT_LP64( { return true; } );
bool reachable(AddressLiteral adr) NOT_LP64( { return true; } );
@@ -907,6 +975,8 @@ class Assembler : public AbstractAssembler {
// Instruction prefixes
void prefix(Prefix p);
+ void prefix16(int p);
+
public:
// Creation
@@ -1730,7 +1800,7 @@ class Assembler : public AbstractAssembler {
void negq(Address dst);
#endif
- void nop(int i = 1);
+ void nop(uint i = 1);
void notl(Register dst);
diff --git a/src/hotspot/cpu/x86/assembler_x86.inline.hpp b/src/hotspot/cpu/x86/assembler_x86.inline.hpp
index 66aebb32e21f7..d2aa76fcf5650 100644
--- a/src/hotspot/cpu/x86/assembler_x86.inline.hpp
+++ b/src/hotspot/cpu/x86/assembler_x86.inline.hpp
@@ -30,23 +30,53 @@
#include "code/codeCache.hpp"
#ifndef _LP64
-inline int Assembler::prefix_and_encode(int reg_enc, bool byteinst) { return reg_enc; }
-inline int Assembler::prefixq_and_encode(int reg_enc) { return reg_enc; }
+inline int Assembler::prefix_and_encode(int reg_enc, bool byteinst, bool is_map1)
+{
+ int opc_prefix = is_map1 ? 0x0F00 : 0;
+ return opc_prefix | reg_enc;
+}
-inline int Assembler::prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte) { return dst_enc << 3 | src_enc; }
-inline int Assembler::prefixq_and_encode(int dst_enc, int src_enc) { return dst_enc << 3 | src_enc; }
+inline int Assembler::prefixq_and_encode(int reg_enc, bool is_map1) {
+ int opc_prefix = is_map1 ? 0x0F00 : 0;
+ return opc_prefix | reg_enc;
+}
+
+inline int Assembler::prefix_and_encode(int dst_enc, bool dst_is_byte, int src_enc, bool src_is_byte, bool is_map1) {
+ int opc_prefix = is_map1 ? 0x0F00 : 0;
+ return opc_prefix | (dst_enc << 3 | src_enc);
+}
+
+inline int Assembler::prefixq_and_encode(int dst_enc, int src_enc, bool is_map1) {
+ int opc_prefix = is_map1 ? 0x0F00 : 0;
+ return opc_prefix | (dst_enc << 3 | src_enc);
+}
inline void Assembler::prefix(Register reg) {}
inline void Assembler::prefix(Register dst, Register src, Prefix p) {}
inline void Assembler::prefix(Register dst, Address adr, Prefix p) {}
-inline void Assembler::prefix(Address adr) {}
+
+inline void Assembler::prefix(Address adr, bool is_map1) {
+ if (is_map1) {
+ emit_int8(0x0F);
+ }
+}
+
inline void Assembler::prefixq(Address adr) {}
-inline void Assembler::prefix(Address adr, Register reg, bool byteinst) {}
-inline void Assembler::prefixq(Address adr, Register reg) {}
+inline void Assembler::prefix(Address adr, Register reg, bool byteinst, bool is_map1) {
+ if (is_map1) {
+ emit_int8(0x0F);
+ }
+}
+inline void Assembler::prefixq(Address adr, Register reg, bool is_map1) {
+ if (is_map1) {
+ emit_int8(0x0F);
+ }
+}
inline void Assembler::prefix(Address adr, XMMRegister reg) {}
inline void Assembler::prefixq(Address adr, XMMRegister reg) {}
+
#endif // _LP64
#endif // CPU_X86_ASSEMBLER_X86_INLINE_HPP
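On 32-bit targets the stubs above simply tag the pending `0x0F` escape into bits 8..15 of the returned encoding, so the shared emitters can strip it uniformly. A sketch of how `emit_opcode_prefix_and_encoding` presumably consumes such a tagged value (assumed, not the real body):

```cpp
#include <vector>

// Sketch: bits 0..7 of ocp_and_encoding carry the ModRM low bits, bits 8..15
// a pending 0x0F escape when the legacy map-1 encoding was chosen (zero under
// REX2, where the map choice lives in the prefix payload instead).
void emit_opcode_prefix_and_encoding(std::vector<unsigned char>& out,
                                     int opcode, int modrm_base,
                                     int ocp_and_encoding) {
  int opc_prefix = (ocp_and_encoding >> 8) & 0xFF;
  if (opc_prefix != 0) {
    out.push_back((unsigned char)opc_prefix);     // 0x0F escape, legacy only
  }
  out.push_back((unsigned char)opcode);           // e.g. 0xBC for bsf
  out.push_back((unsigned char)(modrm_base | (ocp_and_encoding & 0xFF)));
}
```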
diff --git a/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp b/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp
index c279e3073af87..978708d03e66b 100644
--- a/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c1_LIRAssembler_x86.cpp
@@ -1621,7 +1621,8 @@ void LIR_Assembler::emit_alloc_array(LIR_OpAllocArray* op) {
arrayOopDesc::base_offset_in_bytes(op->type()),
array_element_size(op->type()),
op->klass()->as_register(),
- *op->stub()->entry());
+ *op->stub()->entry(),
+ op->zero_array());
}
__ bind(*op->stub()->continuation());
}
@@ -3453,7 +3454,9 @@ void LIR_Assembler::emit_arraycopy(LIR_OpArrayCopy* op) {
address entry = StubRoutines::select_arraycopy_function(basic_type, aligned, disjoint, name, false);
__ call_VM_leaf(entry, 0);
- __ bind(*stub->continuation());
+ if (stub != nullptr) {
+ __ bind(*stub->continuation());
+ }
}
void LIR_Assembler::emit_updatecrc32(LIR_OpUpdateCRC32* op) {
diff --git a/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp b/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
index 7088cf33cf646..d3add6975b4f2 100644
--- a/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
+++ b/src/hotspot/cpu/x86/c1_LIRGenerator_x86.cpp
@@ -1004,7 +1004,13 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
assert(x->number_of_arguments() == 5, "wrong type");
// Make all state_for calls early since they can emit code
- CodeEmitInfo* info = state_for(x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem src(x->argument_at(0), this);
LIRItem src_pos(x->argument_at(1), this);
@@ -1016,6 +1022,13 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
// LinearScan will fail allocation (because arraycopy always needs a
// call)
+ int flags;
+ ciArrayKlass* expected_type;
+ arraycopy_helper(x, &flags, &expected_type);
+ if (x->check_flag(Instruction::OmitChecksFlag)) {
+ flags = 0;
+ }
+
#ifndef _LP64
src.load_item_force (FrameMap::rcx_oop_opr);
src_pos.load_item_force (FrameMap::rdx_opr);
@@ -1023,6 +1036,11 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
dst_pos.load_item_force (FrameMap::rbx_opr);
length.load_item_force (FrameMap::rdi_opr);
LIR_Opr tmp = (FrameMap::rsi_opr);
+
+ if (expected_type != nullptr && flags == 0) {
+ FrameMap* f = Compilation::current()->frame_map();
+ f->update_reserved_argument_area_size(3 * BytesPerWord);
+ }
#else
// The java calling convention will give us enough registers
@@ -1044,10 +1062,6 @@ void LIRGenerator::do_ArrayCopy(Intrinsic* x) {
set_no_result(x);
- int flags;
- ciArrayKlass* expected_type;
- arraycopy_helper(x, &flags, &expected_type);
-
__ arraycopy(src.result(), src_pos.result(), dst.result(), dst_pos.result(), length.result(), tmp, expected_type, flags, info); // does add_safepoint
}
@@ -1310,7 +1324,13 @@ void LIRGenerator::do_NewInstance(NewInstance* x) {
void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
- CodeEmitInfo* info = state_for(x, x->state());
+ CodeEmitInfo* info = nullptr;
+ if (x->state_before() != nullptr && x->state_before()->force_reexecute()) {
+ info = state_for(x, x->state_before());
+ info->set_force_reexecute();
+ } else {
+ info = state_for(x, x->state());
+ }
LIRItem length(x->length(), this);
length.load_item_force(FrameMap::rbx_opr);
@@ -1327,7 +1347,7 @@ void LIRGenerator::do_NewTypeArray(NewTypeArray* x) {
__ metadata2reg(ciTypeArrayKlass::make(elem_type)->constant_encoding(), klass_reg);
CodeStub* slow_path = new NewTypeArrayStub(klass_reg, len, reg, info);
- __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type, klass_reg, slow_path);
+ __ allocate_array(reg, len, tmp1, tmp2, tmp3, tmp4, elem_type, klass_reg, slow_path, x->zero_array());
LIR_Opr result = rlock_result(x);
__ move(reg, result);
diff --git a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
index caca3a1528261..2374324ca7c13 100644
--- a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.cpp
@@ -278,7 +278,7 @@ void C1_MacroAssembler::initialize_object(Register obj, Register klass, Register
verify_oop(obj);
}
-void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1, Register t2, int base_offset_in_bytes, Address::ScaleFactor f, Register klass, Label& slow_case) {
+void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1, Register t2, int base_offset_in_bytes, Address::ScaleFactor f, Register klass, Label& slow_case, bool zero_array) {
assert(obj == rax, "obj must be in rax, for cmpxchg");
assert_different_registers(obj, len, t1, t2, klass);
@@ -300,11 +300,13 @@ void C1_MacroAssembler::allocate_array(Register obj, Register len, Register t1,
initialize_header(obj, klass, len, t1, t2);
// clear rest of allocated space
- const Register len_zero = len;
- // Align-up to word boundary, because we clear the 4 bytes potentially
- // following the length field in initialize_header().
- int base_offset = align_up(base_offset_in_bytes, BytesPerWord);
- initialize_body(obj, arr_size, base_offset, len_zero);
+ if (zero_array) {
+ const Register len_zero = len;
+ // Align-up to word boundary, because we clear the 4 bytes potentially
+ // following the length field in initialize_header().
+ int base_offset = align_up(base_offset_in_bytes, BytesPerWord);
+ initialize_body(obj, arr_size, base_offset, len_zero);
+ }
if (CURRENT_ENV->dtrace_alloc_probes()) {
assert(obj == rax, "must be");
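The new `zero_array` flag lets C1 skip `initialize_body` when the surrounding IR proves the allocation is immediately and fully overwritten (the `NewTypeArray`/`do_ArrayCopy` plumbing above); Java requires fresh arrays to read as zero, so the elision is only sound under that proof. A caller-side sketch of the contract (hypothetical flag computation; the signature is the one declared in the header diff below):

```cpp
// Sketch: pass zero_array = false only when every element is provably stored
// before the array becomes observable, e.g. an allocation immediately
// consumed by a same-length arraycopy.
bool fully_overwritten = false;   // assumption: proven by a C1 IR pass
__ allocate_array(obj, len, t1, t2, base_offset_in_bytes, f, klass,
                  slow_case, /* zero_array */ !fully_overwritten);
```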
diff --git a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.hpp b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.hpp
index a705dd70efd06..6344a7b6ef19e 100644
--- a/src/hotspot/cpu/x86/c1_MacroAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/c1_MacroAssembler_x86.hpp
@@ -89,7 +89,8 @@
// base_offset_in_bytes: offset of the first array element, in bytes
// f : element scale factor
// slow_case : exit to slow case implementation if fast allocation fails
- void allocate_array(Register obj, Register len, Register t, Register t2, int base_offset_in_bytes, Address::ScaleFactor f, Register klass, Label& slow_case);
+ // zero_array : zero the allocated array or not
+ void allocate_array(Register obj, Register len, Register t, Register t2, int base_offset_in_bytes, Address::ScaleFactor f, Register klass, Label& slow_case, bool zero_array);
int rsp_offset() const { return _rsp_offset; }
void set_rsp_offset(int n) { _rsp_offset = n; }
diff --git a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
index d0eb103d81b87..50f957aef999c 100644
--- a/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/c2_MacroAssembler_x86.cpp
@@ -5631,7 +5631,6 @@ void C2_MacroAssembler::vector_mask_compress(KRegister dst, KRegister src, Regis
kmov(dst, rtmp2);
}
-#ifdef _LP64
void C2_MacroAssembler::vector_compress_expand_avx2(int opcode, XMMRegister dst, XMMRegister src,
XMMRegister mask, Register rtmp, Register rscratch,
XMMRegister permv, XMMRegister xtmp, BasicType bt,
@@ -5665,7 +5664,6 @@ void C2_MacroAssembler::vector_compress_expand_avx2(int opcode, XMMRegister dst,
// compressing/expanding the source vector lanes.
vblendvps(dst, dst, xtmp, permv, vec_enc, false, permv);
}
-#endif
void C2_MacroAssembler::vector_compress_expand(int opcode, XMMRegister dst, XMMRegister src, KRegister mask,
bool merge, BasicType bt, int vec_enc) {
diff --git a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
index 0ea93ec067ebd..b52be627776b8 100644
--- a/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/g1/g1BarrierSetAssembler_x86.cpp
@@ -286,7 +286,7 @@ void G1BarrierSetAssembler::g1_write_barrier_post(MacroAssembler* masm,
__ movptr(tmp, store_addr);
__ xorptr(tmp, new_val);
- __ shrptr(tmp, HeapRegion::LogOfHRGrainBytes);
+ __ shrptr(tmp, G1HeapRegion::LogOfHRGrainBytes);
__ jcc(Assembler::equal, done);
// crosses regions, storing null?
diff --git a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
index a06fde061f85f..b71b5a2ab47ef 100644
--- a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.cpp
@@ -352,36 +352,6 @@ void BarrierSetAssembler::tlab_allocate(MacroAssembler* masm,
__ verify_tlab();
}
-void BarrierSetAssembler::incr_allocated_bytes(MacroAssembler* masm, Register thread,
- Register var_size_in_bytes,
- int con_size_in_bytes,
- Register t1) {
- if (!thread->is_valid()) {
-#ifdef _LP64
- thread = r15_thread;
-#else
- assert(t1->is_valid(), "need temp reg");
- thread = t1;
- __ get_thread(thread);
-#endif
- }
-
-#ifdef _LP64
- if (var_size_in_bytes->is_valid()) {
- __ addq(Address(thread, in_bytes(JavaThread::allocated_bytes_offset())), var_size_in_bytes);
- } else {
- __ addq(Address(thread, in_bytes(JavaThread::allocated_bytes_offset())), con_size_in_bytes);
- }
-#else
- if (var_size_in_bytes->is_valid()) {
- __ addl(Address(thread, in_bytes(JavaThread::allocated_bytes_offset())), var_size_in_bytes);
- } else {
- __ addl(Address(thread, in_bytes(JavaThread::allocated_bytes_offset())), con_size_in_bytes);
- }
- __ adcl(Address(thread, in_bytes(JavaThread::allocated_bytes_offset())+4), 0);
-#endif
-}
-
#ifdef _LP64
void BarrierSetAssembler::nmethod_entry_barrier(MacroAssembler* masm, Label* slow_path, Label* continuation) {
BarrierSetNMethod* bs_nm = BarrierSet::barrier_set()->barrier_set_nmethod();
@@ -613,19 +583,12 @@ void SaveLiveRegisters::initialize(BarrierStubC2* stub) {
caller_saved.Insert(OptoReg::as_OptoReg(r10->as_VMReg()));
caller_saved.Insert(OptoReg::as_OptoReg(r11->as_VMReg()));
- if (stub->result() != noreg) {
- caller_saved.Remove(OptoReg::as_OptoReg(stub->result()->as_VMReg()));
- }
-
- // Create mask of live registers
- RegMask live = stub->live();
-
int gp_spill_size = 0;
int opmask_spill_size = 0;
int xmm_spill_size = 0;
// Record registers that needs to be saved/restored
- RegMaskIterator rmi(live);
+ RegMaskIterator rmi(stub->preserve_set());
while (rmi.has_next()) {
const OptoReg::Name opto_reg = rmi.next();
const VMReg vm_reg = OptoReg::as_VMReg(opto_reg);
diff --git a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
index a74d5cc767638..81797907aedde 100644
--- a/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/gc/shared/barrierSetAssembler_x86.hpp
@@ -37,12 +37,6 @@ class Node;
class InterpreterMacroAssembler;
class BarrierSetAssembler: public CHeapObj {
-private:
- void incr_allocated_bytes(MacroAssembler* masm, Register thread,
- Register var_size_in_bytes,
- int con_size_in_bytes,
- Register t1);
-
public:
virtual void arraycopy_prologue(MacroAssembler* masm, DecoratorSet decorators, BasicType type,
Register src, Register dst, Register count) {}
diff --git a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
index 573edc26dad4c..25a2b931468cd 100644
--- a/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/shenandoah/shenandoahBarrierSetAssembler_x86.cpp
@@ -219,7 +219,6 @@ void ShenandoahBarrierSetAssembler::satb_write_barrier_pre(MacroAssembler* masm,
assert(pre_val != rax, "check this code");
}
- Address in_progress(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_active_offset()));
Address index(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_index_offset()));
Address buffer(thread, in_bytes(ShenandoahThreadLocalData::satb_mark_queue_buffer_offset()));
diff --git a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
index f5f0d6c884198..c30b139c7ef71 100644
--- a/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/gc/z/zBarrierSetAssembler_x86.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2018, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2018, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -1213,7 +1213,6 @@ class ZSetupArguments {
#define __ masm->
void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, ZLoadBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
BLOCK_COMMENT("ZLoadBarrierStubC2");
// Stub entry
@@ -1233,7 +1232,6 @@ void ZBarrierSetAssembler::generate_c2_load_barrier_stub(MacroAssembler* masm, Z
}
void ZBarrierSetAssembler::generate_c2_store_barrier_stub(MacroAssembler* masm, ZStoreBarrierStubC2* stub) const {
- Assembler::InlineSkippedInstructionsCounter skipped_counter(masm);
BLOCK_COMMENT("ZStoreBarrierStubC2");
// Stub entry
diff --git a/src/hotspot/cpu/x86/globals_x86.hpp b/src/hotspot/cpu/x86/globals_x86.hpp
index 0f01eb3a0c43e..d73a8d0317107 100644
--- a/src/hotspot/cpu/x86/globals_x86.hpp
+++ b/src/hotspot/cpu/x86/globals_x86.hpp
@@ -232,8 +232,10 @@ define_pd_global(intx, InitArrayShortSize, 8*BytesPerLong);
\
product(bool, IntelJccErratumMitigation, true, DIAGNOSTIC, \
"Turn off JVM mitigations related to Intel micro code " \
- "mitigations for the Intel JCC erratum")
-
+ "mitigations for the Intel JCC erratum") \
+ \
+ product(bool, UseAPX, false, EXPERIMENTAL, \
+ "Use Advanced Performance Extensions on x86") \
// end of ARCH_FLAGS
#endif // CPU_X86_GLOBALS_X86_HPP
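Being declared EXPERIMENTAL, `UseAPX` stays locked by default; exercising the new encodings presumably requires `java -XX:+UnlockExperimentalVMOptions -XX:+UseAPX ...` on APX-capable hardware or an emulator such as Intel SDE.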
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.cpp b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
index d20f34e27b871..851b89a0a060f 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.cpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.cpp
@@ -1150,20 +1150,20 @@ void MacroAssembler::addpd(XMMRegister dst, AddressLiteral src, Register rscratc
// Stub code is generated once and never copied.
// NMethods can't use this because they get copied and we can't force alignment > 32 bytes.
void MacroAssembler::align64() {
- align(64, (unsigned long long) pc());
+ align(64, (uint)(uintptr_t)pc());
}
void MacroAssembler::align32() {
- align(32, (unsigned long long) pc());
+ align(32, (uint)(uintptr_t)pc());
}
-void MacroAssembler::align(int modulus) {
+void MacroAssembler::align(uint modulus) {
// 8273459: Ensure alignment is possible with current segment alignment
- assert(modulus <= CodeEntryAlignment, "Alignment must be <= CodeEntryAlignment");
+ assert(modulus <= (uintx)CodeEntryAlignment, "Alignment must be <= CodeEntryAlignment");
align(modulus, offset());
}
-void MacroAssembler::align(int modulus, int target) {
+void MacroAssembler::align(uint modulus, uint target) {
if (target % modulus != 0) {
nop(modulus - (target % modulus));
}
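Narrowing `align` to `uint` is safe because only `target % modulus` matters and the moduli used here are powers of two no larger than `CodeEntryAlignment`, so truncating `pc()` to 32 bits (as `align32`/`align64` now do) cannot change the padding count. A small check of that arithmetic (sketch):

```cpp
#include <cassert>
#include <cstdint>

// For any modulus m that divides 2^32: (x mod 2^32) mod m == x mod m,
// so the nop count modulus - (target % modulus) is unchanged by truncation.
int main() {
  uint64_t pc = 0x00007f61234567c9ULL;
  uint32_t t  = (uint32_t)pc;      // what align64 now passes as target
  assert(pc % 64 == t % 64);       // both 9
  return 0;
}
```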
@@ -6983,7 +6983,7 @@ void MacroAssembler::multiply_128_x_128_bmi2_loop(Register y, Register z,
* rsi: y
* rcx: ylen
* r8: z
- * r11: zlen
+ * r11: tmp0
* r12: tmp1
* r13: tmp2
* r14: tmp3
@@ -6991,11 +6991,12 @@ void MacroAssembler::multiply_128_x_128_bmi2_loop(Register y, Register z,
* rbx: tmp5
*
*/
-void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Register ylen, Register z, Register zlen,
+void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Register ylen, Register z, Register tmp0,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5) {
ShortBranchVerifier sbv(this);
- assert_different_registers(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5, rdx);
+ assert_different_registers(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, rdx);
+ push(tmp0);
push(tmp1);
push(tmp2);
push(tmp3);
@@ -7003,7 +7004,6 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
push(tmp5);
push(xlen);
- push(zlen);
const Register idx = tmp1;
const Register kdx = tmp2;
@@ -7012,7 +7012,7 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
const Register y_idx = tmp4;
const Register carry = tmp5;
const Register product = xlen;
- const Register x_xstart = zlen; // reuse register
+ const Register x_xstart = tmp0;
// First Loop.
//
@@ -7028,9 +7028,9 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
// z[xstart] = (int)carry;
//
- movl(idx, ylen); // idx = ylen;
- movl(kdx, zlen); // kdx = xlen+ylen;
- xorq(carry, carry); // carry = 0;
+ movl(idx, ylen); // idx = ylen;
+ lea(kdx, Address(xlen, ylen)); // kdx = xlen+ylen;
+ xorq(carry, carry); // carry = 0;
Label L_done;
@@ -7134,7 +7134,6 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
bind(L_done);
- pop(zlen);
pop(xlen);
pop(tmp5);
@@ -7142,6 +7141,7 @@ void MacroAssembler::multiply_to_len(Register x, Register xlen, Register y, Regi
pop(tmp3);
pop(tmp2);
pop(tmp1);
+ pop(tmp0);
}
void MacroAssembler::vectorized_mismatch(Register obja, Register objb, Register length, Register log2_array_indxscale,
diff --git a/src/hotspot/cpu/x86/macroAssembler_x86.hpp b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
index e045572a5cdbb..fd989c7bd229d 100644
--- a/src/hotspot/cpu/x86/macroAssembler_x86.hpp
+++ b/src/hotspot/cpu/x86/macroAssembler_x86.hpp
@@ -213,8 +213,8 @@ class MacroAssembler: public Assembler {
// Alignment
void align32();
void align64();
- void align(int modulus);
- void align(int modulus, int target);
+ void align(uint modulus);
+ void align(uint modulus, uint target);
void post_call_nop();
// A 5 byte nop that is safe for patching (see patch_verified_entry)
@@ -1549,6 +1549,8 @@ class MacroAssembler: public Assembler {
Assembler::evpsrlvd(dst, mask, nds, src, merge, vector_len);
}
}
+
+ using Assembler::evpsrlq;
void evpsrlq(XMMRegister dst, KRegister mask, XMMRegister nds, XMMRegister src, bool merge, int vector_len, bool is_varshift) {
if (!is_varshift) {
Assembler::evpsrlq(dst, mask, nds, src, merge, vector_len);
@@ -1570,6 +1572,7 @@ class MacroAssembler: public Assembler {
Assembler::evpsravd(dst, mask, nds, src, merge, vector_len);
}
}
+ using Assembler::evpsraq;
void evpsraq(XMMRegister dst, KRegister mask, XMMRegister nds, XMMRegister src, bool merge, int vector_len, bool is_varshift) {
if (!is_varshift) {
Assembler::evpsraq(dst, mask, nds, src, merge, vector_len);
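The `using Assembler::evpsrlq;` / `using Assembler::evpsraq;` lines work around C++ name hiding: declaring the seven-argument overload in `MacroAssembler` would otherwise hide every same-named base-class overload. A minimal illustration:

```cpp
// Minimal illustration of the name-hiding rule being worked around:
struct Base {
  void f(int)      {}
  void f(int, int) {}
};
struct Derived : Base {
  using Base::f;           // re-exposes both Base::f overloads
  void f(int, int, int) {} // without the using, this would hide them all
};
```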
@@ -2015,7 +2018,7 @@ class MacroAssembler: public Assembler {
Register yz_idx, Register idx, Register jdx,
Register carry, Register product,
Register carry2);
- void multiply_to_len(Register x, Register xlen, Register y, Register ylen, Register z, Register zlen,
+ void multiply_to_len(Register x, Register xlen, Register y, Register ylen, Register z, Register tmp0,
Register tmp1, Register tmp2, Register tmp3, Register tmp4, Register tmp5);
void square_rshift(Register x, Register len, Register z, Register tmp1, Register tmp3,
Register tmp4, Register tmp5, Register rdxReg, Register raxReg);
diff --git a/src/hotspot/cpu/x86/relocInfo_x86.cpp b/src/hotspot/cpu/x86/relocInfo_x86.cpp
index db806419b51d4..d7fddf838ac8e 100644
--- a/src/hotspot/cpu/x86/relocInfo_x86.cpp
+++ b/src/hotspot/cpu/x86/relocInfo_x86.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1998, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1998, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -36,9 +36,8 @@
#include "utilities/checkedCast.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
+void Relocation::pd_set_data_value(address x, bool verify_only) {
#ifdef AMD64
- x += o;
typedef Assembler::WhichOperand WhichOperand;
WhichOperand which = (WhichOperand) format(); // that is, disp32 or imm, call32, narrow oop
assert(which == Assembler::disp32_operand ||
@@ -80,9 +79,9 @@ void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
}
#else
if (verify_only) {
- guarantee(*pd_address_in_code() == (x + o), "instructions must match");
+ guarantee(*pd_address_in_code() == x, "instructions must match");
} else {
- *pd_address_in_code() = x + o;
+ *pd_address_in_code() = x;
}
#endif // AMD64
}
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
index 63226a560f4df..577c56cb7a268 100644
--- a/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64.cpp
@@ -3044,10 +3044,8 @@ address StubGenerator::generate_updateBytesCRC32C(bool is_pclmulqdq_supported) {
* c_rarg3 - y length
* not Win64
* c_rarg4 - z address
- * c_rarg5 - z length
* Win64
* rsp+40 - z address
- * rsp+48 - z length
*/
address StubGenerator::generate_multiplyToLen() {
__ align(CodeEntryAlignment);
@@ -3061,9 +3059,9 @@ address StubGenerator::generate_multiplyToLen() {
const Register y = rsi;
const Register ylen = rcx;
const Register z = r8;
- const Register zlen = r11;
// Next registers will be saved on stack in multiply_to_len().
+ const Register tmp0 = r11;
const Register tmp1 = r12;
const Register tmp2 = r13;
const Register tmp3 = r14;
@@ -3073,21 +3071,17 @@ address StubGenerator::generate_multiplyToLen() {
BLOCK_COMMENT("Entry:");
__ enter(); // required for proper stackwalking of RuntimeStub frame
-#ifndef _WIN64
- __ movptr(zlen, r9); // Save r9 in r11 - zlen
-#endif
setup_arg_regs(4); // x => rdi, xlen => rsi, y => rdx
- // ylen => rcx, z => r8, zlen => r11
+ // ylen => rcx, z => r8
// r9 and r10 may be used to save non-volatile registers
#ifdef _WIN64
- // last 2 arguments (#4, #5) are on stack on Win64
+ // last argument (#4) is on stack on Win64
__ movptr(z, Address(rsp, 6 * wordSize));
- __ movptr(zlen, Address(rsp, 7 * wordSize));
#endif
__ movptr(xlen, rsi);
__ movptr(y, rdx);
- __ multiply_to_len(x, xlen, y, ylen, z, zlen, tmp1, tmp2, tmp3, tmp4, tmp5);
+ __ multiply_to_len(x, xlen, y, ylen, z, tmp0, tmp1, tmp2, tmp3, tmp4, tmp5);
restore_arg_regs();
@@ -4255,6 +4249,11 @@ void StubGenerator::generate_compiler_stubs() {
StubRoutines::_poly1305_processBlocks = generate_poly1305_processBlocks();
}
+ if (UseIntPolyIntrinsics) {
+ StubRoutines::_intpoly_montgomeryMult_P256 = generate_intpoly_montgomeryMult_P256();
+ StubRoutines::_intpoly_assign = generate_intpoly_assign();
+ }
+
if (UseMD5Intrinsics) {
StubRoutines::_md5_implCompress = generate_md5_implCompress(false, "md5_implCompress");
StubRoutines::_md5_implCompressMB = generate_md5_implCompress(true, "md5_implCompressMB");
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64.hpp b/src/hotspot/cpu/x86/stubGenerator_x86_64.hpp
index 725932b9e0310..02435bd172c47 100644
--- a/src/hotspot/cpu/x86/stubGenerator_x86_64.hpp
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64.hpp
@@ -483,6 +483,9 @@ class StubGenerator: public StubCodeGenerator {
const XMMRegister P2L, const XMMRegister P2H,
const XMMRegister YTMP1, const Register rscratch);
+ address generate_intpoly_montgomeryMult_P256();
+ address generate_intpoly_assign();
+
// BASE64 stubs
address base64_shuffle_addr();
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64_poly.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_64_poly1305.cpp
similarity index 100%
rename from src/hotspot/cpu/x86/stubGenerator_x86_64_poly.cpp
rename to src/hotspot/cpu/x86/stubGenerator_x86_64_poly1305.cpp
diff --git a/src/hotspot/cpu/x86/stubGenerator_x86_64_poly_mont.cpp b/src/hotspot/cpu/x86/stubGenerator_x86_64_poly_mont.cpp
new file mode 100644
index 0000000000000..25ee68072492c
--- /dev/null
+++ b/src/hotspot/cpu/x86/stubGenerator_x86_64_poly_mont.cpp
@@ -0,0 +1,376 @@
+/*
+ * Copyright (c) 2024, Intel Corporation. All rights reserved.
+ *
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#include "precompiled.hpp"
+#include "macroAssembler_x86.hpp"
+#include "stubGenerator_x86_64.hpp"
+
+#define __ _masm->
+
+ATTRIBUTE_ALIGNED(64) uint64_t MODULUS_P256[] = {
+ 0x000fffffffffffffULL, 0x00000fffffffffffULL,
+ 0x0000000000000000ULL, 0x0000001000000000ULL,
+ 0x0000ffffffff0000ULL, 0x0000000000000000ULL,
+ 0x0000000000000000ULL, 0x0000000000000000ULL
+};
+static address modulus_p256() {
+ return (address)MODULUS_P256;
+}
+
+ATTRIBUTE_ALIGNED(64) uint64_t P256_MASK52[] = {
+ 0x000fffffffffffffULL, 0x000fffffffffffffULL,
+ 0x000fffffffffffffULL, 0x000fffffffffffffULL,
+ 0xffffffffffffffffULL, 0xffffffffffffffffULL,
+ 0xffffffffffffffffULL, 0xffffffffffffffffULL,
+};
+static address p256_mask52() {
+ return (address)P256_MASK52;
+}
+
+ATTRIBUTE_ALIGNED(64) uint64_t SHIFT1R[] = {
+ 0x0000000000000001ULL, 0x0000000000000002ULL,
+ 0x0000000000000003ULL, 0x0000000000000004ULL,
+ 0x0000000000000005ULL, 0x0000000000000006ULL,
+ 0x0000000000000007ULL, 0x0000000000000000ULL,
+};
+static address shift_1R() {
+ return (address)SHIFT1R;
+}
+
+ATTRIBUTE_ALIGNED(64) uint64_t SHIFT1L[] = {
+ 0x0000000000000007ULL, 0x0000000000000000ULL,
+ 0x0000000000000001ULL, 0x0000000000000002ULL,
+ 0x0000000000000003ULL, 0x0000000000000004ULL,
+ 0x0000000000000005ULL, 0x0000000000000006ULL,
+};
+static address shift_1L() {
+ return (address)SHIFT1L;
+}
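SHIFT1R and SHIFT1L are index vectors for vpermq: each output lane receives the input lane named by its index entry. A scalar model of the lane semantics, assuming the usual vpermq behavior:

```cpp
#include <cstdint>

// Scalar model of a 512-bit vpermq with a variable index vector: output
// lane i takes input lane idx[i] (only the low 3 bits matter for 8 lanes).
// With SHIFT1R = {1,2,3,4,5,6,7,0} every limb moves one lane toward lane 0;
// with SHIFT1L = {7,0,1,2,3,4,5,6} every limb moves one lane up.
void permq512(uint64_t dst[8], const uint64_t idx[8], const uint64_t src[8]) {
  for (int i = 0; i < 8; i++) {
    dst[i] = src[idx[i] & 7];
  }
}
```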
+
+/**
+ * Unrolled Word-by-Word Montgomery Multiplication
+ * r = a * b * 2^-260 (mod P)
+ *
+ * Reference [1]: Shay Gueron and Vlad Krasnov
+ * "Fast Prime Field Elliptic Curve Cryptography with 256 Bit Primes"
+ * See Figure 5. "Algorithm 2: Word-by-Word Montgomery Multiplication for a Montgomery
+ * Friendly modulus p". Note: Step 6. Skipped; Instead use numAdds to reuse existing overflow
+ * logic.
+ *
+ * Pseudocode:
+ *
+ * +--+--+--+--+--+--+--+--+
+ * M = load(*modulus_p256) | 0| 0| 0|m5|m4|m3|m2|m1|
+ * +--+--+--+--+--+--+--+--+
+ * A = load(*aLimbs) | 0| 0| 0|a5|a4|a3|a2|a1|
+ * +--+--+--+--+--+--+--+--+
+ * Acc1 = 0 | 0| 0| 0| 0| 0| 0| 0| 0|
+ * +--+--+--+--+--+--+--+--+
+ * ---- for i = 0 to 4
+ * +--+--+--+--+--+--+--+--+
+ * Acc2 = 0 | 0| 0| 0| 0| 0| 0| 0| 0|
+ * +--+--+--+--+--+--+--+--+
+ * B = replicate(bLimbs[i]) |bi|bi|bi|bi|bi|bi|bi|bi|
+ * +--+--+--+--+--+--+--+--+
+ * +--+--+--+--+--+--+--+--+
+ * Acc1+=| 0| 0| 0|c5|c4|c3|c2|c1|
+ * *| 0| 0| 0|a5|a4|a3|a2|a1|
+ * Acc1 += A * B |bi|bi|bi|bi|bi|bi|bi|bi|
+ * +--+--+--+--+--+--+--+--+
+ * Acc2+=| 0| 0| 0| 0| 0| 0| 0| 0|
+ * *h| 0| 0| 0|a5|a4|a3|a2|a1|
+ * Acc2 += A *h B |bi|bi|bi|bi|bi|bi|bi|bi|
+ * +--+--+--+--+--+--+--+--+
+ * N = replicate(Acc1[0]) |n0|n0|n0|n0|n0|n0|n0|n0|
+ * +--+--+--+--+--+--+--+--+
+ * +--+--+--+--+--+--+--+--+
+ * Acc1+=| 0| 0| 0|c5|c4|c3|c2|c1|
+ * *| 0| 0| 0|m5|m4|m3|m2|m1|
+ * Acc1 += M * N |n0|n0|n0|n0|n0|n0|n0|n0| Note: 52 low bits of Acc1[0] == 0 due to Montgomery!
+ * +--+--+--+--+--+--+--+--+
+ * Acc2+=| 0| 0| 0|d5|d4|d3|d2|d1|
+ * *h| 0| 0| 0|m5|m4|m3|m2|m1|
+ * Acc2 += M *h N |n0|n0|n0|n0|n0|n0|n0|n0|
+ * +--+--+--+--+--+--+--+--+
+ * if (i == 4) break;
+ * // Combine high/low partial sums Acc1 + Acc2
+ * +--+--+--+--+--+--+--+--+
+ * carry = Acc1[0] >> 52 | 0| 0| 0| 0| 0| 0| 0|c1|
+ * +--+--+--+--+--+--+--+--+
+ * Acc2[0] += carry
+ * +--+--+--+--+--+--+--+--+
+ * Acc1 = Acc1 shift one q element>> | 0| 0| 0| 0|c5|c4|c3|c2|
+ * +--+--+--+--+--+--+--+--+
+ * Acc1 = Acc1 + Acc2
+ * ---- done
+ * // Last Carry round: Combine high/low partial sums Acc1 + Acc1 + Acc2
+ * carry = Acc1 >> 52
+ * Acc1 = Acc1 shift one q element >>
+ * Acc1 = mask52(Acc1)
+ * Acc2 += carry
+ * Acc1 = Acc1 + Acc2
+ * output to rLimbs
+ */
+void montgomeryMultiply(const Register aLimbs, const Register bLimbs, const Register rLimbs, const Register tmp, MacroAssembler* _masm) {
+ Register t0 = tmp;
+ Register rscratch = tmp;
+
+ // Inputs
+ XMMRegister A = xmm0;
+ XMMRegister B = xmm1;
+ XMMRegister T = xmm2;
+
+ // Intermediates
+ XMMRegister Acc1 = xmm10;
+ XMMRegister Acc2 = xmm11;
+ XMMRegister N = xmm12;
+ XMMRegister carry = xmm13;
+
+  // Constants
+ XMMRegister modulus = xmm20;
+ XMMRegister shift1L = xmm21;
+ XMMRegister shift1R = xmm22;
+ XMMRegister mask52 = xmm23;
+ KRegister limb0 = k1;
+ KRegister allLimbs = k2;
+
+ __ mov64(t0, 0x1);
+ __ kmovql(limb0, t0);
+ __ mov64(t0, 0x1f);
+ __ kmovql(allLimbs, t0);
+ __ evmovdquq(shift1L, allLimbs, ExternalAddress(shift_1L()), false, Assembler::AVX_512bit, rscratch);
+ __ evmovdquq(shift1R, allLimbs, ExternalAddress(shift_1R()), false, Assembler::AVX_512bit, rscratch);
+ __ evmovdquq(mask52, allLimbs, ExternalAddress(p256_mask52()), false, Assembler::AVX_512bit, rscratch);
+
+ // M = load(*modulus_p256)
+ __ evmovdquq(modulus, allLimbs, ExternalAddress(modulus_p256()), false, Assembler::AVX_512bit, rscratch);
+
+  // A = load(*aLimbs); masked evmovdquq() can be slow. Instead load the full 256 bits and combine with a 64-bit load
+ __ evmovdquq(A, Address(aLimbs, 8), Assembler::AVX_256bit);
+ __ evpermq(A, allLimbs, shift1L, A, false, Assembler::AVX_512bit);
+ __ movq(T, Address(aLimbs, 0));
+ __ evporq(A, A, T, Assembler::AVX_512bit);
+
+ // Acc1 = 0
+ __ vpxorq(Acc1, Acc1, Acc1, Assembler::AVX_512bit);
+ for (int i = 0; i< 5; i++) {
+ // Acc2 = 0
+ __ vpxorq(Acc2, Acc2, Acc2, Assembler::AVX_512bit);
+
+ // B = replicate(bLimbs[i])
+ __ vpbroadcastq(B, Address(bLimbs, i*8), Assembler::AVX_512bit);
+
+ // Acc1 += A * B
+ __ evpmadd52luq(Acc1, A, B, Assembler::AVX_512bit);
+
+ // Acc2 += A *h B
+ __ evpmadd52huq(Acc2, A, B, Assembler::AVX_512bit);
+
+ // N = replicate(Acc1[0])
+ __ vpbroadcastq(N, Acc1, Assembler::AVX_512bit);
+
+ // Acc1 += M * N
+ __ evpmadd52luq(Acc1, modulus, N, Assembler::AVX_512bit);
+
+ // Acc2 += M *h N
+ __ evpmadd52huq(Acc2, modulus, N, Assembler::AVX_512bit);
+
+ if (i == 4) break;
+
+ // Combine high/low partial sums Acc1 + Acc2
+
+ // carry = Acc1[0] >> 52
+ __ evpsrlq(carry, limb0, Acc1, 52, true, Assembler::AVX_512bit);
+
+ // Acc2[0] += carry
+ __ evpaddq(Acc2, limb0, carry, Acc2, true, Assembler::AVX_512bit);
+
+ // Acc1 = Acc1 shift one q element >>
+ __ evpermq(Acc1, allLimbs, shift1R, Acc1, false, Assembler::AVX_512bit);
+
+ // Acc1 = Acc1 + Acc2
+ __ vpaddq(Acc1, Acc1, Acc2, Assembler::AVX_512bit);
+ }
+
+ // Last Carry round: Combine high/low partial sums Acc1 + Acc1 + Acc2
+ // carry = Acc1 >> 52
+ __ evpsrlq(carry, allLimbs, Acc1, 52, true, Assembler::AVX_512bit);
+
+ // Acc1 = Acc1 shift one q element >>
+ __ evpermq(Acc1, allLimbs, shift1R, Acc1, false, Assembler::AVX_512bit);
+
+ // Acc1 = mask52(Acc1)
+ __ evpandq(Acc1, Acc1, mask52, Assembler::AVX_512bit); // Clear top 12 bits
+
+ // Acc2 += carry
+ __ evpaddq(Acc2, allLimbs, carry, Acc2, true, Assembler::AVX_512bit);
+
+ // Acc1 = Acc1 + Acc2
+ __ vpaddq(Acc1, Acc1, Acc2, Assembler::AVX_512bit);
+
+ // output to rLimbs (1 + 4 limbs)
+ __ movq(Address(rLimbs, 0), Acc1);
+ __ evpermq(Acc1, k0, shift1R, Acc1, true, Assembler::AVX_512bit);
+ __ evmovdquq(Address(rLimbs, 8), k0, Acc1, true, Assembler::AVX_256bit);
+}
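The stub works in radix 2^52: a 256-bit field element is held as five 52-bit limbs (5 × 52 = 260, hence the 2^-260 Montgomery factor above). A hypothetical conversion helper, not part of the patch, to make the representation concrete:

```cpp
#include <cstdint>

// Split a 256-bit value, given as four little-endian 64-bit words, into
// the five 52-bit limbs used above. MASK52 matches P256_MASK52.
void to_limbs52(const uint64_t in[4], uint64_t limbs[5]) {
  const uint64_t MASK52 = 0x000fffffffffffffULL;
  limbs[0] = in[0] & MASK52;
  limbs[1] = ((in[0] >> 52) | (in[1] << 12)) & MASK52;
  limbs[2] = ((in[1] >> 40) | (in[2] << 24)) & MASK52;
  limbs[3] = ((in[2] >> 28) | (in[3] << 36)) & MASK52;
  limbs[4] = in[3] >> 16; // top 48 bits
}
```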
+
+address StubGenerator::generate_intpoly_montgomeryMult_P256() {
+ __ align(CodeEntryAlignment);
+ StubCodeMark mark(this, "StubRoutines", "intpoly_montgomeryMult_P256");
+ address start = __ pc();
+ __ enter();
+
+ // Register Map
+ const Register aLimbs = c_rarg0; // rdi | rcx
+ const Register bLimbs = c_rarg1; // rsi | rdx
+ const Register rLimbs = c_rarg2; // rdx | r8
+ const Register tmp = r9;
+
+ montgomeryMultiply(aLimbs, bLimbs, rLimbs, tmp, _masm);
+ __ mov64(rax, 0x1); // Return 1 (Fig. 5, Step 6 [1] skipped in montgomeryMultiply)
+
+ __ leave();
+ __ ret(0);
+ return start;
+}
+
+// A = B if select
+// Must be:
+// - constant time (i.e. no branches)
+// - no side channels (i.e. all memory must always be accessed, and in the same order)
+void assign_avx(XMMRegister A, Address aAddr, XMMRegister B, Address bAddr, KRegister select, int vector_len, MacroAssembler* _masm) {
+ __ evmovdquq(A, aAddr, vector_len);
+ __ evmovdquq(B, bAddr, vector_len);
+ __ evmovdquq(A, select, B, true, vector_len);
+ __ evmovdquq(aAddr, A, vector_len);
+}
+
+void assign_scalar(Address aAddr, Address bAddr, Register select, Register tmp, MacroAssembler* _masm) {
+ // Original java:
+ // long dummyLimbs = maskValue & (a[i] ^ b[i]);
+ // a[i] = dummyLimbs ^ a[i];
+
+ __ movq(tmp, aAddr);
+ __ xorq(tmp, bAddr);
+ __ andq(tmp, select);
+ __ xorq(aAddr, tmp);
+}
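assign_scalar encodes the classic branch-free select from the Java code quoted above: negating a 0/1 flag yields an all-zeros or all-ones mask (the same negation that generate_intpoly_assign performs with negq before kmovql). In isolation:

```cpp
#include <cassert>
#include <cstdint>

// Branch-free conditional assignment: mask = -(uint64_t)select is all ones
// when select == 1 and all zeros when select == 0, so
// a ^ (mask & (a ^ b)) is b when selected and a otherwise.
uint64_t ct_select(uint64_t a, uint64_t b, uint64_t select /* 0 or 1 */) {
  uint64_t mask = 0 - select;
  return a ^ (mask & (a ^ b));
}

int main() {
  assert(ct_select(5, 9, 0) == 5);
  assert(ct_select(5, 9, 1) == 9);
  return 0;
}
```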
+
+address StubGenerator::generate_intpoly_assign() {
+ // KNOWN Lengths:
+ // MontgomeryIntPolynP256: 5 = 4 + 1
+ // IntegerPolynomial1305: 5 = 4 + 1
+ // IntegerPolynomial25519: 10 = 8 + 2
+ // IntegerPolynomialP256: 10 = 8 + 2
+ // Curve25519OrderField: 10 = 8 + 2
+ // P256OrderField: 10 = 8 + 2
+ // IntegerPolynomialP384: 14 = 8 + 4 + 2
+ // P384OrderField: 14 = 8 + 4 + 2
+ // IntegerPolynomial448: 16 = 8 + 8
+ // Curve448OrderField: 16 = 8 + 8
+ // IntegerPolynomialP521: 19 = 8 + 8 + 2 + 1
+ // P521OrderField: 19 = 8 + 8 + 2 + 1
+ // Special Cases 5, 10, 14, 16, 19
+
+ __ align(CodeEntryAlignment);
+ StubCodeMark mark(this, "StubRoutines", "intpoly_assign");
+ address start = __ pc();
+ __ enter();
+
+ // Inputs
+ const Register set = c_rarg0;
+ const Register aLimbs = c_rarg1;
+ const Register bLimbs = c_rarg2;
+ const Register length = c_rarg3;
+ XMMRegister A = xmm0;
+ XMMRegister B = xmm1;
+
+ Register tmp = r9;
+ KRegister select = k1;
+ Label L_Length5, L_Length10, L_Length14, L_Length16, L_Length19, L_DefaultLoop, L_Done;
+
+ __ negq(set);
+ __ kmovql(select, set);
+
+  // NOTE! Crypto code cannot branch on user input. However, branching on the number of limbs is
+  // allowed; the number of limbs is a constant in each IntegerPolynomial (i.e. this side-channel
+  // branch leaks the number of limbs, which is not a secret).
+ __ cmpl(length, 5);
+ __ jcc(Assembler::equal, L_Length5);
+ __ cmpl(length, 10);
+ __ jcc(Assembler::equal, L_Length10);
+ __ cmpl(length, 14);
+ __ jcc(Assembler::equal, L_Length14);
+ __ cmpl(length, 16);
+ __ jcc(Assembler::equal, L_Length16);
+ __ cmpl(length, 19);
+ __ jcc(Assembler::equal, L_Length19);
+
+ // Default copy loop (UNLIKELY)
+ __ cmpl(length, 0);
+ __ jcc(Assembler::lessEqual, L_Done);
+ __ bind(L_DefaultLoop);
+ assign_scalar(Address(aLimbs, 0), Address(bLimbs, 0), set, tmp, _masm);
+ __ subl(length, 1);
+ __ lea(aLimbs, Address(aLimbs,8));
+ __ lea(bLimbs, Address(bLimbs,8));
+ __ cmpl(length, 0);
+ __ jcc(Assembler::greater, L_DefaultLoop);
+ __ jmp(L_Done);
+
+ __ bind(L_Length5); // 1 + 4
+ assign_scalar(Address(aLimbs, 0), Address(bLimbs, 0), set, tmp, _masm);
+ assign_avx(A, Address(aLimbs, 8), B, Address(bLimbs, 8), select, Assembler::AVX_256bit, _masm);
+ __ jmp(L_Done);
+
+ __ bind(L_Length10); // 2 + 8
+ assign_avx(A, Address(aLimbs, 0), B, Address(bLimbs, 0), select, Assembler::AVX_128bit, _masm);
+ assign_avx(A, Address(aLimbs, 16), B, Address(bLimbs, 16), select, Assembler::AVX_512bit, _masm);
+ __ jmp(L_Done);
+
+ __ bind(L_Length14); // 2 + 4 + 8
+ assign_avx(A, Address(aLimbs, 0), B, Address(bLimbs, 0), select, Assembler::AVX_128bit, _masm);
+ assign_avx(A, Address(aLimbs, 16), B, Address(bLimbs, 16), select, Assembler::AVX_256bit, _masm);
+ assign_avx(A, Address(aLimbs, 48), B, Address(bLimbs, 48), select, Assembler::AVX_512bit, _masm);
+ __ jmp(L_Done);
+
+ __ bind(L_Length16); // 8 + 8
+ assign_avx(A, Address(aLimbs, 0), B, Address(bLimbs, 0), select, Assembler::AVX_512bit, _masm);
+ assign_avx(A, Address(aLimbs, 64), B, Address(bLimbs, 64), select, Assembler::AVX_512bit, _masm);
+ __ jmp(L_Done);
+
+ __ bind(L_Length19); // 1 + 2 + 8 + 8
+ assign_scalar(Address(aLimbs, 0), Address(bLimbs, 0), set, tmp, _masm);
+ assign_avx(A, Address(aLimbs, 8), B, Address(bLimbs, 8), select, Assembler::AVX_128bit, _masm);
+ assign_avx(A, Address(aLimbs, 24), B, Address(bLimbs, 24), select, Assembler::AVX_512bit, _masm);
+ assign_avx(A, Address(aLimbs, 88), B, Address(bLimbs, 88), select, Assembler::AVX_512bit, _masm);
+
+ __ bind(L_Done);
+ __ leave();
+ __ ret(0);
+ return start;
+}
diff --git a/src/hotspot/cpu/x86/vm_version_x86.cpp b/src/hotspot/cpu/x86/vm_version_x86.cpp
index 0bffb1aee13f8..f5389d0ef9016 100644
--- a/src/hotspot/cpu/x86/vm_version_x86.cpp
+++ b/src/hotspot/cpu/x86/vm_version_x86.cpp
@@ -1002,6 +1002,14 @@ void VM_Version::get_processor_features() {
}
}
+ // APX support not enabled yet
+ if (UseAPX) {
+ if (!FLAG_IS_DEFAULT(UseAPX)) {
+ warning("APX is not supported on this CPU.");
+ }
+ FLAG_SET_DEFAULT(UseAPX, false);
+ }
+
if (FLAG_IS_DEFAULT(IntelJccErratumMitigation)) {
_has_intel_jcc_erratum = compute_has_intel_jcc_erratum();
} else {
@@ -1366,6 +1374,18 @@ void VM_Version::get_processor_features() {
FLAG_SET_DEFAULT(UsePoly1305Intrinsics, false);
}
+#ifdef _LP64
+ if (supports_avx512ifma() && supports_avx512vlbw()) {
+ if (FLAG_IS_DEFAULT(UseIntPolyIntrinsics)) {
+ FLAG_SET_DEFAULT(UseIntPolyIntrinsics, true);
+ }
+ } else
+#endif
+ if (UseIntPolyIntrinsics) {
+ warning("Intrinsics for Polynomial crypto functions not available on this CPU.");
+ FLAG_SET_DEFAULT(UseIntPolyIntrinsics, false);
+ }
+
#ifdef _LP64
if (FLAG_IS_DEFAULT(UseMultiplyToLenIntrinsic)) {
UseMultiplyToLenIntrinsic = true;
diff --git a/src/hotspot/cpu/x86/x86.ad b/src/hotspot/cpu/x86/x86.ad
index 8c9bbfc35922a..d90ba901ac0a8 100644
--- a/src/hotspot/cpu/x86/x86.ad
+++ b/src/hotspot/cpu/x86/x86.ad
@@ -1749,6 +1749,7 @@ bool Matcher::match_rule_supported_vector(int opcode, int vlen, BasicType bt) {
return false;
}
break;
+ case Op_ClearArray:
case Op_VectorMaskGen:
case Op_VectorCmpMasked:
if (!is_LP64 || !VM_Version::supports_avx512bw()) {
@@ -7235,11 +7236,10 @@ instruct vxor_mem(vec dst, vec src, memory mem) %{
// --------------------------------- VectorCast --------------------------------------
instruct vcastBtoX(vec dst, vec src) %{
+ predicate(VM_Version::supports_avx512vl() || Matcher::vector_element_basic_type(n) != T_DOUBLE);
match(Set dst (VectorCastB2X src));
format %{ "vector_cast_b2x $dst,$src\t!" %}
ins_encode %{
- assert(UseAVX > 0, "required");
-
BasicType to_elem_bt = Matcher::vector_element_basic_type(this);
int vlen_enc = vector_length_encoding(this);
__ vconvert_b2x(to_elem_bt, $dst$$XMMRegister, $src$$XMMRegister, vlen_enc);
@@ -7247,6 +7247,17 @@ instruct vcastBtoX(vec dst, vec src) %{
ins_pipe( pipe_slow );
%}
+instruct vcastBtoD(legVec dst, legVec src) %{
+ predicate(!VM_Version::supports_avx512vl() && Matcher::vector_element_basic_type(n) == T_DOUBLE);
+ match(Set dst (VectorCastB2X src));
+ format %{ "vector_cast_b2x $dst,$src\t!" %}
+ ins_encode %{
+ int vlen_enc = vector_length_encoding(this);
+ __ vconvert_b2x(T_DOUBLE, $dst$$XMMRegister, $src$$XMMRegister, vlen_enc);
+ %}
+ ins_pipe( pipe_slow );
+%}
+
instruct castStoX(vec dst, vec src) %{
predicate((UseAVX <= 2 || !VM_Version::supports_avx512vlbw()) &&
Matcher::vector_length(n->in(1)) <= 8 && // src
diff --git a/src/hotspot/cpu/x86/x86_32.ad b/src/hotspot/cpu/x86/x86_32.ad
index 39c09efc49ce3..240e9b10323bd 100644
--- a/src/hotspot/cpu/x86/x86_32.ad
+++ b/src/hotspot/cpu/x86/x86_32.ad
@@ -13558,6 +13558,8 @@ instruct Ret() %{
// Also known as an 'interprocedural jump'.
// Target of jump will eventually return to caller.
// TailJump below removes the return address.
+// Don't use ebp for 'jump_target' because a MachEpilogNode has already been
+// emitted just above the TailCall which has reset ebp to the caller state.
instruct TailCalljmpInd(eRegP_no_EBP jump_target, eBXRegP method_ptr) %{
match(TailCall jump_target method_ptr);
ins_cost(300);
diff --git a/src/hotspot/cpu/x86/x86_64.ad b/src/hotspot/cpu/x86/x86_64.ad
index 288212d6d13de..f3ad721dfcb9d 100644
--- a/src/hotspot/cpu/x86/x86_64.ad
+++ b/src/hotspot/cpu/x86/x86_64.ad
@@ -12094,7 +12094,7 @@ instruct partialSubtypeCheckConstSuper(rsi_RegP sub, rax_RegP super_reg, immP su
effect(KILL cr, TEMP temp1, TEMP temp2, TEMP temp3, TEMP temp4);
ins_cost(700); // smaller than the next version
- format %{ "partialSubtypeCheck $result, $sub, super" %}
+ format %{ "partialSubtypeCheck $result, $sub, $super_reg, $super_con" %}
ins_encode %{
u1 super_klass_slot = ((Klass*)$super_con$$constant)->hash_slot();
@@ -12469,6 +12469,8 @@ instruct Ret()
// Also known as an 'interprocedural jump'.
// Target of jump will eventually return to caller.
// TailJump below removes the return address.
+// Don't use rbp for 'jump_target' because a MachEpilogNode has already been
+// emitted just above the TailCall which has reset rbp to the caller state.
instruct TailCalljmpInd(no_rbp_RegP jump_target, rbx_RegP method_ptr)
%{
match(TailCall jump_target method_ptr);
diff --git a/src/hotspot/cpu/zero/relocInfo_zero.cpp b/src/hotspot/cpu/zero/relocInfo_zero.cpp
index 1e9fdc1081d48..b926f20cfe7d8 100644
--- a/src/hotspot/cpu/zero/relocInfo_zero.cpp
+++ b/src/hotspot/cpu/zero/relocInfo_zero.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2003, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2003, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright 2007, 2009, 2010, 2011 Red Hat, Inc.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -30,7 +30,7 @@
#include "oops/oop.inline.hpp"
#include "runtime/safepoint.hpp"
-void Relocation::pd_set_data_value(address x, intptr_t o, bool verify_only) {
+void Relocation::pd_set_data_value(address x, bool verify_only) {
ShouldNotCallThis();
}
diff --git a/src/hotspot/os/aix/os_aix.cpp b/src/hotspot/os/aix/os_aix.cpp
index 862d8e8b0f4e2..f0a984d3d1fe0 100644
--- a/src/hotspot/os/aix/os_aix.cpp
+++ b/src/hotspot/os/aix/os_aix.cpp
@@ -1805,7 +1805,7 @@ bool os::remove_stack_guard_pages(char* addr, size_t size) {
void os::pd_realign_memory(char *addr, size_t bytes, size_t alignment_hint) {
}
-void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint, MEMFLAGS flag) {
+void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint) {
}
size_t os::pd_pretouch_memory(void* first, void* last, size_t page_size) {
@@ -1847,7 +1847,7 @@ bool os::numa_get_group_ids_for_range(const void** addresses, int* lgrp_ids, siz
}
// Reserves and attaches a shared memory segment.
-char* os::pd_reserve_memory(size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_reserve_memory(size_t bytes, bool exec) {
// Always round to os::vm_page_size(), which may be larger than 4K.
bytes = align_up(bytes, os::vm_page_size());
@@ -1996,7 +1996,7 @@ void os::large_page_init() {
return; // Nothing to do. See query_multipage_support and friends.
}
-char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size, char* req_addr, bool exec, MEMFLAGS flag) {
+char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size, char* req_addr, bool exec) {
fatal("os::reserve_memory_special should not be called on AIX.");
return nullptr;
}
@@ -2015,7 +2015,7 @@ bool os::can_commit_large_page_memory() {
return false;
}
-char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc, MEMFLAGS flag) {
+char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc) {
assert(file_desc >= 0, "file_desc is not valid");
char* result = nullptr;
@@ -2033,7 +2033,7 @@ char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, i
// Reserve memory at an arbitrary address, only if that area is
// available (and not reserved for something else).
-char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec) {
char* addr = nullptr;
// Always round to os::vm_page_size(), which may be larger than 4K.
diff --git a/src/hotspot/os/bsd/defs.S.inc b/src/hotspot/os/bsd/defs.S.inc
new file mode 100644
index 0000000000000..0d6950631a5cd
--- /dev/null
+++ b/src/hotspot/os/bsd/defs.S.inc
@@ -0,0 +1,40 @@
+#
+# Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+#
+# This code is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License version 2 only, as
+# published by the Free Software Foundation.
+#
+# This code is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# version 2 for more details (a copy is included in the LICENSE file that
+# accompanied this code).
+#
+# You should have received a copy of the GNU General Public License version
+# 2 along with this work; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+# or visit www.oracle.com if you need additional information or have any
+# questions.
+#
+
+#ifdef __APPLE__
+ # macOS prefixes symbols with _
+ #define SYMBOL(s) _ ## s
+
+ #define DECLARE_FUNC(func) \
+ .globl SYMBOL(func) ; \
+ .private_extern SYMBOL(func) ; \
+ SYMBOL(func)
+#else
+ #define SYMBOL(s) s
+
+ #define DECLARE_FUNC(func) \
+ .globl func ; \
+ .hidden func ; \
+ .type func, %function ; \
+ func
+#endif
diff --git a/src/hotspot/os/bsd/gc/x/xPhysicalMemoryBacking_bsd.cpp b/src/hotspot/os/bsd/gc/x/xPhysicalMemoryBacking_bsd.cpp
index ea7a128cd13a0..2c64c3788d34d 100644
--- a/src/hotspot/os/bsd/gc/x/xPhysicalMemoryBacking_bsd.cpp
+++ b/src/hotspot/os/bsd/gc/x/xPhysicalMemoryBacking_bsd.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2020, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -78,7 +78,7 @@ XPhysicalMemoryBacking::XPhysicalMemoryBacking(size_t max_capacity) :
_initialized(false) {
// Reserve address space for backing memory
- _base = (uintptr_t)os::reserve_memory(max_capacity, !ExecMem, mtJavaHeap);
+ _base = (uintptr_t)os::reserve_memory(max_capacity);
if (_base == 0) {
// Failed
log_error_pd(gc)("Failed to reserve address space for backing memory");
diff --git a/src/hotspot/os/bsd/gc/z/zPhysicalMemoryBacking_bsd.cpp b/src/hotspot/os/bsd/gc/z/zPhysicalMemoryBacking_bsd.cpp
index 5bc901d0c0ba2..29825a9eab291 100644
--- a/src/hotspot/os/bsd/gc/z/zPhysicalMemoryBacking_bsd.cpp
+++ b/src/hotspot/os/bsd/gc/z/zPhysicalMemoryBacking_bsd.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -79,7 +79,7 @@ ZPhysicalMemoryBacking::ZPhysicalMemoryBacking(size_t max_capacity)
_initialized(false) {
// Reserve address space for backing memory
- _base = (uintptr_t)os::reserve_memory(max_capacity, !ExecMem, mtJavaHeap);
+ _base = (uintptr_t)os::reserve_memory(max_capacity);
if (_base == 0) {
// Failed
log_error_pd(gc)("Failed to reserve address space for backing memory");
@@ -103,7 +103,7 @@ bool ZPhysicalMemoryBacking::commit_inner(zoffset offset, size_t length) const {
assert(is_aligned(length, os::vm_page_size()), "Invalid length");
log_trace(gc, heap)("Committing memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, untype(offset) + length / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
const uintptr_t addr = _base + untype(offset);
const void* const res = mmap((void*)addr, length, PROT_READ | PROT_WRITE, MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
@@ -150,7 +150,7 @@ size_t ZPhysicalMemoryBacking::uncommit(zoffset offset, size_t length) const {
assert(is_aligned(length, os::vm_page_size()), "Invalid length");
log_trace(gc, heap)("Uncommitting memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, untype(offset) + length / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
const uintptr_t start = _base + untype(offset);
const void* const res = mmap((void*)start, length, PROT_NONE, MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
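The two log-statement changes in this file fix an operator-precedence bug: `/` binds tighter than `+`, so the old expression printed the start offset plus the length in MB rather than the end address in MB. A stand-alone illustration:

```cpp
#include <cstdio>

int main() {
  const unsigned long M = 1024 * 1024;
  unsigned long offset = 2 * M, length = 4 * M;
  printf("%lu\n", offset + length / M);   // 2097156: offset + (length / M)
  printf("%lu\n", (offset + length) / M); // 6: the intended end, in MB
  return 0;
}
```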
diff --git a/src/hotspot/os/bsd/globals_bsd.hpp b/src/hotspot/os/bsd/globals_bsd.hpp
index 66fae6a49d239..850d491a11fa4 100644
--- a/src/hotspot/os/bsd/globals_bsd.hpp
+++ b/src/hotspot/os/bsd/globals_bsd.hpp
@@ -35,7 +35,7 @@
range, \
constraint) \
\
- AARCH64_ONLY(develop(bool, AssertWXAtThreadSync, false, \
+ AARCH64_ONLY(develop(bool, AssertWXAtThreadSync, true, \
"Conservatively check W^X thread state at possible safepoint" \
"or handshake"))
diff --git a/src/hotspot/os/bsd/os_bsd.cpp b/src/hotspot/os/bsd/os_bsd.cpp
index 95f8016e2e990..e5b6c74ce2f1b 100644
--- a/src/hotspot/os/bsd/os_bsd.cpp
+++ b/src/hotspot/os/bsd/os_bsd.cpp
@@ -1668,7 +1668,7 @@ void os::pd_commit_memory_or_exit(char* addr, size_t size,
void os::pd_realign_memory(char *addr, size_t bytes, size_t alignment_hint) {
}
-void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint, MEMFLAGS flag) {
+void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint) {
::madvise(addr, bytes, MADV_DONTNEED);
}
@@ -1766,13 +1766,13 @@ bool os::pd_uncommit_memory(char* addr, size_t size, bool exec) {
}
bool os::pd_create_stack_guard_pages(char* addr, size_t size) {
- return os::commit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::commit_memory(addr, size, !ExecMem);
}
// If this is a growable mapping, remove the guard pages entirely by
// munmap()ping them. If not, just call uncommit_memory().
bool os::remove_stack_guard_pages(char* addr, size_t size) {
- return os::uncommit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::uncommit_memory(addr, size);
}
// 'requested_addr' is only treated as a hint, the return value may or
@@ -1809,7 +1809,7 @@ static int anon_munmap(char * addr, size_t size) {
}
}
-char* os::pd_reserve_memory(size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_reserve_memory(size_t bytes, bool exec) {
return anon_mmap(nullptr /* addr */, bytes, exec);
}
@@ -1869,7 +1869,7 @@ void os::large_page_init() {
}
-char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size, char* req_addr, bool exec, MEMFLAGS flag) {
+char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size, char* req_addr, bool exec) {
fatal("os::reserve_memory_special should not be called on BSD.");
return nullptr;
}
@@ -1888,9 +1888,9 @@ bool os::can_commit_large_page_memory() {
return false;
}
-char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc, MEMFLAGS flag) {
+char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc) {
assert(file_desc >= 0, "file_desc is not valid");
- char* result = pd_attempt_reserve_memory_at(requested_addr, bytes, !ExecMem, flag);
+ char* result = pd_attempt_reserve_memory_at(requested_addr, bytes, !ExecMem);
if (result != nullptr) {
if (replace_existing_mapping_with_file_mapping(result, bytes, file_desc) == nullptr) {
vm_exit_during_initialization(err_msg("Error in mapping Java heap at the given filesystem directory"));
@@ -1902,7 +1902,7 @@ char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, i
// Reserve memory at an arbitrary address, only if that area is
// available (and not reserved for something else).
-char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec) {
// Assert only that the size is a multiple of the page size, since
// that's all that mmap requires, and since that's all we really know
// about at this low abstraction level. If we need higher alignment,
diff --git a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
index 9053e8eac302a..be9ff1440c02d 100644
--- a/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupSubsystem_linux.cpp
@@ -556,21 +556,162 @@ jlong CgroupSubsystem::memory_limit_in_bytes() {
return mem_limit;
}
-jlong CgroupSubsystem::limit_from_str(char* limit_str) {
+bool CgroupController::read_string(const char* filename, char* buf, size_t buf_size) {
+ assert(buf != nullptr, "buffer must not be null");
+ assert(filename != nullptr, "filename must be given");
+ char* s_path = subsystem_path();
+ if (s_path == nullptr) {
+ log_debug(os, container)("read_string: subsystem path is null");
+ return false;
+ }
+
+ stringStream file_path;
+ file_path.print_raw(s_path);
+ file_path.print_raw(filename);
+
+ if (file_path.size() > MAXPATHLEN) {
+ log_debug(os, container)("File path too long %s, %s", file_path.base(), filename);
+ return false;
+ }
+ const char* absolute_path = file_path.freeze();
+ log_trace(os, container)("Path to %s is %s", filename, absolute_path);
+
+ FILE* fp = os::fopen(absolute_path, "r");
+ if (fp == nullptr) {
+ log_debug(os, container)("Open of file %s failed, %s", absolute_path, os::strerror(errno));
+ return false;
+ }
+
+ // Read a single line into the provided buffer.
+ // At most buf_size - 1 characters.
+ char* line = fgets(buf, buf_size, fp);
+ fclose(fp);
+ if (line == nullptr) {
+ log_debug(os, container)("Empty file %s", absolute_path);
+ return false;
+ }
+ size_t len = strlen(line);
+ assert(len <= buf_size - 1, "At most buf_size - 1 bytes can be read");
+ if (line[len - 1] == '\n') {
+ line[len - 1] = '\0'; // trim trailing new line
+ }
+ return true;
+}
+
+bool CgroupController::read_number(const char* filename, julong* result) {
+ char buf[1024];
+ bool is_ok = read_string(filename, buf, 1024);
+ if (!is_ok) {
+ return false;
+ }
+ int matched = sscanf(buf, JULONG_FORMAT, result);
+ if (matched == 1) {
+ return true;
+ }
+ return false;
+}
+
+bool CgroupController::read_number_handle_max(const char* filename, jlong* result) {
+ char buf[1024];
+ bool is_ok = read_string(filename, buf, 1024);
+ if (!is_ok) {
+ return false;
+ }
+ jlong val = limit_from_str(buf);
+ if (val == OSCONTAINER_ERROR) {
+ return false;
+ }
+ *result = val;
+ return true;
+}
+
+bool CgroupController::read_numerical_key_value(const char* filename, const char* key, julong* result) {
+ assert(key != nullptr, "key must be given");
+ assert(result != nullptr, "result pointer must not be null");
+ assert(filename != nullptr, "file to search in must be given");
+ char* s_path = subsystem_path();
+ if (s_path == nullptr) {
+ log_debug(os, container)("read_numerical_key_value: subsystem path is null");
+ return false;
+ }
+
+ stringStream file_path;
+ file_path.print_raw(s_path);
+ file_path.print_raw(filename);
+
+ if (file_path.size() > MAXPATHLEN) {
+ log_debug(os, container)("File path too long %s, %s", file_path.base(), filename);
+ return false;
+ }
+ const char* absolute_path = file_path.freeze();
+ log_trace(os, container)("Path to %s is %s", filename, absolute_path);
+ FILE* fp = os::fopen(absolute_path, "r");
+ if (fp == nullptr) {
+ log_debug(os, container)("Open of file %s failed, %s", absolute_path, os::strerror(errno));
+ return false;
+ }
+
+ const int buf_len = MAXPATHLEN+1;
+ char buf[buf_len];
+ char* line = fgets(buf, buf_len, fp);
+ bool found_match = false;
+ // File consists of multiple lines in a "key value"
+ // fashion, we have to find the key.
+ const size_t key_len = strlen(key);
+ for (; line != nullptr; line = fgets(buf, buf_len, fp)) {
+ char after_key = line[key_len];
+ if (strncmp(line, key, key_len) == 0
+ && isspace(after_key) != 0
+ && after_key != '\n') {
+ // Skip key, skip space
+ const char* value_substr = line + key_len + 1;
+ int matched = sscanf(value_substr, JULONG_FORMAT, result);
+ found_match = matched == 1;
+ if (found_match) {
+ break;
+ }
+ }
+ }
+ fclose(fp);
+ if (found_match) {
+ return true;
+ }
+ log_debug(os, container)("Type %s (key == %s) not found in file %s", JULONG_FORMAT,
+ key, absolute_path);
+ return false;
+}
+
+bool CgroupController::read_numerical_tuple_value(const char* filename, bool use_first, jlong* result) {
+ char buf[1024];
+ bool is_ok = read_string(filename, buf, 1024);
+ if (!is_ok) {
+ return false;
+ }
+ char token[1024];
+ const int matched = sscanf(buf, (use_first ? "%1023s %*s" : "%*s %1023s"), token);
+ if (matched != 1) {
+ return false;
+ }
+ jlong val = limit_from_str(token);
+ if (val == OSCONTAINER_ERROR) {
+ return false;
+ }
+ *result = val;
+ return true;
+}
+
+jlong CgroupController::limit_from_str(char* limit_str) {
if (limit_str == nullptr) {
return OSCONTAINER_ERROR;
}
// Unlimited memory in cgroups is the literal string 'max' for
// some controllers, for example the pids controller.
if (strcmp("max", limit_str) == 0) {
- os::free(limit_str);
return (jlong)-1;
}
julong limit;
if (sscanf(limit_str, JULONG_FORMAT, &limit) != 1) {
- os::free(limit_str);
return OSCONTAINER_ERROR;
}
- os::free(limit_str);
return (jlong)limit;
}
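limit_from_str no longer frees its argument, since callers now pass stack buffers rather than heap-allocated strings. Its contract, mirrored in a stand-alone sketch (the usual -2 value of OSCONTAINER_ERROR is an assumption here):

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

const int64_t OSCONTAINER_ERROR_REF = -2; // assumed sentinel value

// 'max' means unlimited (-1); otherwise parse an unsigned number.
int64_t limit_from_str_ref(const char* limit_str) {
  if (limit_str == nullptr) return OSCONTAINER_ERROR_REF;
  if (strcmp("max", limit_str) == 0) return -1; // unlimited
  unsigned long long limit;
  if (sscanf(limit_str, "%llu", &limit) != 1) return OSCONTAINER_ERROR_REF;
  return (int64_t)limit;
}
```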
diff --git a/src/hotspot/os/linux/cgroupSubsystem_linux.hpp b/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
index 7943d2fe9de45..6c17ff4508d55 100644
--- a/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupSubsystem_linux.hpp
@@ -69,148 +69,91 @@
#define MEMORY_IDX 3
#define PIDS_IDX 4
-typedef char * cptr;
-
-class CgroupController: public CHeapObj<mtInternal> {
- public:
- virtual char *subsystem_path() = 0;
-};
-
-PRAGMA_DIAG_PUSH
-PRAGMA_FORMAT_NONLITERAL_IGNORED
-// Parses a subsystem's file, looking for a matching line.
-// If key is null, then the first line will be matched with scan_fmt.
-// If key isn't null, then each line will be matched, looking for something that matches "$key $scan_fmt".
-// The matching value will be assigned to returnval.
-// scan_fmt uses scanf() syntax.
-// Return value: 0 on match, OSCONTAINER_ERROR on error.
-template <typename T> int subsystem_file_line_contents(CgroupController* c,
- const char *filename,
- const char *key,
- const char *scan_fmt,
- T returnval) {
- if (c == nullptr) {
- log_debug(os, container)("subsystem_file_line_contents: CgroupController* is null");
- return OSCONTAINER_ERROR;
- }
- if (c->subsystem_path() == nullptr) {
- log_debug(os, container)("subsystem_file_line_contents: subsystem path is null");
- return OSCONTAINER_ERROR;
- }
-
- stringStream file_path;
- file_path.print_raw(c->subsystem_path());
- file_path.print_raw(filename);
-
- if (file_path.size() > (MAXPATHLEN-1)) {
- log_debug(os, container)("File path too long %s, %s", file_path.base(), filename);
- return OSCONTAINER_ERROR;
- }
- const char* absolute_path = file_path.freeze();
- log_trace(os, container)("Path to %s is %s", filename, absolute_path);
-
- FILE* fp = os::fopen(absolute_path, "r");
- if (fp == nullptr) {
- log_debug(os, container)("Open of file %s failed, %s", absolute_path, os::strerror(errno));
- return OSCONTAINER_ERROR;
- }
-
- const int buf_len = MAXPATHLEN+1;
- char buf[buf_len];
- char* line = fgets(buf, buf_len, fp);
- if (line == nullptr) {
- log_debug(os, container)("Empty file %s", absolute_path);
- fclose(fp);
- return OSCONTAINER_ERROR;
- }
-
- bool found_match = false;
- if (key == nullptr) {
- // File consists of a single line according to caller, with only a value
- int matched = sscanf(line, scan_fmt, returnval);
- found_match = matched == 1;
- } else {
- // File consists of multiple lines in a "key value"
- // fashion, we have to find the key.
- const int key_len = (int)strlen(key);
- for (; line != nullptr; line = fgets(buf, buf_len, fp)) {
- char* key_substr = strstr(line, key);
- char after_key = line[key_len];
- if (key_substr == line
- && isspace(after_key) != 0
- && after_key != '\n') {
- // Skip key, skip space
- const char* value_substr = line + key_len + 1;
- int matched = sscanf(value_substr, scan_fmt, returnval);
- found_match = matched == 1;
- if (found_match) {
- break;
- }
- }
- }
- }
- fclose(fp);
- if (found_match) {
- return 0;
- }
- log_debug(os, container)("Type %s (key == %s) not found in file %s", scan_fmt,
- (key == nullptr ? "null" : key), absolute_path);
- return OSCONTAINER_ERROR;
-}
-PRAGMA_DIAG_POP
-
-// log_fmt can be different than scan_fmt. For example
-// cpu_period() for cgv2 uses log_fmt='%d' and scan_fmt='%*s %d'
-#define GET_CONTAINER_INFO(return_type, subsystem, filename, \
- logstring, log_fmt, scan_fmt, variable) \
- return_type variable; \
-{ \
- int err; \
- err = subsystem_file_line_contents(subsystem, \
- filename, \
- nullptr, \
- scan_fmt, \
- &variable); \
- if (err != 0) { \
- log_trace(os, container)(logstring "%d", OSCONTAINER_ERROR); \
- return (return_type) OSCONTAINER_ERROR; \
- } \
- \
- log_trace(os, container)(logstring log_fmt, variable); \
+#define CONTAINER_READ_NUMBER_CHECKED(controller, filename, log_string, retval) \
+{ \
+ bool is_ok; \
+ is_ok = controller->read_number(filename, &retval); \
+ if (!is_ok) { \
+ log_trace(os, container)(log_string " failed: %d", OSCONTAINER_ERROR); \
+ return OSCONTAINER_ERROR; \
+ } \
+ log_trace(os, container)(log_string " is: " JULONG_FORMAT, retval); \
}
-#define GET_CONTAINER_INFO_CPTR(return_type, subsystem, filename, \
- logstring, scan_fmt, variable, bufsize) \
- char variable[bufsize]; \
-{ \
- int err; \
- err = subsystem_file_line_contents(subsystem, \
- filename, \
- nullptr, \
- scan_fmt, \
- variable); \
- if (err != 0) \
- return (return_type) nullptr; \
- \
- log_trace(os, container)(logstring, variable); \
+#define CONTAINER_READ_NUMBER_CHECKED_MAX(controller, filename, log_string, retval) \
+{ \
+ bool is_ok; \
+ is_ok = controller->read_number_handle_max(filename, &retval); \
+ if (!is_ok) { \
+ log_trace(os, container)(log_string " failed: %d", OSCONTAINER_ERROR); \
+ return OSCONTAINER_ERROR; \
+ } \
+ log_trace(os, container)(log_string " is: " JLONG_FORMAT, retval); \
}
-#define GET_CONTAINER_INFO_LINE(return_type, controller, filename, \
- matchline, logstring, scan_fmt, variable) \
- return_type variable; \
-{ \
- int err; \
- err = subsystem_file_line_contents(controller, \
- filename, \
- matchline, \
- scan_fmt, \
- &variable); \
- if (err != 0) \
- return (return_type) OSCONTAINER_ERROR; \
- \
- log_trace(os, container)(logstring, variable); \
+#define CONTAINER_READ_STRING_CHECKED(controller, filename, log_string, retval, buf_size) \
+{ \
+ bool is_ok; \
+ is_ok = controller->read_string(filename, retval, buf_size); \
+ if (!is_ok) { \
+ log_trace(os, container)(log_string " failed: %d", OSCONTAINER_ERROR); \
+ return nullptr; \
+ } \
+ log_trace(os, container)(log_string " is: %s", retval); \
}
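For reference, a CONTAINER_READ_NUMBER_CHECKED call site expands to roughly the following (controller and variable names hypothetical; the call site declares `julong memlimit;` itself):

```cpp
// Approximate expansion of
//   CONTAINER_READ_NUMBER_CHECKED(controller, "/memory.limit_in_bytes",
//                                 "Memory Limit", memlimit);
{
  bool is_ok;
  is_ok = controller->read_number("/memory.limit_in_bytes", &memlimit);
  if (!is_ok) {
    log_trace(os, container)("Memory Limit failed: %d", OSCONTAINER_ERROR);
    return OSCONTAINER_ERROR;
  }
  log_trace(os, container)("Memory Limit is: " JULONG_FORMAT, memlimit);
}
```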
+class CgroupController: public CHeapObj<mtInternal> {
+ public:
+ virtual char* subsystem_path() = 0;
+
+ /* Read a numerical value as unsigned long
+ *
+ * returns: false if any error occurred. true otherwise and
+ * the parsed value is set in the provided julong pointer.
+ */
+ bool read_number(const char* filename, julong* result);
+
+ /* Convenience method to deal with numbers as well as the string 'max'
+ * in interface files. Otherwise same as read_number().
+ *
+ * returns: false if any error occurred. true otherwise and
+ * the parsed value (which might be negative) is being set in
+ * the provided jlong pointer.
+ */
+ bool read_number_handle_max(const char* filename, jlong* result);
+
+ /* Read a string of at most buf_size - 1 characters from the interface file.
+ * The provided buffer must be at least buf_size in size so as to account
+ * for the null terminating character. Callers must ensure that the buffer
+ * is appropriately in-scope and of sufficient size.
+ *
+   * returns: false if any error occurred. true otherwise and the passed
+ * in buffer will contain the first buf_size - 1 characters of the string
+ * or up to the first new line character ('\n') whichever comes first.
+ */
+ bool read_string(const char* filename, char* buf, size_t buf_size);
+
+  /* Read a tuple value as a number. Tuple is: '<first> <second>'.
+ * Handles 'max' (for unlimited) for any tuple value. This is handy for
+ * parsing interface files like cpu.max which contain such tuples.
+ *
+ * returns: false if any error occurred. true otherwise and the parsed
+ * value of the appropriate tuple entry set in the provided jlong pointer.
+ */
+ bool read_numerical_tuple_value(const char* filename, bool use_first, jlong* result);
+
+ /* Read a numerical value from a multi-line interface file. The matched line is
+ * determined by the provided 'key'. The associated numerical value is being set
+ * via the passed in julong pointer. Example interface file 'memory.stat'
+ *
+ * returns: false if any error occurred. true otherwise and the parsed value is
+ * being set in the provided julong pointer.
+ */
+ bool read_numerical_key_value(const char* filename, const char* key, julong* result);
+
+ private:
+ static jlong limit_from_str(char* limit_str);
+};
 class CachedMetric : public CHeapObj<mtInternal>{
private:
@@ -255,7 +198,6 @@ class CgroupSubsystem: public CHeapObj<mtInternal> {
public:
jlong memory_limit_in_bytes();
int active_processor_count();
- jlong limit_from_str(char* limit_str);
virtual int cpu_quota() = 0;
virtual int cpu_period() = 0;
diff --git a/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp b/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
index b83da9099f80d..72adaa23b8132 100644
--- a/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupV1Subsystem_linux.cpp
@@ -75,9 +75,9 @@ void CgroupV1Controller::set_subsystem_path(char *cgroup_path) {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV1MemoryController::uses_mem_hierarchy() {
- GET_CONTAINER_INFO(jlong, this, "/memory.use_hierarchy",
- "Use Hierarchy is: ", JLONG_FORMAT, JLONG_FORMAT, use_hierarchy);
- return use_hierarchy;
+ julong use_hierarchy;
+ CONTAINER_READ_NUMBER_CHECKED(this, "/memory.use_hierarchy", "Use Hierarchy", use_hierarchy);
+ return (jlong)use_hierarchy;
}
void CgroupV1MemoryController::set_subsystem_path(char *cgroup_path) {
@@ -89,15 +89,20 @@ void CgroupV1MemoryController::set_subsystem_path(char *cgroup_path) {
}
jlong CgroupV1Subsystem::read_memory_limit_in_bytes() {
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.limit_in_bytes",
- "Memory Limit is: ", JULONG_FORMAT, JULONG_FORMAT, memlimit);
-
+ julong memlimit;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.limit_in_bytes", "Memory Limit", memlimit);
if (memlimit >= os::Linux::physical_memory()) {
log_trace(os, container)("Non-Hierarchical Memory Limit is: Unlimited");
     CgroupV1MemoryController* mem_controller = reinterpret_cast<CgroupV1MemoryController*>(_memory->controller());
if (mem_controller->is_hierarchical()) {
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat", "hierarchical_memory_limit",
- "Hierarchical Memory Limit is: " JULONG_FORMAT, JULONG_FORMAT, hier_memlimit)
+ julong hier_memlimit;
+ bool is_ok = _memory->controller()->read_numerical_key_value("/memory.stat",
+ "hierarchical_memory_limit",
+ &hier_memlimit);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("Hierarchical Memory Limit is: " JULONG_FORMAT, hier_memlimit);
if (hier_memlimit >= os::Linux::physical_memory()) {
log_trace(os, container)("Hierarchical Memory Limit is: Unlimited");
} else {
@@ -125,16 +130,22 @@ jlong CgroupV1Subsystem::read_memory_limit_in_bytes() {
*/
jlong CgroupV1Subsystem::read_mem_swap() {
julong host_total_memsw;
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.memsw.limit_in_bytes",
- "Memory and Swap Limit is: ", JULONG_FORMAT, JULONG_FORMAT, memswlimit);
+ julong hier_memswlimit;
+ julong memswlimit;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.memsw.limit_in_bytes", "Memory and Swap Limit", memswlimit);
host_total_memsw = os::Linux::host_swap() + os::Linux::physical_memory();
if (memswlimit >= host_total_memsw) {
log_trace(os, container)("Non-Hierarchical Memory and Swap Limit is: Unlimited");
     CgroupV1MemoryController* mem_controller = reinterpret_cast<CgroupV1MemoryController*>(_memory->controller());
if (mem_controller->is_hierarchical()) {
const char* matchline = "hierarchical_memsw_limit";
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat", matchline,
- "Hierarchical Memory and Swap Limit is : " JULONG_FORMAT, JULONG_FORMAT, hier_memswlimit)
+ bool is_ok = _memory->controller()->read_numerical_key_value("/memory.stat",
+ matchline,
+ &hier_memswlimit);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("Hierarchical Memory and Swap Limit is: " JULONG_FORMAT, hier_memswlimit);
if (hier_memswlimit >= host_total_memsw) {
log_trace(os, container)("Hierarchical Memory and Swap Limit is: Unlimited");
} else {
@@ -168,29 +179,34 @@ jlong CgroupV1Subsystem::memory_and_swap_limit_in_bytes() {
return memory_swap;
}
+static inline
+jlong memory_swap_usage_impl(CgroupController* ctrl) {
+ julong memory_swap_usage;
+ CONTAINER_READ_NUMBER_CHECKED(ctrl, "/memory.memsw.usage_in_bytes", "mem swap usage", memory_swap_usage);
+ return (jlong)memory_swap_usage;
+}
+
jlong CgroupV1Subsystem::memory_and_swap_usage_in_bytes() {
jlong memory_sw_limit = memory_and_swap_limit_in_bytes();
jlong memory_limit = CgroupSubsystem::memory_limit_in_bytes();
if (memory_sw_limit > 0 && memory_limit > 0) {
jlong delta_swap = memory_sw_limit - memory_limit;
if (delta_swap > 0) {
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.memsw.usage_in_bytes",
- "mem swap usage is: ", JULONG_FORMAT, JULONG_FORMAT, memory_swap_usage);
- return (jlong)memory_swap_usage;
+ return memory_swap_usage_impl(_memory->controller());
}
}
return memory_usage_in_bytes();
}
jlong CgroupV1Subsystem::read_mem_swappiness() {
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.swappiness",
- "Swappiness is: ", JULONG_FORMAT, JULONG_FORMAT, swappiness);
- return swappiness;
+ julong swappiness;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.swappiness", "Swappiness", swappiness);
+ return (jlong)swappiness;
}
jlong CgroupV1Subsystem::memory_soft_limit_in_bytes() {
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.soft_limit_in_bytes",
- "Memory Soft Limit is: ", JULONG_FORMAT, JULONG_FORMAT, memsoftlimit);
+ julong memsoftlimit;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.soft_limit_in_bytes", "Memory Soft Limit", memsoftlimit);
if (memsoftlimit >= os::Linux::physical_memory()) {
log_trace(os, container)("Memory Soft Limit is: Unlimited");
return (jlong)-1;
@@ -209,9 +225,9 @@ jlong CgroupV1Subsystem::memory_soft_limit_in_bytes() {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV1Subsystem::memory_usage_in_bytes() {
- GET_CONTAINER_INFO(jlong, _memory->controller(), "/memory.usage_in_bytes",
- "Memory Usage is: ", JLONG_FORMAT, JLONG_FORMAT, memusage);
- return memusage;
+ julong memusage;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.usage_in_bytes", "Memory Usage", memusage);
+ return (jlong)memusage;
}
/* memory_max_usage_in_bytes
@@ -223,32 +239,44 @@ jlong CgroupV1Subsystem::memory_usage_in_bytes() {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV1Subsystem::memory_max_usage_in_bytes() {
- GET_CONTAINER_INFO(jlong, _memory->controller(), "/memory.max_usage_in_bytes",
- "Maximum Memory Usage is: ", JLONG_FORMAT, JLONG_FORMAT, memmaxusage);
- return memmaxusage;
+ julong memmaxusage;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.max_usage_in_bytes", "Maximum Memory Usage", memmaxusage);
+ return (jlong)memmaxusage;
}
jlong CgroupV1Subsystem::rss_usage_in_bytes() {
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat",
- "rss", JULONG_FORMAT, JULONG_FORMAT, rss);
- return rss;
+ julong rss;
+ bool is_ok = _memory->controller()->read_numerical_key_value("/memory.stat",
+ "rss",
+ &rss);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("RSS usage is: " JULONG_FORMAT, rss);
+ return (jlong)rss;
}
jlong CgroupV1Subsystem::cache_usage_in_bytes() {
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat",
- "cache", JULONG_FORMAT, JULONG_FORMAT, cache);
+ julong cache;
+ bool is_ok = _memory->controller()->read_numerical_key_value("/memory.stat",
+ "cache",
+ &cache);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("Cache usage is: " JULONG_FORMAT, cache);
return cache;
}
jlong CgroupV1Subsystem::kernel_memory_usage_in_bytes() {
- GET_CONTAINER_INFO(jlong, _memory->controller(), "/memory.kmem.usage_in_bytes",
- "Kernel Memory Usage is: ", JLONG_FORMAT, JLONG_FORMAT, kmem_usage);
- return kmem_usage;
+ julong kmem_usage;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.kmem.usage_in_bytes", "Kernel Memory Usage", kmem_usage);
+ return (jlong)kmem_usage;
}
jlong CgroupV1Subsystem::kernel_memory_limit_in_bytes() {
- GET_CONTAINER_INFO(julong, _memory->controller(), "/memory.kmem.limit_in_bytes",
- "Kernel Memory Limit is: ", JULONG_FORMAT, JULONG_FORMAT, kmem_limit);
+ julong kmem_limit;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.kmem.limit_in_bytes", "Kernel Memory Limit", kmem_limit);
if (kmem_limit >= os::Linux::physical_memory()) {
return (jlong)-1;
}
@@ -256,9 +284,9 @@ jlong CgroupV1Subsystem::kernel_memory_limit_in_bytes() {
}
jlong CgroupV1Subsystem::kernel_memory_max_usage_in_bytes() {
- GET_CONTAINER_INFO(jlong, _memory->controller(), "/memory.kmem.max_usage_in_bytes",
- "Maximum Kernel Memory Usage is: ", JLONG_FORMAT, JLONG_FORMAT, kmem_max_usage);
- return kmem_max_usage;
+ julong kmem_max_usage;
+ CONTAINER_READ_NUMBER_CHECKED(_memory->controller(), "/memory.kmem.max_usage_in_bytes", "Maximum Kernel Memory Usage", kmem_max_usage);
+ return (jlong)kmem_max_usage;
}
void CgroupV1Subsystem::print_version_specific_info(outputStream* st) {
@@ -271,15 +299,15 @@ void CgroupV1Subsystem::print_version_specific_info(outputStream* st) {
OSContainer::print_container_helper(st, kmem_max_usage, "kernel_memory_limit_in_bytes");
}
-char * CgroupV1Subsystem::cpu_cpuset_cpus() {
- GET_CONTAINER_INFO_CPTR(cptr, _cpuset, "/cpuset.cpus",
- "cpuset.cpus is: %s", "%1023s", cpus, 1024);
+char* CgroupV1Subsystem::cpu_cpuset_cpus() {
+ char cpus[1024];
+ CONTAINER_READ_STRING_CHECKED(_cpuset, "/cpuset.cpus", "cpuset.cpus", cpus, 1024);
return os::strdup(cpus);
}
-char * CgroupV1Subsystem::cpu_cpuset_memory_nodes() {
- GET_CONTAINER_INFO_CPTR(cptr, _cpuset, "/cpuset.mems",
- "cpuset.mems is: %s", "%1023s", mems, 1024);
+char* CgroupV1Subsystem::cpu_cpuset_memory_nodes() {
+ char mems[1024];
+ CONTAINER_READ_STRING_CHECKED(_cpuset, "/cpuset.mems", "cpuset.mems", mems, 1024);
return os::strdup(mems);
}
@@ -294,15 +322,24 @@ char * CgroupV1Subsystem::cpu_cpuset_memory_nodes() {
* OSCONTAINER_ERROR for not supported
*/
int CgroupV1Subsystem::cpu_quota() {
- GET_CONTAINER_INFO(int, _cpu->controller(), "/cpu.cfs_quota_us",
- "CPU Quota is: ", "%d", "%d", quota);
- return quota;
+ julong quota;
+ bool is_ok = _cpu->controller()->
+ read_number("/cpu.cfs_quota_us", &quota);
+ if (!is_ok) {
+ log_trace(os, container)("CPU Quota failed: %d", OSCONTAINER_ERROR);
+ return OSCONTAINER_ERROR;
+ }
+ // cast to int since the read value might be negative
+ // and we want to avoid logging -1 as a large unsigned value.
+ int quota_int = (int)quota;
+ log_trace(os, container)("CPU Quota is: %d", quota_int);
+ return quota_int;
}
int CgroupV1Subsystem::cpu_period() {
- GET_CONTAINER_INFO(int, _cpu->controller(), "/cpu.cfs_period_us",
- "CPU Period is: ", "%d", "%d", period);
- return period;
+ julong period;
+ CONTAINER_READ_NUMBER_CHECKED(_cpu->controller(), "/cpu.cfs_period_us", "CPU Period", period);
+ return (int)period;
}
/* cpu_shares
@@ -316,19 +353,13 @@ int CgroupV1Subsystem::cpu_period() {
* OSCONTAINER_ERROR for not supported
*/
int CgroupV1Subsystem::cpu_shares() {
- GET_CONTAINER_INFO(int, _cpu->controller(), "/cpu.shares",
- "CPU Shares is: ", "%d", "%d", shares);
+ julong shares;
+ CONTAINER_READ_NUMBER_CHECKED(_cpu->controller(), "/cpu.shares", "CPU Shares", shares);
+ int shares_int = (int)shares;
// Convert 1024 to no shares setup
- if (shares == 1024) return -1;
-
- return shares;
-}
-
-char* CgroupV1Subsystem::pids_max_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _pids, "/pids.max",
- "Maximum number of tasks is: %s", "%1023s", pidsmax, 1024);
- return os::strdup(pidsmax);
+ if (shares_int == 1024) return -1;
+
+ return shares_int;
}
/* pids_max
@@ -342,8 +373,9 @@ char* CgroupV1Subsystem::pids_max_val() {
*/
jlong CgroupV1Subsystem::pids_max() {
if (_pids == nullptr) return OSCONTAINER_ERROR;
- char * pidsmax_str = pids_max_val();
- return limit_from_str(pidsmax_str);
+ jlong pids_max;
+ CONTAINER_READ_NUMBER_CHECKED_MAX(_pids, "/pids.max", "Maximum number of tasks", pids_max);
+ return pids_max;
}
/* pids_current
@@ -356,7 +388,7 @@ jlong CgroupV1Subsystem::pids_max() {
*/
jlong CgroupV1Subsystem::pids_current() {
if (_pids == nullptr) return OSCONTAINER_ERROR;
- GET_CONTAINER_INFO(jlong, _pids, "/pids.current",
- "Current number of tasks is: ", JLONG_FORMAT, JLONG_FORMAT, pids_current);
- return pids_current;
+ julong pids_current;
+ CONTAINER_READ_NUMBER_CHECKED(_pids, "/pids.current", "Current number of tasks", pids_current);
+ return (jlong)pids_current;
}
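
The call sites above rely on a family of checked-read macros (CONTAINER_READ_NUMBER_CHECKED and friends) that replace the old scanf-style GET_CONTAINER_INFO macros. Only the call sites appear in this part of the patch; a plausible expansion, inferred from how they are used (read a julong, return OSCONTAINER_ERROR from the enclosing function on failure, trace-log on success), is:

    // Hypothetical expansion inferred from the call sites; the actual
    // definition lives elsewhere in the patch and may differ in detail.
    #define CONTAINER_READ_NUMBER_CHECKED(controller, filename, log_string, retval) \
    {                                                                                \
      bool is_ok = controller->read_number(filename, &retval);                       \
      if (!is_ok) {                                                                  \
        log_trace(os, container)(log_string " failed: %d", OSCONTAINER_ERROR);       \
        return OSCONTAINER_ERROR;                                                    \
      }                                                                              \
      log_trace(os, container)(log_string " is: " JULONG_FORMAT, retval);            \
    }
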
diff --git a/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp b/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
index 3205973961a92..254b17de0ba8c 100644
--- a/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupV1Subsystem_linux.hpp
@@ -114,8 +114,6 @@ class CgroupV1Subsystem: public CgroupSubsystem {
CgroupV1Controller* _cpuacct = nullptr;
CgroupV1Controller* _pids = nullptr;
- char * pids_max_val();
-
jlong read_mem_swappiness();
jlong read_mem_swap();
diff --git a/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp b/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
index d7c6464caa0fa..1ba7c1e69d5b3 100644
--- a/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
+++ b/src/hotspot/os/linux/cgroupV2Subsystem_linux.cpp
@@ -35,10 +35,11 @@
* OSCONTAINER_ERROR for not supported
*/
int CgroupV2Subsystem::cpu_shares() {
- GET_CONTAINER_INFO(int, _unified, "/cpu.weight",
- "Raw value for CPU Shares is: ", "%d", "%d", shares);
+ julong shares;
+ CONTAINER_READ_NUMBER_CHECKED(_unified, "/cpu.weight", "Raw value for CPU Shares", shares);
+ int shares_int = (int)shares;
// Convert default value of 100 to no shares setup
- if (shares == 100) {
+ if (shares_int == 100) {
log_debug(os, container)("CPU Shares is: %d", -1);
return -1;
}
@@ -50,7 +51,7 @@ int CgroupV2Subsystem::cpu_shares() {
// Use the inverse of (x == OCI value, y == cgroupsv2 value):
// ((262142 * y - 1)/9999) + 2 = x
//
- int x = 262142 * shares - 1;
+ int x = 262142 * shares_int - 1;
double frac = x/9999.0;
x = ((int)frac) + 2;
log_trace(os, container)("Scaled CPU shares value is: %d", x);
@@ -83,33 +84,37 @@ int CgroupV2Subsystem::cpu_shares() {
* OSCONTAINER_ERROR for not supported
*/
int CgroupV2Subsystem::cpu_quota() {
- char * cpu_quota_str = cpu_quota_val();
- int limit = (int)limit_from_str(cpu_quota_str);
+ jlong quota_val;
+ bool is_ok = _unified->read_numerical_tuple_value("/cpu.max", true /* use_first */, &quota_val);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ int limit = (int)quota_val;
log_trace(os, container)("CPU Quota is: %d", limit);
return limit;
}
-char * CgroupV2Subsystem::cpu_cpuset_cpus() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/cpuset.cpus",
- "cpuset.cpus is: %s", "%1023s", cpus, 1024);
+char* CgroupV2Subsystem::cpu_cpuset_cpus() {
+ char cpus[1024];
+ CONTAINER_READ_STRING_CHECKED(_unified, "/cpuset.cpus", "cpuset.cpus", cpus, 1024);
return os::strdup(cpus);
}
-char* CgroupV2Subsystem::cpu_quota_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/cpu.max",
- "Raw value for CPU quota is: %s", "%1023s %*d", quota, 1024);
- return os::strdup(quota);
-}
-
-char * CgroupV2Subsystem::cpu_cpuset_memory_nodes() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/cpuset.mems",
- "cpuset.mems is: %s", "%1023s", mems, 1024);
+char* CgroupV2Subsystem::cpu_cpuset_memory_nodes() {
+ char mems[1024];
+ CONTAINER_READ_STRING_CHECKED(_unified, "/cpuset.mems", "cpuset.mems", mems, 1024);
return os::strdup(mems);
}
int CgroupV2Subsystem::cpu_period() {
- GET_CONTAINER_INFO(int, _unified, "/cpu.max",
- "CPU Period is: ", "%d", "%*s %d", period);
+ jlong period_val;
+ bool is_ok = _unified->read_numerical_tuple_value("/cpu.max", false /* use_first */, &period_val);
+ if (!is_ok) {
+ log_trace(os, container)("CPU Period failed: %d", OSCONTAINER_ERROR);
+ return OSCONTAINER_ERROR;
+ }
+ int period = (int)period_val;
+ log_trace(os, container)("CPU Period is: %d", period);
return period;
}
@@ -123,14 +128,15 @@ int CgroupV2Subsystem::cpu_period() {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV2Subsystem::memory_usage_in_bytes() {
- GET_CONTAINER_INFO(jlong, _unified, "/memory.current",
- "Memory Usage is: ", JLONG_FORMAT, JLONG_FORMAT, memusage);
- return memusage;
+ julong memusage;
+ CONTAINER_READ_NUMBER_CHECKED(_unified, "/memory.current", "Memory Usage", memusage);
+ return (jlong)memusage;
}
jlong CgroupV2Subsystem::memory_soft_limit_in_bytes() {
- char* mem_soft_limit_str = mem_soft_limit_val();
- return limit_from_str(mem_soft_limit_str);
+ jlong mem_soft_limit;
+ CONTAINER_READ_NUMBER_CHECKED_MAX(_unified, "/memory.low", "Memory Soft Limit", mem_soft_limit);
+ return mem_soft_limit;
}
jlong CgroupV2Subsystem::memory_max_usage_in_bytes() {
@@ -140,21 +146,25 @@ jlong CgroupV2Subsystem::memory_max_usage_in_bytes() {
}
jlong CgroupV2Subsystem::rss_usage_in_bytes() {
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat",
- "anon", JULONG_FORMAT, JULONG_FORMAT, rss);
- return rss;
+ julong rss;
+ bool is_ok = _memory->controller()->
+ read_numerical_key_value("/memory.stat", "anon", &rss);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("RSS usage is: " JULONG_FORMAT, rss);
+ return (jlong)rss;
}
jlong CgroupV2Subsystem::cache_usage_in_bytes() {
- GET_CONTAINER_INFO_LINE(julong, _memory->controller(), "/memory.stat",
- "file", JULONG_FORMAT, JULONG_FORMAT, cache);
- return cache;
-}
-
-char* CgroupV2Subsystem::mem_soft_limit_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/memory.low",
- "Memory Soft Limit is: %s", "%1023s", mem_soft_limit_str, 1024);
- return os::strdup(mem_soft_limit_str);
+ julong cache;
+ bool is_ok = _memory->controller()->
+ read_numerical_key_value("/memory.stat", "file", &cache);
+ if (!is_ok) {
+ return OSCONTAINER_ERROR;
+ }
+ log_trace(os, container)("Cache usage is: " JULONG_FORMAT, cache);
+ return (jlong)cache;
}
// Note that for cgroups v2 the actual limits set for swap and
@@ -163,14 +173,15 @@ char* CgroupV2Subsystem::mem_soft_limit_val() {
// compound value we need to sum the two values. Setting a swap limit
// without also setting a memory limit is not allowed.
jlong CgroupV2Subsystem::memory_and_swap_limit_in_bytes() {
- char* mem_swp_limit_str = mem_swp_limit_val();
- if (mem_swp_limit_str == nullptr) {
+ jlong swap_limit;
+ bool is_ok = _memory->controller()->read_number_handle_max("/memory.swap.max", &swap_limit);
+ if (!is_ok) {
// Some container tests rely on this trace logging to happen.
- log_trace(os, container)("Memory and Swap Limit is: %d", OSCONTAINER_ERROR);
+ log_trace(os, container)("Swap Limit failed: %d", OSCONTAINER_ERROR);
// swap disabled at kernel level, treat it as no swap
return read_memory_limit_in_bytes();
}
- jlong swap_limit = limit_from_str(mem_swp_limit_str);
+ log_trace(os, container)("Swap Limit is: " JLONG_FORMAT, swap_limit);
if (swap_limit >= 0) {
jlong memory_limit = read_memory_limit_in_bytes();
assert(memory_limit >= 0, "swap limit without memory limit?");
@@ -183,24 +194,23 @@ jlong CgroupV2Subsystem::memory_and_swap_limit_in_bytes() {
jlong CgroupV2Subsystem::memory_and_swap_usage_in_bytes() {
jlong memory_usage = memory_usage_in_bytes();
if (memory_usage >= 0) {
- char* mem_swp_current_str = mem_swp_current_val();
- jlong swap_current = limit_from_str(mem_swp_current_str);
+ jlong swap_current = mem_swp_current_val();
return memory_usage + (swap_current >= 0 ? swap_current : 0);
}
return memory_usage; // not supported or unlimited case
}
-char* CgroupV2Subsystem::mem_swp_limit_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/memory.swap.max",
- "Memory and Swap Limit is: %s", "%1023s", mem_swp_limit_str, 1024);
- return os::strdup(mem_swp_limit_str);
+jlong CgroupV2Subsystem::mem_swp_limit_val() {
+ jlong swap_limit;
+ CONTAINER_READ_NUMBER_CHECKED_MAX(_unified, "/memory.swap.max", "Swap Limit", swap_limit);
+ return swap_limit;
}
// memory.swap.current : total amount of swap currently used by the cgroup and its descendants
-char* CgroupV2Subsystem::mem_swp_current_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/memory.swap.current",
- "Swap currently used is: %s", "%1023s", mem_swp_current_str, 1024);
- return os::strdup(mem_swp_current_str);
+jlong CgroupV2Subsystem::mem_swp_current_val() {
+ julong swap_current;
+ CONTAINER_READ_NUMBER_CHECKED(_unified, "/memory.swap.current", "Swap currently used", swap_current);
+ return (jlong)swap_current;
}
/* memory_limit_in_bytes
@@ -212,30 +222,14 @@ char* CgroupV2Subsystem::mem_swp_current_val() {
* -1 for unlimited, OSCONTAINER_ERROR for an error
*/
jlong CgroupV2Subsystem::read_memory_limit_in_bytes() {
- char * mem_limit_str = mem_limit_val();
- jlong limit = limit_from_str(mem_limit_str);
- if (log_is_enabled(Trace, os, container)) {
- if (limit == -1) {
- log_trace(os, container)("Memory Limit is: Unlimited");
- } else {
- log_trace(os, container)("Memory Limit is: " JLONG_FORMAT, limit);
- }
- }
- return limit;
-}
-
-char* CgroupV2Subsystem::mem_limit_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/memory.max",
- "Raw value for memory limit is: %s", "%1023s", mem_limit_str, 1024);
- return os::strdup(mem_limit_str);
+ jlong memory_limit;
+ CONTAINER_READ_NUMBER_CHECKED_MAX(_unified, "/memory.max", "Memory Limit", memory_limit);
+ return memory_limit;
}
void CgroupV2Subsystem::print_version_specific_info(outputStream* st) {
- char* mem_swp_current_str = mem_swp_current_val();
- jlong swap_current = limit_from_str(mem_swp_current_str);
-
- char* mem_swp_limit_str = mem_swp_limit_val();
- jlong swap_limit = limit_from_str(mem_swp_limit_str);
+ jlong swap_current = mem_swp_current_val();
+ jlong swap_limit = mem_swp_limit_val();
OSContainer::print_container_helper(st, swap_current, "memory_swap_current_in_bytes");
OSContainer::print_container_helper(st, swap_limit, "memory_swap_max_limit_in_bytes");
@@ -250,12 +244,6 @@ char* CgroupV2Controller::construct_path(char* mount_path, char *cgroup_path) {
return os::strdup(ss.base());
}
-char* CgroupV2Subsystem::pids_max_val() {
- GET_CONTAINER_INFO_CPTR(cptr, _unified, "/pids.max",
- "Maximum number of tasks is: %s", "%1023s", pidsmax, 1024);
- return os::strdup(pidsmax);
-}
-
/* pids_max
*
* Return the maximum number of tasks available to the process
@@ -266,8 +254,9 @@ char* CgroupV2Subsystem::pids_max_val() {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV2Subsystem::pids_max() {
- char * pidsmax_str = pids_max_val();
- return limit_from_str(pidsmax_str);
+ jlong pids_max;
+ CONTAINER_READ_NUMBER_CHECKED_MAX(_unified, "/pids.max", "Maximum number of tasks", pids_max);
+ return pids_max;
}
/* pids_current
@@ -279,7 +268,7 @@ jlong CgroupV2Subsystem::pids_max() {
* OSCONTAINER_ERROR for not supported
*/
jlong CgroupV2Subsystem::pids_current() {
- GET_CONTAINER_INFO(jlong, _unified, "/pids.current",
- "Current number of tasks is: ", JLONG_FORMAT, JLONG_FORMAT, pids_current);
+ julong pids_current;
+ CONTAINER_READ_NUMBER_CHECKED(_unified, "/pids.current", "Current number of tasks", pids_current);
return pids_current;
}
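
CgroupV2Subsystem::cpu_shares() above inverts the OCI-to-cgroups-v2 weight mapping. A standalone harness (not part of the patch) that replicates the arithmetic makes the scaling easy to check by hand:

    // Standalone sketch replicating the scaling in cpu_shares() above.
    #include <cstdio>

    static int scale_cpu_weight_to_shares(int weight) {
      if (weight == 100) return -1;    // default weight: treated as "no shares set up"
      int x = 262142 * weight - 1;     // inverse of the OCI -> cgroups-v2 mapping
      double frac = x / 9999.0;
      return ((int)frac) + 2;
    }

    int main() {
      printf("%d\n", scale_cpu_weight_to_shares(100)); // -1
      printf("%d\n", scale_cpu_weight_to_shares(1));   // 28
      return 0;
    }
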
diff --git a/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp b/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
index 978c193c6027e..a004c8f75b984 100644
--- a/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
+++ b/src/hotspot/os/linux/cgroupV2Subsystem_linux.hpp
@@ -56,12 +56,8 @@ class CgroupV2Subsystem: public CgroupSubsystem {
CachingCgroupController* _memory = nullptr;
CachingCgroupController* _cpu = nullptr;
- char *mem_limit_val();
- char *mem_swp_limit_val();
- char *mem_swp_current_val();
- char *mem_soft_limit_val();
- char *cpu_quota_val();
- char *pids_max_val();
+ jlong mem_swp_limit_val();
+ jlong mem_swp_current_val();
public:
CgroupV2Subsystem(CgroupController * unified) {
diff --git a/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp b/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp
index ff891509365a0..76f9d90cd7198 100644
--- a/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp
+++ b/src/hotspot/os/linux/gc/z/zPhysicalMemoryBacking_linux.cpp
@@ -597,7 +597,7 @@ ZErrno ZPhysicalMemoryBacking::fallocate(bool punch_hole, zoffset offset, size_t
bool ZPhysicalMemoryBacking::commit_inner(zoffset offset, size_t length) const {
log_trace(gc, heap)("Committing memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, untype(offset + length) / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
retry:
const ZErrno err = fallocate(false /* punch_hole */, offset, length);
@@ -697,7 +697,7 @@ size_t ZPhysicalMemoryBacking::commit(zoffset offset, size_t length) const {
size_t ZPhysicalMemoryBacking::uncommit(zoffset offset, size_t length) const {
log_trace(gc, heap)("Uncommitting memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, untype(offset + length) / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
const ZErrno err = fallocate(true /* punch_hole */, offset, length);
if (err) {
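
The two logging changes in this file swap raw offset + length arithmetic for to_zoffset_end(offset, length). The motivation for a dedicated end type is that a range's end may be one past the last valid offset; a minimal stand-in model (simplified types, not ZGC's actual definitions) of that idea:

    // Simplified stand-ins for ZGC's typed offsets: an end offset may equal
    // the maximum, which is not a valid element offset, so it gets its own type.
    #include <cassert>
    #include <cstddef>
    #include <cstdio>

    static const size_t max_offset = 1024;   // stand-in for the heap's offset range

    enum class zoffset : size_t {};          // valid offsets:     [0, max_offset)
    enum class zoffset_end : size_t {};      // valid end offsets: [0, max_offset]

    static size_t untype(zoffset v)     { return static_cast<size_t>(v); }
    static size_t untype(zoffset_end v) { return static_cast<size_t>(v); }

    static zoffset_end to_zoffset_end(zoffset start, size_t length) {
      size_t end = untype(start) + length;
      assert(end <= max_offset);             // checked construction, not raw addition
      return static_cast<zoffset_end>(end);
    }

    int main() {
      zoffset start = static_cast<zoffset>(512);
      zoffset_end end = to_zoffset_end(start, 512);  // 1024: a valid end, not a valid offset
      printf("range: %zu-%zu\n", untype(start), untype(end));
      return 0;
    }
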
diff --git a/src/hotspot/os/linux/os_linux.cpp b/src/hotspot/os/linux/os_linux.cpp
index 3181ad82dab9f..20d3c99cd3eac 100644
--- a/src/hotspot/os/linux/os_linux.cpp
+++ b/src/hotspot/os/linux/os_linux.cpp
@@ -3023,14 +3023,14 @@ void os::pd_realign_memory(char *addr, size_t bytes, size_t alignment_hint) {
}
}
-void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint, MEMFLAGS flag) {
+void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint) {
// This method works by doing an mmap over an existing mmaping and effectively discarding
// the existing pages. However it won't work for SHM-based large pages that cannot be
// uncommitted at all. We don't do anything in this case to avoid creating a segment with
// small pages on top of the SHM segment. This method always works for small pages, so we
// allow that in any case.
if (alignment_hint <= os::vm_page_size() || can_commit_large_page_memory()) {
- commit_memory(addr, bytes, alignment_hint, !ExecMem, flag);
+ commit_memory(addr, bytes, alignment_hint, !ExecMem);
}
}
@@ -3637,7 +3637,7 @@ bool os::pd_create_stack_guard_pages(char* addr, size_t size) {
}
}
- return os::commit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::commit_memory(addr, size, !ExecMem);
}
// If this is a growable mapping, remove the guard pages entirely by
@@ -3653,7 +3653,7 @@ bool os::remove_stack_guard_pages(char* addr, size_t size) {
return ::munmap(addr, size) == 0;
}
- return os::uncommit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::uncommit_memory(addr, size);
}
// 'requested_addr' is only treated as a hint, the return value may or
@@ -3757,7 +3757,7 @@ static int anon_munmap(char * addr, size_t size) {
return 1;
}
-char* os::pd_reserve_memory(size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_reserve_memory(size_t bytes, bool exec) {
return anon_mmap(nullptr, bytes);
}
@@ -4214,7 +4214,7 @@ static char* reserve_memory_special_huge_tlbfs(size_t bytes,
}
char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size,
- char* req_addr, bool exec, MEMFLAGS flag) {
+ char* req_addr, bool exec) {
assert(UseLargePages, "only for large pages");
char* const addr = reserve_memory_special_huge_tlbfs(bytes, alignment, page_size, req_addr, exec);
@@ -4249,9 +4249,9 @@ bool os::can_commit_large_page_memory() {
return UseTransparentHugePages;
}
-char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc, MEMFLAGS flag) {
+char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc) {
assert(file_desc >= 0, "file_desc is not valid");
- char* result = pd_attempt_reserve_memory_at(requested_addr, bytes, !ExecMem, flag);
+ char* result = pd_attempt_reserve_memory_at(requested_addr, bytes, !ExecMem);
if (result != nullptr) {
if (replace_existing_mapping_with_file_mapping(result, bytes, file_desc) == nullptr) {
vm_exit_during_initialization(err_msg("Error in mapping Java heap at the given filesystem directory"));
@@ -4263,7 +4263,7 @@ char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, i
// Reserve memory at an arbitrary address, only if that area is
// available (and not reserved for something else).
-char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_attempt_reserve_memory_at(char* requested_addr, size_t bytes, bool exec) {
// Assert only that the size is a multiple of the page size, since
// that's all that mmap requires, and since that's all we really know
// about at this low abstraction level. If we need higher alignment,
@@ -4655,7 +4655,7 @@ static void workaround_expand_exec_shield_cs_limit() {
*/
char* hint = (char*)(os::Linux::initial_thread_stack_bottom() -
(StackOverflow::stack_guard_zone_size() + page_size));
- char* codebuf = os::attempt_reserve_memory_at(hint, page_size, !ExecMem, mtInternal);
+ char* codebuf = os::attempt_reserve_memory_at(hint, page_size);
if (codebuf == nullptr) {
// JDK-8197429: There may be a stack gap of one megabyte between
@@ -4663,13 +4663,15 @@ static void workaround_expand_exec_shield_cs_limit() {
// Linux kernel workaround for CVE-2017-1000364. If we failed to
// map our codebuf, try again at an address one megabyte lower.
hint -= 1 * M;
- codebuf = os::attempt_reserve_memory_at(hint, page_size, !ExecMem, mtInternal);
+ codebuf = os::attempt_reserve_memory_at(hint, page_size);
}
- if ((codebuf == nullptr) || (!os::commit_memory(codebuf, page_size, ExecMem, mtInternal))) {
+ if ((codebuf == nullptr) || (!os::commit_memory(codebuf, page_size, true))) {
return; // No matter, we tried, best effort.
}
+ MemTracker::record_virtual_memory_type((address)codebuf, mtInternal);
+
log_info(os)("[CS limit NX emulation work-around, exec code at: %p]", codebuf);
// Some code to exec: the 'ret' instruction
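
The hunk above shows the pattern this revert restores: instead of threading a MEMFLAGS tag through every os:: allocation call, memory is reserved and committed untagged, and MemTracker records the category in a separate step where one is wanted. A standalone model of that post-hoc style (simplified stand-ins, not HotSpot code):

    #include <cstdio>
    #include <cstdlib>
    #include <map>

    enum MEMFLAGS { mtNone, mtInternal, mtThreadStack };
    static std::map<void*, MEMFLAGS> nmt_registry;   // stand-in for MemTracker state

    static void* reserve_memory(size_t bytes) {      // post-revert shape: no tag parameter
      return std::malloc(bytes);
    }

    static void record_virtual_memory_type(void* p, MEMFLAGS f) {
      nmt_registry[p] = f;                           // tag recorded after the fact
    }

    int main() {
      void* buf = reserve_memory(4096);
      record_virtual_memory_type(buf, mtInternal);   // only where a category matters
      printf("tag=%d\n", (int)nmt_registry[buf]);
      std::free(buf);
      return 0;
    }
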
diff --git a/src/hotspot/os/linux/systemMemoryBarrier_linux.cpp b/src/hotspot/os/linux/systemMemoryBarrier_linux.cpp
index 892d825b40cdc..a6b26928520e6 100644
--- a/src/hotspot/os/linux/systemMemoryBarrier_linux.cpp
+++ b/src/hotspot/os/linux/systemMemoryBarrier_linux.cpp
@@ -41,6 +41,8 @@
#define SYS_membarrier 365
#elif defined(AARCH64)
#define SYS_membarrier 283
+ #elif defined(ARM32)
+ #define SYS_membarrier 389
#elif defined(ALPHA)
#define SYS_membarrier 517
#else
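
The per-architecture syscall numbers defined above feed a direct syscall(2) invocation, since a libc wrapper for membarrier is not universally available. A runnable Linux-only probe, given as an illustration rather than taken from the patch (MEMBARRIER_CMD_QUERY is 0 per membarrier(2)):

    #include <cstdio>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef SYS_membarrier
    #define SYS_membarrier 283   // e.g. the AArch64 number from the hunk above
    #endif

    int main() {
      // MEMBARRIER_CMD_QUERY (0) returns a bitmask of supported commands.
      long supported = syscall(SYS_membarrier, 0, 0);
      if (supported < 0) {
        perror("membarrier");
        return 1;
      }
      printf("supported membarrier commands: 0x%lx\n", (unsigned long)supported);
      return 0;
    }
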
diff --git a/src/hotspot/os/posix/defs.S.inc b/src/hotspot/os/posix/defs.S.inc
new file mode 100644
index 0000000000000..ec8c49946993e
--- /dev/null
+++ b/src/hotspot/os/posix/defs.S.inc
@@ -0,0 +1,31 @@
+#
+# Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+#
+# This code is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License version 2 only, as
+# published by the Free Software Foundation.
+#
+# This code is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# version 2 for more details (a copy is included in the LICENSE file that
+# accompanied this code).
+#
+# You should have received a copy of the GNU General Public License version
+# 2 along with this work; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+# or visit www.oracle.com if you need additional information or have any
+# questions.
+#
+
+# Used to get a reference to a symbol. (Overridden by macOS.)
+#define SYMBOL(s) s
+
+#define DECLARE_FUNC(func) \
+ .globl func ; \
+ .hidden func ; \
+ .type func, %function ; \
+ func
diff --git a/src/hotspot/os/posix/os_posix.cpp b/src/hotspot/os/posix/os_posix.cpp
index db3aef14543f7..1e7473eea1dc1 100644
--- a/src/hotspot/os/posix/os_posix.cpp
+++ b/src/hotspot/os/posix/os_posix.cpp
@@ -395,9 +395,9 @@ static char* chop_extra_memory(size_t size, size_t alignment, char* extra_base,
// Multiple threads can race in this code, and can remap over each other with MAP_FIXED,
// so on posix, unmap the section at the start and at the end of the chunk that we mapped
// rather than unmapping and remapping the whole chunk to get requested alignment.
-char* os::reserve_memory_aligned(size_t size, size_t alignment, bool exec, MEMFLAGS flag) {
+char* os::reserve_memory_aligned(size_t size, size_t alignment, bool exec) {
size_t extra_size = calculate_aligned_extra_size(size, alignment);
- char* extra_base = os::reserve_memory(extra_size, exec, flag);
+ char* extra_base = os::reserve_memory(extra_size, exec);
if (extra_base == nullptr) {
return nullptr;
}
@@ -421,21 +421,10 @@ char* os::map_memory_to_file_aligned(size_t size, size_t alignment, int file_des
if (replace_existing_mapping_with_file_mapping(aligned_base, size, file_desc) == nullptr) {
vm_exit_during_initialization(err_msg("Error in mapping Java heap at the given filesystem directory"));
}
- MemTracker::record_virtual_memory_commit((address)aligned_base, size, CALLER_PC, flag);
+ MemTracker::record_virtual_memory_commit((address)aligned_base, size, CALLER_PC);
return aligned_base;
}
-int os::vsnprintf(char* buf, size_t len, const char* fmt, va_list args) {
- // All supported POSIX platforms provide C99 semantics.
- ALLOW_C_FUNCTION(::vsnprintf, int result = ::vsnprintf(buf, len, fmt, args);)
- // If an encoding error occurred (result < 0) then it's not clear
- // whether the buffer is NUL terminated, so ensure it is.
- if ((result < 0) && (len > 0)) {
- buf[len - 1] = '\0';
- }
- return result;
-}
-
int os::get_fileno(FILE* fp) {
return NOT_AIX(::)fileno(fp);
}
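
This hunk and the matching os_windows.cpp hunk below delete the two platform copies of os::vsnprintf, which had become identical since all supported platforms provide C99-compliant ::vsnprintf; presumably a single shared definition replaces them. A sketch of such a shared version, reconstructed from the deleted bodies (the destination is not shown in this part of the patch):

    #include <stdarg.h>
    #include <stddef.h>
    #include <stdio.h>

    // Hypothetical consolidated version of the two deleted platform copies.
    int os_vsnprintf(char* buf, size_t len, const char* fmt, va_list args) {
      int result = ::vsnprintf(buf, len, fmt, args);
      // If an encoding error occurred (result < 0) it is not clear whether
      // the buffer is NUL terminated, so ensure it is.
      if ((result < 0) && (len > 0)) {
        buf[len - 1] = '\0';
      }
      return result;
    }
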
diff --git a/src/hotspot/os/posix/perfMemory_posix.cpp b/src/hotspot/os/posix/perfMemory_posix.cpp
index 2ed33c3fb9c91..4339a21ae4e92 100644
--- a/src/hotspot/os/posix/perfMemory_posix.cpp
+++ b/src/hotspot/os/posix/perfMemory_posix.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2012, 2021 SAP SE. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -65,14 +65,14 @@ static char* backing_store_file_name = nullptr; // name of the backing store
static char* create_standard_memory(size_t size) {
// allocate an aligned chuck of memory
- char* mapAddress = os::reserve_memory(size, !ExecMem, mtInternal);
+ char* mapAddress = os::reserve_memory(size);
if (mapAddress == nullptr) {
return nullptr;
}
// commit memory
- if (!os::commit_memory(mapAddress, size, !ExecMem, mtInternal)) {
+ if (!os::commit_memory(mapAddress, size, !ExecMem)) {
if (PrintMiscellaneous && Verbose) {
warning("Could not commit PerfData memory\n");
}
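
create_standard_memory() above follows the usual reserve-then-commit protocol. A runnable POSIX analogue of that protocol (a simplified model using mmap/mprotect, not HotSpot's implementation):

    #include <cstdio>
    #include <sys/mman.h>

    int main() {
      const size_t size = 4096;
      // "Reserve": map address space with no access permissions.
      void* addr = mmap(nullptr, size, PROT_NONE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (addr == MAP_FAILED) { perror("reserve"); return 1; }
      // "Commit": make the range usable, roughly what os::commit_memory does.
      if (mprotect(addr, size, PROT_READ | PROT_WRITE) != 0) {
        perror("commit");
        munmap(addr, size);
        return 1;
      }
      static_cast<char*>(addr)[0] = 42;   // the memory is now writable
      printf("committed %zu bytes at %p\n", size, addr);
      munmap(addr, size);
      return 0;
    }
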
diff --git a/src/hotspot/os/windows/gc/z/zPhysicalMemoryBacking_windows.cpp b/src/hotspot/os/windows/gc/z/zPhysicalMemoryBacking_windows.cpp
index 9a38352d1d72d..181d0fada5975 100644
--- a/src/hotspot/os/windows/gc/z/zPhysicalMemoryBacking_windows.cpp
+++ b/src/hotspot/os/windows/gc/z/zPhysicalMemoryBacking_windows.cpp
@@ -225,14 +225,14 @@ void ZPhysicalMemoryBacking::warn_commit_limits(size_t max_capacity) const {
size_t ZPhysicalMemoryBacking::commit(zoffset offset, size_t length) {
log_trace(gc, heap)("Committing memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, (untype(offset) + length) / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
return _impl->commit(offset, length);
}
size_t ZPhysicalMemoryBacking::uncommit(zoffset offset, size_t length) {
log_trace(gc, heap)("Uncommitting memory: " SIZE_FORMAT "M-" SIZE_FORMAT "M (" SIZE_FORMAT "M)",
- untype(offset) / M, (untype(offset) + length) / M, length / M);
+ untype(offset) / M, untype(to_zoffset_end(offset, length)) / M, length / M);
return _impl->uncommit(offset, length);
}
diff --git a/src/hotspot/os/windows/os_windows.cpp b/src/hotspot/os/windows/os_windows.cpp
index a99e67d417c5b..b95209b3f9bf7 100644
--- a/src/hotspot/os/windows/os_windows.cpp
+++ b/src/hotspot/os/windows/os_windows.cpp
@@ -1739,17 +1739,6 @@ void os::get_summary_os_info(char* buf, size_t buflen) {
if (nl != nullptr) *nl = '\0';
}
-int os::vsnprintf(char* buf, size_t len, const char* fmt, va_list args) {
- // Starting with Visual Studio 2015, vsnprint is C99 compliant.
- ALLOW_C_FUNCTION(::vsnprintf, int result = ::vsnprintf(buf, len, fmt, args);)
- // If an encoding error occurred (result < 0) then it's not clear
- // whether the buffer is NUL terminated, so ensure it is.
- if ((result < 0) && (len > 0)) {
- buf[len - 1] = '\0';
- }
- return result;
-}
-
static inline time_t get_mtime(const char* filename) {
struct stat st;
int ret = os::stat(filename, &st);
@@ -2755,7 +2744,7 @@ LONG WINAPI topLevelExceptionFilter(struct _EXCEPTION_POINTERS* exceptionInfo) {
addr = (address)((uintptr_t)addr &
(~((uintptr_t)os::vm_page_size() - (uintptr_t)1)));
os::commit_memory((char *)addr, thread->stack_base() - addr,
- !ExecMem, mtThreadStack);
+ !ExecMem);
return EXCEPTION_CONTINUE_EXECUTION;
}
#endif
@@ -3117,9 +3106,8 @@ static bool numa_interleaving_init() {
// Reasons for doing this:
// * UseLargePagesIndividualAllocation was set (normally only needed on WS2003 but possible to be set otherwise)
// * UseNUMAInterleaving requires a separate node for each piece
-static char* allocate_pages_individually(size_t bytes, char* addr, DWORD alloc_type,
+static char* allocate_pages_individually(size_t bytes, char* addr, DWORD flags,
DWORD prot,
- MEMFLAGS flag,
bool should_inject_error = false) {
char * p_buf;
// note: at setup time we guaranteed that NUMAInterleaveGranularity was aligned up to a page size
@@ -3143,7 +3131,7 @@ static char* allocate_pages_individually(size_t bytes, char* addr, DWORD alloc_t
PAGE_READWRITE);
// If reservation failed, return null
if (p_buf == nullptr) return nullptr;
- MemTracker::record_virtual_memory_reserve((address)p_buf, size_of_reserve, CALLER_PC, flag);
+ MemTracker::record_virtual_memory_reserve((address)p_buf, size_of_reserve, CALLER_PC);
os::release_memory(p_buf, bytes + chunk_size);
// we still need to round up to a page boundary (in case we are using large pages)
@@ -3185,13 +3173,13 @@ static char* allocate_pages_individually(size_t bytes, char* addr, DWORD alloc_t
if (!UseNUMAInterleaving) {
p_new = (char *) virtualAlloc(next_alloc_addr,
bytes_to_rq,
- alloc_type,
+ flags,
prot);
} else {
// get the next node to use from the used_node_list
assert(numa_node_list_holder.get_count() > 0, "Multiple NUMA nodes expected");
DWORD node = numa_node_list_holder.get_node_list_entry(count % numa_node_list_holder.get_count());
- p_new = (char *)virtualAllocExNuma(hProc, next_alloc_addr, bytes_to_rq, alloc_type, prot, node);
+ p_new = (char *)virtualAllocExNuma(hProc, next_alloc_addr, bytes_to_rq, flags, prot, node);
}
}
@@ -3204,7 +3192,7 @@ static char* allocate_pages_individually(size_t bytes, char* addr, DWORD alloc_t
// need to create a dummy 'reserve' record to match
// the release.
MemTracker::record_virtual_memory_reserve((address)p_buf,
- bytes_to_release, CALLER_PC, flag);
+ bytes_to_release, CALLER_PC);
os::release_memory(p_buf, bytes_to_release);
}
#ifdef ASSERT
@@ -3221,10 +3209,10 @@ static char* allocate_pages_individually(size_t bytes, char* addr, DWORD alloc_t
}
// Although the memory is allocated individually, it is returned as one.
// NMT records it as one block.
- if ((alloc_type & MEM_COMMIT) != 0) {
- MemTracker::record_virtual_memory_reserve_and_commit((address)p_buf, bytes, CALLER_PC, flag);
+ if ((flags & MEM_COMMIT) != 0) {
+ MemTracker::record_virtual_memory_reserve_and_commit((address)p_buf, bytes, CALLER_PC);
} else {
- MemTracker::record_virtual_memory_reserve((address)p_buf, bytes, CALLER_PC, flag);
+ MemTracker::record_virtual_memory_reserve((address)p_buf, bytes, CALLER_PC);
}
// made it this far, success
@@ -3352,7 +3340,7 @@ char* os::replace_existing_mapping_with_file_mapping(char* base, size_t size, in
// Multiple threads can race in this code but it's not possible to unmap small sections of
// virtual space to get requested alignment, like posix-like os's.
// Windows prevents multiple thread from remapping over each other so this loop is thread-safe.
-static char* map_or_reserve_memory_aligned(size_t size, size_t alignment, int file_desc, MEMFLAGS flag) {
+static char* map_or_reserve_memory_aligned(size_t size, size_t alignment, int file_desc, MEMFLAGS flag = mtNone) {
assert(is_aligned(alignment, os::vm_allocation_granularity()),
"Alignment must be a multiple of allocation granularity (page size)");
assert(is_aligned(size, os::vm_allocation_granularity()),
@@ -3366,7 +3354,7 @@ static char* map_or_reserve_memory_aligned(size_t size, size_t alignment, int fi
for (int attempt = 0; attempt < max_attempts && aligned_base == nullptr; attempt ++) {
char* extra_base = file_desc != -1 ? os::map_memory_to_file(extra_size, file_desc, flag) :
- os::reserve_memory(extra_size, !ExecMem, flag);
+ os::reserve_memory(extra_size, false, flag);
if (extra_base == nullptr) {
return nullptr;
}
@@ -3383,7 +3371,7 @@ static char* map_or_reserve_memory_aligned(size_t size, size_t alignment, int fi
// Attempt to map, into the just vacated space, the slightly smaller aligned area.
// Which may fail, hence the loop.
aligned_base = file_desc != -1 ? os::attempt_map_memory_to_file_at(aligned_base, size, file_desc, flag) :
- os::attempt_reserve_memory_at(aligned_base, size, !ExecMem, flag);
+ os::attempt_reserve_memory_at(aligned_base, size, false, flag);
}
assert(aligned_base != nullptr, "Did not manage to re-map after %d attempts?", max_attempts);
@@ -3391,22 +3379,22 @@ static char* map_or_reserve_memory_aligned(size_t size, size_t alignment, int fi
return aligned_base;
}
-char* os::reserve_memory_aligned(size_t size, size_t alignment, bool exec, MEMFLAGS flag) {
+char* os::reserve_memory_aligned(size_t size, size_t alignment, bool exec) {
// exec can be ignored
- return map_or_reserve_memory_aligned(size, alignment, -1 /* file_desc */, flag);
+ return map_or_reserve_memory_aligned(size, alignment, -1 /* file_desc */);
}
char* os::map_memory_to_file_aligned(size_t size, size_t alignment, int fd, MEMFLAGS flag) {
return map_or_reserve_memory_aligned(size, alignment, fd, flag);
}
-char* os::pd_reserve_memory(size_t bytes, bool exec, MEMFLAGS flag) {
- return pd_attempt_reserve_memory_at(nullptr /* addr */, bytes, exec, flag);
+char* os::pd_reserve_memory(size_t bytes, bool exec) {
+ return pd_attempt_reserve_memory_at(nullptr /* addr */, bytes, exec);
}
// Reserve memory at an arbitrary address, only if that area is
// available (and not reserved for something else).
-char* os::pd_attempt_reserve_memory_at(char* addr, size_t bytes, bool exec, MEMFLAGS flag) {
+char* os::pd_attempt_reserve_memory_at(char* addr, size_t bytes, bool exec) {
assert((size_t)addr % os::vm_allocation_granularity() == 0,
"reserve alignment");
assert(bytes % os::vm_page_size() == 0, "reserve page size");
@@ -3421,7 +3409,7 @@ char* os::pd_attempt_reserve_memory_at(char* addr, size_t bytes, bool exec, MEMF
if (Verbose && PrintMiscellaneous) reserveTimer.start();
// in numa interleaving, we have to allocate pages individually
// (well really chunks of NUMAInterleaveGranularity size)
- res = allocate_pages_individually(bytes, addr, MEM_RESERVE, PAGE_READWRITE, flag);
+ res = allocate_pages_individually(bytes, addr, MEM_RESERVE, PAGE_READWRITE);
if (res == nullptr) {
warning("NUMA page allocation failed");
}
@@ -3442,7 +3430,7 @@ size_t os::vm_min_address() {
return _vm_min_address_default;
}
-char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc, MEMFLAGS flag) {
+char* os::pd_attempt_map_memory_to_file_at(char* requested_addr, size_t bytes, int file_desc) {
assert(file_desc >= 0, "file_desc is not valid");
return map_memory_to_file(requested_addr, bytes, file_desc);
}
@@ -3458,13 +3446,13 @@ bool os::can_commit_large_page_memory() {
return false;
}
-static char* reserve_large_pages_individually(size_t size, char* req_addr, bool exec, MEMFLAGS flag) {
+static char* reserve_large_pages_individually(size_t size, char* req_addr, bool exec) {
log_debug(pagesize)("Reserving large pages individually.");
const DWORD prot = exec ? PAGE_EXECUTE_READWRITE : PAGE_READWRITE;
- const DWORD alloc_type = MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES;
+ const DWORD flags = MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES;
- char * p_buf = allocate_pages_individually(size, req_addr, alloc_type, prot, flag, LargePagesIndividualAllocationInjectError);
+ char * p_buf = allocate_pages_individually(size, req_addr, flags, prot, LargePagesIndividualAllocationInjectError);
if (p_buf == nullptr) {
// give an appropriate warning message
if (UseNUMAInterleaving) {
@@ -3488,12 +3476,12 @@ static char* reserve_large_pages_single_range(size_t size, char* req_addr, bool
return (char *) virtualAlloc(req_addr, size, flags, prot);
}
-static char* reserve_large_pages(size_t size, char* req_addr, bool exec, MEMFLAGS flag) {
+static char* reserve_large_pages(size_t size, char* req_addr, bool exec) {
// with large pages, there are two cases where we need to use Individual Allocation
// 1) the UseLargePagesIndividualAllocation flag is set (set by default on WS2003)
// 2) NUMA Interleaving is enabled, in which case we use a different node for each page
if (UseLargePagesIndividualAllocation || UseNUMAInterleaving) {
- return reserve_large_pages_individually(size, req_addr, exec, flag);
+ return reserve_large_pages_individually(size, req_addr, exec);
}
return reserve_large_pages_single_range(size, req_addr, exec);
}
@@ -3510,7 +3498,7 @@ static char* find_aligned_address(size_t size, size_t alignment) {
return aligned_addr;
}
-static char* reserve_large_pages_aligned(size_t size, size_t alignment, bool exec, MEMFLAGS flag) {
+static char* reserve_large_pages_aligned(size_t size, size_t alignment, bool exec) {
log_debug(pagesize)("Reserving large pages at an aligned address, alignment=" SIZE_FORMAT "%s",
byte_size_in_exact_unit(alignment), exact_unit_for_byte_size(alignment));
@@ -3523,7 +3511,7 @@ static char* reserve_large_pages_aligned(size_t size, size_t alignment, bool exe
char* aligned_address = find_aligned_address(size, alignment);
// Try to do the large page reservation using the aligned address.
- aligned_address = reserve_large_pages(size, aligned_address, exec, flag);
+ aligned_address = reserve_large_pages(size, aligned_address, exec);
if (aligned_address != nullptr) {
// Reservation at the aligned address succeeded.
guarantee(is_aligned(aligned_address, alignment), "Must be aligned");
@@ -3536,7 +3524,7 @@ static char* reserve_large_pages_aligned(size_t size, size_t alignment, bool exe
}
char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_size, char* addr,
- bool exec, MEMFLAGS flag) {
+ bool exec) {
assert(UseLargePages, "only for large pages");
assert(page_size == os::large_page_size(), "Currently only support one large page size on Windows");
assert(is_aligned(addr, alignment), "Must be");
@@ -3552,11 +3540,11 @@ char* os::pd_reserve_memory_special(size_t bytes, size_t alignment, size_t page_
// ensure that the requested alignment is met. When there is a requested address
// this solves it self, since it must be properly aligned already.
if (addr == nullptr && alignment > page_size) {
- return reserve_large_pages_aligned(bytes, alignment, exec, flag);
+ return reserve_large_pages_aligned(bytes, alignment, exec);
}
// No additional requirements, just reserve the large pages.
- return reserve_large_pages(bytes, addr, exec, flag);
+ return reserve_large_pages(bytes, addr, exec);
}
bool os::pd_release_memory_special(char* base, size_t bytes) {
@@ -3722,11 +3710,11 @@ bool os::pd_release_memory(char* addr, size_t bytes) {
}
bool os::pd_create_stack_guard_pages(char* addr, size_t size) {
- return os::commit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::commit_memory(addr, size, !ExecMem);
}
bool os::remove_stack_guard_pages(char* addr, size_t size) {
- return os::uncommit_memory(addr, size, !ExecMem, mtThreadStack);
+ return os::uncommit_memory(addr, size);
}
static bool protect_pages_individually(char* addr, size_t bytes, unsigned int p, DWORD *old_status) {
@@ -3777,7 +3765,7 @@ bool os::protect_memory(char* addr, size_t bytes, ProtType prot,
// memory, not a big deal anyway, as bytes less or equal than 64K
if (!is_committed) {
commit_memory_or_exit(addr, bytes, prot == MEM_PROT_RWX,
- mtInternal, "cannot commit protection page");
+ "cannot commit protection page");
}
// One cannot use os::guard_memory() here, as on Win32 guard page
// have different (one-shot) semantics, from MSDN on PAGE_GUARD:
@@ -3817,7 +3805,7 @@ bool os::unguard_memory(char* addr, size_t bytes) {
}
void os::pd_realign_memory(char *addr, size_t bytes, size_t alignment_hint) { }
-void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint, MEMFLAGS flag) { }
+void os::pd_free_memory(char *addr, size_t bytes, size_t alignment_hint) { }
size_t os::pd_pretouch_memory(void* first, void* last, size_t page_size) {
return page_size;
@@ -5105,6 +5093,9 @@ char* os::pd_map_memory(int fd, const char* file_name, size_t file_offset,
return nullptr;
}
+ // Record virtual memory allocation
+ MemTracker::record_virtual_memory_reserve_and_commit((address)addr, bytes, CALLER_PC);
+
DWORD bytes_read;
OVERLAPPED overlapped;
overlapped.Offset = (DWORD)file_offset;
diff --git a/src/hotspot/os/windows/perfMemory_windows.cpp b/src/hotspot/os/windows/perfMemory_windows.cpp
index 0e5b5fedbcd51..a71101731fb98 100644
--- a/src/hotspot/os/windows/perfMemory_windows.cpp
+++ b/src/hotspot/os/windows/perfMemory_windows.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -55,14 +55,14 @@ typedef BOOL (WINAPI *SetSecurityDescriptorControlFnPtr)(
static char* create_standard_memory(size_t size) {
// allocate an aligned chuck of memory
- char* mapAddress = os::reserve_memory(size, !ExecMem, mtInternal);
+ char* mapAddress = os::reserve_memory(size);
if (mapAddress == nullptr) {
return nullptr;
}
// commit memory
- if (!os::commit_memory(mapAddress, size, !ExecMem, mtInternal)) {
+ if (!os::commit_memory(mapAddress, size, !ExecMem)) {
if (PrintMiscellaneous && Verbose) {
warning("Could not commit PerfData memory\n");
}
diff --git a/src/hotspot/os_cpu/bsd_aarch64/copy_bsd_aarch64.S b/src/hotspot/os_cpu/bsd_aarch64/copy_bsd_aarch64.S
index 187cd20ddbdad..a2099f39c708d 100644
--- a/src/hotspot/os_cpu/bsd_aarch64/copy_bsd_aarch64.S
+++ b/src/hotspot/os_cpu/bsd_aarch64/copy_bsd_aarch64.S
@@ -1,6 +1,7 @@
/*
* Copyright (c) 2016, Linaro Ltd. All rights reserved.
* Copyright (c) 2021, Azul Systems, Inc. All rights reserved.
+ * Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -23,15 +24,7 @@
*
*/
-#define CFUNC(x) _##x
-
- .global CFUNC(_Copy_conjoint_words)
- .global CFUNC(_Copy_disjoint_words)
-
-#ifdef __APPLE__
- .private_extern CFUNC(_Copy_conjoint_words)
- .private_extern CFUNC(_Copy_disjoint_words)
-#endif
+#include "defs.S.inc"
s .req x0
d .req x1
@@ -46,7 +39,7 @@ t6 .req x9
t7 .req x10
.align 6
-CFUNC(_Copy_disjoint_words):
+DECLARE_FUNC(_Copy_disjoint_words):
// Ensure 2 word aligned
tbz s, #3, fwd_copy_aligned
ldr t0, [s], #8
@@ -144,10 +137,10 @@ fwd_copy_drain:
ret
.align 6
-CFUNC(_Copy_conjoint_words):
+DECLARE_FUNC(_Copy_conjoint_words):
sub t0, d, s
cmp t0, count, lsl #3
- bhs CFUNC(_Copy_disjoint_words)
+ bhs SYMBOL(_Copy_disjoint_words)
add s, s, count, lsl #3
add d, d, count, lsl #3
diff --git a/src/hotspot/os_cpu/bsd_aarch64/defs.S.inc b/src/hotspot/os_cpu/bsd_aarch64/defs.S.inc
new file mode 100644
index 0000000000000..717568380a909
--- /dev/null
+++ b/src/hotspot/os_cpu/bsd_aarch64/defs.S.inc
@@ -0,0 +1,40 @@
+#
+# Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
+# DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+#
+# This code is free software; you can redistribute it and/or modify it
+# under the terms of the GNU General Public License version 2 only, as
+# published by the Free Software Foundation.
+#
+# This code is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+# version 2 for more details (a copy is included in the LICENSE file that
+# accompanied this code).
+#
+# You should have received a copy of the GNU General Public License version
+# 2 along with this work; if not, write to the Free Software Foundation,
+# Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+# or visit www.oracle.com if you need additional information or have any
+# questions.
+#
+
+#ifdef __APPLE__
+ # macOS prefixes symbols with _
+ #define SYMBOL(s) _ ## s
+
+ #define DECLARE_FUNC(func) \
+ .globl SYMBOL(func) %% \
+ .private_extern SYMBOL(func) %% \
+ SYMBOL(func)
+#else
+ #define SYMBOL(s) s
+
+ #define DECLARE_FUNC(func) \
+ .globl func %% \
+ .hidden func %% \
+ .type func, %function %% \
+ func
+#endif
diff --git a/src/hotspot/os_cpu/bsd_aarch64/safefetch_bsd_aarch64.S b/src/hotspot/os_cpu/bsd_aarch64/safefetch_bsd_aarch64.S
index b9b6df9b23aa0..7cb2b69d955df 100644
--- a/src/hotspot/os_cpu/bsd_aarch64/safefetch_bsd_aarch64.S
+++ b/src/hotspot/os_cpu/bsd_aarch64/safefetch_bsd_aarch64.S
@@ -23,30 +23,7 @@
*
*/
-#ifdef __APPLE__
-# Darwin uses _ prefixed global symbols
-#define SYMBOL(s) _ ## s
-#define ELF_TYPE(name, description)
-#else
-#define SYMBOL(s) s
-#define ELF_TYPE(name, description) .type name,description
-#endif
-
- .global SYMBOL(SafeFetchN_impl)
- .global SYMBOL(_SafeFetchN_fault)
- .global SYMBOL(_SafeFetchN_continuation)
- .global SYMBOL(SafeFetch32_impl)
- .global SYMBOL(_SafeFetch32_fault)
- .global SYMBOL(_SafeFetch32_continuation)
-
-#ifdef __APPLE__
- .private_extern SYMBOL(SafeFetchN_impl)
- .private_extern SYMBOL(_SafeFetchN_fault)
- .private_extern SYMBOL(_SafeFetchN_continuation)
- .private_extern SYMBOL(SafeFetch32_impl)
- .private_extern SYMBOL(_SafeFetch32_fault)
- .private_extern SYMBOL(_SafeFetch32_continuation)
-#endif
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
@@ -55,12 +32,11 @@
# needed to align function start to 4 byte
.align 6
- ELF_TYPE(SafeFetch32_impl,@function)
-SYMBOL(SafeFetch32_impl):
-SYMBOL(_SafeFetch32_fault):
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
ldr w0, [x0]
ret
-SYMBOL(_SafeFetch32_continuation):
+DECLARE_FUNC(_SafeFetch32_continuation):
mov x0, x1
ret
@@ -70,11 +46,10 @@ SYMBOL(_SafeFetch32_continuation):
# x0 : defaultval
.align 6
- ELF_TYPE(SafeFetchN_impl,@function)
-SYMBOL(SafeFetchN_impl):
-SYMBOL(_SafeFetchN_fault):
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
ldr x0, [x0]
ret
-SYMBOL(_SafeFetchN_continuation):
+DECLARE_FUNC(_SafeFetchN_continuation):
mov x0, x1
ret
diff --git a/src/hotspot/os_cpu/bsd_x86/bsd_x86_32.S b/src/hotspot/os_cpu/bsd_x86/bsd_x86_32.S
index 5cad379df3f2b..7d8892bcd8798 100644
--- a/src/hotspot/os_cpu/bsd_x86/bsd_x86_32.S
+++ b/src/hotspot/os_cpu/bsd_x86/bsd_x86_32.S
@@ -21,18 +21,7 @@
# questions.
#
-
-#ifdef __APPLE__
-# Darwin uses _ prefixed global symbols
-#define SYMBOL(s) _ ## s
-#define ELF_TYPE(name, description)
-#else
-#define SYMBOL(s) s
-#define ELF_TYPE(name, description) .type name,description
-#endif
-
- .globl SYMBOL(fixcw)
- .globl SYMBOL(SpinPause)
+#include "defs.S.inc"
# NOTE WELL! The _Copy functions are called directly
# from server-compiler-generated code via CallLeafNoFP,
@@ -40,46 +29,18 @@
# point or use it in the same manner as does the server
# compiler.
- .globl SYMBOL(_Copy_arrayof_conjoint_bytes)
- .globl SYMBOL(_Copy_conjoint_jshorts_atomic)
- .globl SYMBOL(_Copy_arrayof_conjoint_jshorts)
- .globl SYMBOL(_Copy_conjoint_jints_atomic)
- .globl SYMBOL(_Copy_arrayof_conjoint_jints)
- .globl SYMBOL(_Copy_conjoint_jlongs_atomic)
- .globl SYMBOL(_mmx_Copy_arrayof_conjoint_jshorts)
-
- .globl SYMBOL(_Atomic_cmpxchg_long)
- .globl SYMBOL(_Atomic_move_long)
-
-#ifdef __APPLE__
- .private_extern SYMBOL(fixcw)
- .private_extern SYMBOL(SpinPause)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_bytes)
- .private_extern SYMBOL(_Copy_conjoint_jshorts_atomic)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_jshorts)
- .private_extern SYMBOL(_Copy_conjoint_jints_atomic)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_jints)
- .private_extern SYMBOL(_Copy_conjoint_jlongs_atomic)
- .private_extern SYMBOL(_mmx_Copy_arrayof_conjoint_jshorts)
- .private_extern SYMBOL(_Atomic_cmpxchg_long)
- .private_extern SYMBOL(_Atomic_move_long)
-#endif
-
.text
-# Support for void os::Solaris::init_thread_fpu_state() in os_solaris_i486.cpp
# Set fpu to 53 bit precision. This happens too early to use a stub.
-# ported from solaris_x86_32.s
.p2align 4,,15
-SYMBOL(fixcw):
+DECLARE_FUNC(fixcw):
pushl $0x27f
fldcw 0(%esp)
popl %eax
ret
- ELF_TYPE(SpinPause,@function)
.p2align 4,,15
-SYMBOL(SpinPause):
+DECLARE_FUNC(SpinPause):
rep
nop
movl $1, %eax
@@ -90,8 +51,7 @@ SYMBOL(SpinPause):
# size_t count)
#
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_bytes,@function)
-SYMBOL(_Copy_arrayof_conjoint_bytes):
+DECLARE_FUNC(_Copy_arrayof_conjoint_bytes):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -178,8 +138,7 @@ acb_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- ELF_TYPE(_Copy_conjoint_jshorts_atomic,@function)
-SYMBOL(_Copy_conjoint_jshorts_atomic):
+DECLARE_FUNC(_Copy_conjoint_jshorts_atomic):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -265,8 +224,7 @@ cs_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_jshorts,@function)
-SYMBOL(_Copy_arrayof_conjoint_jshorts):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jshorts):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -342,10 +300,8 @@ acs_CopyLeft:
# Equivalent to
# arrayof_conjoint_jints
.p2align 4,,15
- ELF_TYPE(_Copy_conjoint_jints_atomic,@function)
- ELF_TYPE(_Copy_arrayof_conjoint_jints,@function)
-SYMBOL(_Copy_conjoint_jints_atomic):
-SYMBOL(_Copy_arrayof_conjoint_jints):
+DECLARE_FUNC(_Copy_conjoint_jints_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jints):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -417,8 +373,7 @@ ci_CopyLeft:
# }
# }
.p2align 4,,15
- ELF_TYPE(_Copy_conjoint_jlongs_atomic,@function)
-SYMBOL(_Copy_conjoint_jlongs_atomic):
+DECLARE_FUNC(_Copy_conjoint_jlongs_atomic):
movl 4+8(%esp),%ecx # count
movl 4+0(%esp),%eax # from
movl 4+4(%esp),%edx # to
@@ -446,8 +401,7 @@ cla_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- ELF_TYPE(_mmx_Copy_arrayof_conjoint_jshorts,@function)
-SYMBOL(_mmx_Copy_arrayof_conjoint_jshorts):
+DECLARE_FUNC(_mmx_Copy_arrayof_conjoint_jshorts):
pushl %esi
movl 4+12(%esp),%ecx
pushl %edi
@@ -544,8 +498,7 @@ mmx_acs_CopyLeft:
# int64_t exchange_value)
#
.p2align 4,,15
- ELF_TYPE(_Atomic_cmpxchg_long,@function)
-SYMBOL(_Atomic_cmpxchg_long):
+DECLARE_FUNC(_Atomic_cmpxchg_long):
# 8(%esp) : return PC
pushl %ebx # 4(%esp) : old %ebx
pushl %edi # 0(%esp) : old %edi
@@ -564,8 +517,7 @@ SYMBOL(_Atomic_cmpxchg_long):
# Support for int64_t Atomic::load and Atomic::store.
# void _Atomic_move_long(const volatile int64_t* src, volatile int64_t* dst)
.p2align 4,,15
- ELF_TYPE(_Atomic_move_long,@function)
-SYMBOL(_Atomic_move_long):
+DECLARE_FUNC(_Atomic_move_long):
movl 4(%esp), %eax # src
fildll (%eax)
movl 8(%esp), %eax # dest
diff --git a/src/hotspot/os_cpu/bsd_x86/bsd_x86_64.S b/src/hotspot/os_cpu/bsd_x86/bsd_x86_64.S
index 5e2addc4e6f43..d664b929559cd 100644
--- a/src/hotspot/os_cpu/bsd_x86/bsd_x86_64.S
+++ b/src/hotspot/os_cpu/bsd_x86/bsd_x86_64.S
@@ -21,14 +21,7 @@
# questions.
#
-#ifdef __APPLE__
-# Darwin uses _ prefixed global symbols
-#define SYMBOL(s) _ ## s
-#define ELF_TYPE(name, description)
-#else
-#define SYMBOL(s) s
-#define ELF_TYPE(name, description) .type name,description
-#endif
+#include "defs.S.inc"
# NOTE WELL! The _Copy functions are called directly
# from server-compiler-generated code via CallLeafNoFP,
@@ -36,31 +29,10 @@
# point or use it in the same manner as does the server
# compiler.
- .globl SYMBOL(SpinPause)
- .globl SYMBOL(_Copy_arrayof_conjoint_bytes)
- .globl SYMBOL(_Copy_arrayof_conjoint_jshorts)
- .globl SYMBOL(_Copy_conjoint_jshorts_atomic)
- .globl SYMBOL(_Copy_arrayof_conjoint_jints)
- .globl SYMBOL(_Copy_conjoint_jints_atomic)
- .globl SYMBOL(_Copy_arrayof_conjoint_jlongs)
- .globl SYMBOL(_Copy_conjoint_jlongs_atomic)
-
-#ifdef __APPLE__
- .private_extern SYMBOL(SpinPause)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_bytes)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_jshorts)
- .private_extern SYMBOL(_Copy_conjoint_jshorts_atomic)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_jints)
- .private_extern SYMBOL(_Copy_conjoint_jints_atomic)
- .private_extern SYMBOL(_Copy_arrayof_conjoint_jlongs)
- .private_extern SYMBOL(_Copy_conjoint_jlongs_atomic)
-#endif
-
.text
.p2align 4,,15
- ELF_TYPE(SpinPause,@function)
-SYMBOL(SpinPause):
+DECLARE_FUNC(SpinPause):
rep
nop
movq $1, %rax
@@ -74,8 +46,7 @@ SYMBOL(SpinPause):
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_bytes,@function)
-SYMBOL(_Copy_arrayof_conjoint_bytes):
+DECLARE_FUNC(_Copy_arrayof_conjoint_bytes):
movq %rdx,%r8 # byte count
shrq $3,%rdx # qword count
cmpq %rdi,%rsi
@@ -176,10 +147,8 @@ acb_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_jshorts,@function)
- ELF_TYPE(_Copy_conjoint_jshorts_atomic,@function)
-SYMBOL(_Copy_arrayof_conjoint_jshorts):
-SYMBOL(_Copy_conjoint_jshorts_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jshorts):
+DECLARE_FUNC(_Copy_conjoint_jshorts_atomic):
movq %rdx,%r8 # word count
shrq $2,%rdx # qword count
cmpq %rdi,%rsi
@@ -266,10 +235,8 @@ acs_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_jints,@function)
- ELF_TYPE(_Copy_conjoint_jints_atomic,@function)
-SYMBOL(_Copy_arrayof_conjoint_jints):
-SYMBOL(_Copy_conjoint_jints_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jints):
+DECLARE_FUNC(_Copy_conjoint_jints_atomic):
movq %rdx,%r8 # dword count
shrq %rdx # qword count
cmpq %rdi,%rsi
@@ -345,10 +312,8 @@ aci_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- ELF_TYPE(_Copy_arrayof_conjoint_jlongs,@function)
- ELF_TYPE(_Copy_conjoint_jlongs_atomic,@function)
-SYMBOL(_Copy_arrayof_conjoint_jlongs):
-SYMBOL(_Copy_conjoint_jlongs_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jlongs):
+DECLARE_FUNC(_Copy_conjoint_jlongs_atomic):
cmpq %rdi,%rsi
leaq -8(%rdi,%rdx,8),%rax # from + count*8 - 8
jbe acl_CopyRight
diff --git a/src/hotspot/os_cpu/bsd_x86/safefetch_bsd_x86_64.S b/src/hotspot/os_cpu/bsd_x86/safefetch_bsd_x86_64.S
index 1697f6f03b581..36815c9b1fb5d 100644
--- a/src/hotspot/os_cpu/bsd_x86/safefetch_bsd_x86_64.S
+++ b/src/hotspot/os_cpu/bsd_x86/safefetch_bsd_x86_64.S
@@ -22,30 +22,7 @@
# questions.
#
-#ifdef __APPLE__
-# Darwin uses _ prefixed global symbols
-#define SYMBOL(s) _ ## s
-#define ELF_TYPE(name, description)
-#else
-#define SYMBOL(s) s
-#define ELF_TYPE(name, description) .type name,description
-#endif
-
- .globl SYMBOL(SafeFetch32_impl)
- .globl SYMBOL(SafeFetchN_impl)
- .globl SYMBOL(_SafeFetch32_fault)
- .globl SYMBOL(_SafeFetchN_fault)
- .globl SYMBOL(_SafeFetch32_continuation)
- .globl SYMBOL(_SafeFetchN_continuation)
-
-#ifdef __APPLE__
- .private_extern SYMBOL(SafeFetch32_impl)
- .private_extern SYMBOL(SafeFetchN_impl)
- .private_extern SYMBOL(_SafeFetch32_fault)
- .private_extern SYMBOL(_SafeFetchN_fault)
- .private_extern SYMBOL(_SafeFetch32_continuation)
- .private_extern SYMBOL(_SafeFetchN_continuation)
-#endif
+#include "defs.S.inc"
.text
@@ -53,12 +30,11 @@
#
# %rdi : address
# %esi : defaultval
- ELF_TYPE(SafeFetch32_impl,@function)
-SYMBOL(SafeFetch32_impl:)
-SYMBOL(_SafeFetch32_fault:)
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
movl (%rdi), %eax
ret
-SYMBOL(_SafeFetch32_continuation:)
+DECLARE_FUNC(_SafeFetch32_continuation):
movl %esi, %eax
ret
@@ -66,11 +42,10 @@ SYMBOL(_SafeFetch32_continuation:)
#
# %rdi : address
# %rsi : defaultval
- ELF_TYPE(SafeFetchN_impl,@function)
-SYMBOL(SafeFetchN_impl:)
-SYMBOL(_SafeFetchN_fault:)
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
movq (%rdi), %rax
ret
-SYMBOL(_SafeFetchN_continuation:)
+DECLARE_FUNC(_SafeFetchN_continuation):
movq %rsi, %rax
ret
diff --git a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
index e67206a9d497f..2724e78862de3 100644
--- a/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/atomic_linux_aarch64.S
@@ -1,4 +1,5 @@
// Copyright (c) 2021, Red Hat Inc. All rights reserved.
+// Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -19,14 +20,12 @@
// or visit www.oracle.com if you need additional information or have any
// questions.
-
+#include "defs.S.inc"
.text
- .globl aarch64_atomic_fetch_add_8_default_impl
- .hidden aarch64_atomic_fetch_add_8_default_impl
.align 5
-aarch64_atomic_fetch_add_8_default_impl:
+DECLARE_FUNC(aarch64_atomic_fetch_add_8_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
ldaddal x1, x2, [x0]
#else
@@ -40,10 +39,8 @@ aarch64_atomic_fetch_add_8_default_impl:
mov x0, x2
ret
- .globl aarch64_atomic_fetch_add_4_default_impl
- .hidden aarch64_atomic_fetch_add_4_default_impl
.align 5
-aarch64_atomic_fetch_add_4_default_impl:
+DECLARE_FUNC(aarch64_atomic_fetch_add_4_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
ldaddal w1, w2, [x0]
#else
@@ -57,10 +54,8 @@ aarch64_atomic_fetch_add_4_default_impl:
mov w0, w2
ret
- .global aarch64_atomic_fetch_add_8_relaxed_default_impl
- .hidden aarch64_atomic_fetch_add_8_relaxed_default_impl
.align 5
-aarch64_atomic_fetch_add_8_relaxed_default_impl:
+DECLARE_FUNC(aarch64_atomic_fetch_add_8_relaxed_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
ldadd x1, x2, [x0]
#else
@@ -73,10 +68,8 @@ aarch64_atomic_fetch_add_8_relaxed_default_impl:
mov x0, x2
ret
- .global aarch64_atomic_fetch_add_4_relaxed_default_impl
- .hidden aarch64_atomic_fetch_add_4_relaxed_default_impl
.align 5
-aarch64_atomic_fetch_add_4_relaxed_default_impl:
+DECLARE_FUNC(aarch64_atomic_fetch_add_4_relaxed_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
ldadd w1, w2, [x0]
#else
@@ -89,10 +82,8 @@ aarch64_atomic_fetch_add_4_relaxed_default_impl:
mov w0, w2
ret
- .globl aarch64_atomic_xchg_4_default_impl
- .hidden aarch64_atomic_xchg_4_default_impl
.align 5
-aarch64_atomic_xchg_4_default_impl:
+DECLARE_FUNC(aarch64_atomic_xchg_4_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
swpal w1, w2, [x0]
#else
@@ -105,10 +96,8 @@ aarch64_atomic_xchg_4_default_impl:
mov w0, w2
ret
- .globl aarch64_atomic_xchg_8_default_impl
- .hidden aarch64_atomic_xchg_8_default_impl
.align 5
-aarch64_atomic_xchg_8_default_impl:
+DECLARE_FUNC(aarch64_atomic_xchg_8_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
swpal x1, x2, [x0]
#else
@@ -121,10 +110,8 @@ aarch64_atomic_xchg_8_default_impl:
mov x0, x2
ret
- .globl aarch64_atomic_cmpxchg_1_default_impl
- .hidden aarch64_atomic_cmpxchg_1_default_impl
.align 5
-aarch64_atomic_cmpxchg_1_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_1_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casalb w3, w2, [x0]
@@ -142,10 +129,8 @@ aarch64_atomic_cmpxchg_1_default_impl:
mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_4_default_impl
- .hidden aarch64_atomic_cmpxchg_4_default_impl
.align 5
-aarch64_atomic_cmpxchg_4_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_4_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casal w3, w2, [x0]
@@ -162,10 +147,8 @@ aarch64_atomic_cmpxchg_4_default_impl:
mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_8_default_impl
- .hidden aarch64_atomic_cmpxchg_8_default_impl
.align 5
-aarch64_atomic_cmpxchg_8_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_8_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casal x3, x2, [x0]
@@ -182,10 +165,8 @@ aarch64_atomic_cmpxchg_8_default_impl:
mov x0, x3
ret
- .globl aarch64_atomic_cmpxchg_4_release_default_impl
- .hidden aarch64_atomic_cmpxchg_4_release_default_impl
.align 5
-aarch64_atomic_cmpxchg_4_release_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_4_release_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casl w3, w2, [x0]
@@ -200,10 +181,8 @@ aarch64_atomic_cmpxchg_4_release_default_impl:
1: mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_8_release_default_impl
- .hidden aarch64_atomic_cmpxchg_8_release_default_impl
.align 5
-aarch64_atomic_cmpxchg_8_release_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_8_release_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casl x3, x2, [x0]
@@ -218,10 +197,8 @@ aarch64_atomic_cmpxchg_8_release_default_impl:
1: mov x0, x3
ret
- .globl aarch64_atomic_cmpxchg_4_seq_cst_default_impl
- .hidden aarch64_atomic_cmpxchg_4_seq_cst_default_impl
.align 5
-aarch64_atomic_cmpxchg_4_seq_cst_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_4_seq_cst_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casal w3, w2, [x0]
@@ -236,10 +213,8 @@ aarch64_atomic_cmpxchg_4_seq_cst_default_impl:
1: mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_8_seq_cst_default_impl
- .hidden aarch64_atomic_cmpxchg_8_seq_cst_default_impl
.align 5
-aarch64_atomic_cmpxchg_8_seq_cst_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_8_seq_cst_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casal x3, x2, [x0]
@@ -254,10 +229,8 @@ aarch64_atomic_cmpxchg_8_seq_cst_default_impl:
1: mov x0, x3
ret
-.globl aarch64_atomic_cmpxchg_1_relaxed_default_impl
-.hidden aarch64_atomic_cmpxchg_1_relaxed_default_impl
.align 5
-aarch64_atomic_cmpxchg_1_relaxed_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_1_relaxed_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
casb w3, w2, [x0]
@@ -273,10 +246,8 @@ aarch64_atomic_cmpxchg_1_relaxed_default_impl:
1: mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_4_relaxed_default_impl
- .hidden aarch64_atomic_cmpxchg_4_relaxed_default_impl
.align 5
-aarch64_atomic_cmpxchg_4_relaxed_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_4_relaxed_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
cas w3, w2, [x0]
@@ -291,10 +262,8 @@ aarch64_atomic_cmpxchg_4_relaxed_default_impl:
1: mov w0, w3
ret
- .globl aarch64_atomic_cmpxchg_8_relaxed_default_impl
- .hidden aarch64_atomic_cmpxchg_8_relaxed_default_impl
.align 5
-aarch64_atomic_cmpxchg_8_relaxed_default_impl:
+DECLARE_FUNC(aarch64_atomic_cmpxchg_8_relaxed_default_impl):
#ifdef __ARM_FEATURE_ATOMICS
mov x3, x1
cas x3, x2, [x0]
diff --git a/src/hotspot/os_cpu/linux_aarch64/copy_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/copy_linux_aarch64.S
index ade867ace016b..f9702d5e55419 100644
--- a/src/hotspot/os_cpu/linux_aarch64/copy_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/copy_linux_aarch64.S
@@ -1,5 +1,6 @@
/*
* Copyright (c) 2016, Linaro Ltd. All rights reserved.
+ * Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -21,11 +22,8 @@
* questions.
*
*/
- .global _Copy_conjoint_words
- .global _Copy_disjoint_words
- .hidden _Copy_conjoint_words
- .hidden _Copy_disjoint_words
+#include "defs.S.inc"
s .req x0
d .req x1
@@ -40,7 +38,7 @@ t6 .req x9
t7 .req x10
.align 6
-_Copy_disjoint_words:
+DECLARE_FUNC(_Copy_disjoint_words):
// Ensure 2 word aligned
tbz s, #3, fwd_copy_aligned
ldr t0, [s], #8
@@ -138,7 +136,7 @@ fwd_copy_drain:
ret
.align 6
-_Copy_conjoint_words:
+DECLARE_FUNC(_Copy_conjoint_words):
sub t0, d, s
cmp t0, count, lsl #3
bhs _Copy_disjoint_words
diff --git a/src/hotspot/os_cpu/linux_aarch64/safefetch_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/safefetch_linux_aarch64.S
index cfbd8f45f285f..82d989a9b8830 100644
--- a/src/hotspot/os_cpu/linux_aarch64/safefetch_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/safefetch_linux_aarch64.S
@@ -23,29 +23,17 @@
*
*/
- .globl SafeFetchN_impl
- .globl _SafeFetchN_fault
- .globl _SafeFetchN_continuation
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
-
- .hidden SafeFetchN_impl
- .hidden _SafeFetchN_fault
- .hidden _SafeFetchN_continuation
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
# x0 : address
# x1 : defaultval
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
ldr w0, [x0]
ret
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
mov x0, x1
ret
@@ -53,10 +41,10 @@ _SafeFetch32_continuation:
#
# x1 : address
# x0 : defaultval
-SafeFetchN_impl:
-_SafeFetchN_fault:
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
ldr x0, [x0]
ret
-_SafeFetchN_continuation:
+DECLARE_FUNC(_SafeFetchN_continuation):
mov x0, x1
ret
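
The _fault and _continuation labels in the SafeFetch files exist for HotSpot's
signal handler: SafeFetch32/SafeFetchN deliberately perform a load that may
fault, and if a SIGSEGV/SIGBUS arrives with the PC exactly at the fault label,
the handler moves the PC to the continuation, which returns the caller's
default value. A stripped-down sketch of that hand-off (the get_pc/set_pc
helpers are assumed stand-ins for the per-platform ucontext accessors; the
real logic lives in HotSpot's platform signal handling code):

    #include <csignal>
    #include <ucontext.h>

    extern "C" char _SafeFetch32_fault, _SafeFetch32_continuation;

    // Assumed per-platform helpers for reading/writing the saved PC.
    void* get_pc(ucontext_t* uc);
    void  set_pc(ucontext_t* uc, void* pc);

    // Returns true if the fault was a SafeFetch probe and was redirected.
    bool maybe_handle_safefetch(int sig, ucontext_t* uc) {
      if (sig != SIGSEGV && sig != SIGBUS) return false;
      if (get_pc(uc) == (void*)&_SafeFetch32_fault) {
        set_pc(uc, (void*)&_SafeFetch32_continuation);  // resume at "return default"
        return true;
      }
      return false;
    }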
diff --git a/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S b/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
index f9f5aab2a6bd8..ca31585871dd1 100644
--- a/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
+++ b/src/hotspot/os_cpu/linux_aarch64/threadLS_linux_aarch64.S
@@ -1,4 +1,5 @@
// Copyright (c) 2015, 2022, Red Hat Inc. All rights reserved.
+// Copyright (c) 2024, Oracle and/or its affiliates. All rights reserved.
// DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
//
// This code is free software; you can redistribute it and/or modify it
@@ -19,17 +20,15 @@
// or visit www.oracle.com if you need additional information or have any
// questions.
+#include "defs.S.inc"
+
// JavaThread::aarch64_get_thread_helper()
//
// Return the current thread pointer in x0.
// Clobber x1, flags.
// All other registers are preserved.
- .global _ZN10JavaThread25aarch64_get_thread_helperEv
- .hidden _ZN10JavaThread25aarch64_get_thread_helperEv
- .type _ZN10JavaThread25aarch64_get_thread_helperEv, %function
-
-_ZN10JavaThread25aarch64_get_thread_helperEv:
+DECLARE_FUNC(_ZN10JavaThread25aarch64_get_thread_helperEv):
hint #0x19 // paciasp
stp x29, x30, [sp, -16]!
adrp x0, :tlsdesc:_ZN6Thread12_thr_currentE
diff --git a/src/hotspot/os_cpu/linux_arm/linux_arm_32.S b/src/hotspot/os_cpu/linux_arm/linux_arm_32.S
index ad88c58ce78ce..a866858dae520 100644
--- a/src/hotspot/os_cpu/linux_arm/linux_arm_32.S
+++ b/src/hotspot/os_cpu/linux_arm/linux_arm_32.S
@@ -21,6 +21,7 @@
# questions.
#
+#include "defs.S.inc"
# NOTE WELL! The _Copy functions are called directly
# from server-compiler-generated code via CallLeafNoFP,
@@ -28,55 +29,24 @@
# point or use it in the same manner as does the server
# compiler.
- .globl SpinPause
- .hidden SpinPause
- .type SpinPause, %function
- .globl _Copy_arrayof_conjoint_bytes
- .hidden _Copy_arrayof_conjoint_bytes
- .type _Copy_arrayof_conjoint_bytes, %function
- .globl _Copy_disjoint_words
- .hidden _Copy_disjoint_words
- .type _Copy_disjoint_words, %function
- .globl _Copy_conjoint_words
- .hidden _Copy_conjoint_words
- .type _Copy_conjoint_words, %function
- .globl _Copy_conjoint_jshorts_atomic
- .hidden _Copy_conjoint_jshorts_atomic
- .type _Copy_conjoint_jshorts_atomic, %function
- .globl _Copy_arrayof_conjoint_jshorts
- .hidden _Copy_arrayof_conjoint_jshorts
- .type _Copy_arrayof_conjoint_jshorts, %function
- .globl _Copy_conjoint_jints_atomic
- .hidden _Copy_conjoint_jints_atomic
- .type _Copy_conjoint_jints_atomic, %function
- .globl _Copy_arrayof_conjoint_jints
- .hidden _Copy_arrayof_conjoint_jints
- .type _Copy_arrayof_conjoint_jints, %function
- .globl _Copy_conjoint_jlongs_atomic
- .hidden _Copy_conjoint_jlongs_atomic
- .type _Copy_conjoint_jlongs_atomic, %function
- .globl _Copy_arrayof_conjoint_jlongs
- .hidden _Copy_arrayof_conjoint_jlongs
- .type _Copy_arrayof_conjoint_jlongs, %function
-
from .req r0
to .req r1
.text
-SpinPause:
+DECLARE_FUNC(SpinPause):
bx LR
# Support for void Copy::arrayof_conjoint_bytes(void* from,
# void* to,
# size_t count)
-_Copy_arrayof_conjoint_bytes:
+DECLARE_FUNC(_Copy_arrayof_conjoint_bytes):
swi 0x9f0001
# Support for void Copy::disjoint_words(void* from,
# void* to,
# size_t count)
-_Copy_disjoint_words:
+DECLARE_FUNC(_Copy_disjoint_words):
stmdb sp!, {r3 - r9, ip}
cmp r2, #0
@@ -121,7 +91,7 @@ disjoint_words_finish:
# Support for void Copy::conjoint_words(void* from,
# void* to,
# size_t count)
-_Copy_conjoint_words:
+DECLARE_FUNC(_Copy_conjoint_words):
stmdb sp!, {r3 - r9, ip}
cmp r2, #0
@@ -201,7 +171,7 @@ conjoint_words_finish:
# Support for void Copy::conjoint_jshorts_atomic(void* from,
# void* to,
# size_t count)
-_Copy_conjoint_jshorts_atomic:
+DECLARE_FUNC(_Copy_conjoint_jshorts_atomic):
stmdb sp!, {r3 - r9, ip}
cmp r2, #0
@@ -425,21 +395,21 @@ conjoint_shorts_finish:
# Support for void Copy::arrayof_conjoint_jshorts(void* from,
# void* to,
# size_t count)
-_Copy_arrayof_conjoint_jshorts:
+DECLARE_FUNC(_Copy_arrayof_conjoint_jshorts):
swi 0x9f0001
# Support for void Copy::conjoint_jints_atomic(void* from,
# void* to,
# size_t count)
-_Copy_conjoint_jints_atomic:
-_Copy_arrayof_conjoint_jints:
+DECLARE_FUNC(_Copy_conjoint_jints_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jints):
swi 0x9f0001
# Support for void Copy::conjoint_jlongs_atomic(jlong* from,
# jlong* to,
# size_t count)
-_Copy_conjoint_jlongs_atomic:
-_Copy_arrayof_conjoint_jlongs:
+DECLARE_FUNC(_Copy_conjoint_jlongs_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jlongs):
stmdb sp!, {r3 - r9, ip}
cmp r2, #0
diff --git a/src/hotspot/os_cpu/linux_arm/safefetch_linux_arm.S b/src/hotspot/os_cpu/linux_arm/safefetch_linux_arm.S
index 07e90fa3079f2..5c61774a71d2c 100644
--- a/src/hotspot/os_cpu/linux_arm/safefetch_linux_arm.S
+++ b/src/hotspot/os_cpu/linux_arm/safefetch_linux_arm.S
@@ -23,24 +23,16 @@
*
*/
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
-
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
-
- .type SafeFetch32_impl, %function
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
# r0 : address
# r1 : defaultval
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
ldr r0, [r0]
bx lr
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
mov r0, r1
bx lr
diff --git a/src/hotspot/os_cpu/linux_ppc/safefetch_linux_ppc.S b/src/hotspot/os_cpu/linux_ppc/safefetch_linux_ppc.S
index 8c96edf01b4d0..43d8a4e5e98a9 100644
--- a/src/hotspot/os_cpu/linux_ppc/safefetch_linux_ppc.S
+++ b/src/hotspot/os_cpu/linux_ppc/safefetch_linux_ppc.S
@@ -23,30 +23,18 @@
*
*/
- .globl SafeFetchN_impl
- .globl _SafeFetchN_fault
- .globl _SafeFetchN_continuation
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
-
- .hidden SafeFetchN_impl
- .hidden _SafeFetchN_fault
- .hidden _SafeFetchN_continuation
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
# r3 : address
# r4 : defaultval
# r3 : retval
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
lwa %r3, 0(%r3)
blr
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
mr %r3, %r4
blr
@@ -55,10 +43,10 @@ _SafeFetch32_continuation:
# r3 : address
# r4 : defaultval
# r3 : retval
-SafeFetchN_impl:
-_SafeFetchN_fault:
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
ld %r3, 0(%r3)
blr
-_SafeFetchN_continuation:
+DECLARE_FUNC(_SafeFetchN_continuation):
mr %r3, %r4
blr
diff --git a/src/hotspot/os_cpu/linux_riscv/safefetch_linux_riscv.S b/src/hotspot/os_cpu/linux_riscv/safefetch_linux_riscv.S
index 150df7567bdb8..8c325fe55ca91 100644
--- a/src/hotspot/os_cpu/linux_riscv/safefetch_linux_riscv.S
+++ b/src/hotspot/os_cpu/linux_riscv/safefetch_linux_riscv.S
@@ -23,30 +23,18 @@
*
*/
- .globl SafeFetchN_impl
- .globl _SafeFetchN_fault
- .globl _SafeFetchN_continuation
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
-
- .hidden SafeFetchN_impl
- .hidden _SafeFetchN_fault
- .hidden _SafeFetchN_continuation
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
# x10 (a0) : address
# x11 (a1) : defaultval
# x10 (a0) : retval
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
lw a0, 0(a0)
ret
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
mv a0, a1
ret
@@ -55,10 +43,10 @@ _SafeFetch32_continuation:
# x10 (a0) : address
# x11 (a1) : defaultval
# x10 (a0) : retval
-SafeFetchN_impl:
-_SafeFetchN_fault:
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
ld a0, 0(a0)
ret
-_SafeFetchN_continuation:
+DECLARE_FUNC(_SafeFetchN_continuation):
mv a0, a1
ret
diff --git a/src/hotspot/os_cpu/linux_s390/safefetch_linux_s390.S b/src/hotspot/os_cpu/linux_s390/safefetch_linux_s390.S
index 43d50c798e534..4ba9f7fb24874 100644
--- a/src/hotspot/os_cpu/linux_s390/safefetch_linux_s390.S
+++ b/src/hotspot/os_cpu/linux_s390/safefetch_linux_s390.S
@@ -23,30 +23,18 @@
*
*/
- .globl SafeFetchN_impl
- .globl _SafeFetchN_fault
- .globl _SafeFetchN_continuation
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
-
- .hidden SafeFetchN_impl
- .hidden _SafeFetchN_fault
- .hidden _SafeFetchN_continuation
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
+#include "defs.S.inc"
# Support for int SafeFetch32(int* address, int defaultval);
#
# r2 : address
# r3 : defaultval
# r2 : retval
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
lgf %r2, 0(%r2)
br %r14
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
lgr %r2, %r3
br %r14
@@ -55,10 +43,10 @@ _SafeFetch32_continuation:
# r2 : address
# r3 : defaultval
# r2 : retval
-SafeFetchN_impl:
-_SafeFetchN_fault:
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
lg %r2, 0(%r2)
br %r14
-_SafeFetchN_continuation:
+DECLARE_FUNC(_SafeFetchN_continuation):
lgr %r2, %r3
br %r14
diff --git a/src/hotspot/os_cpu/linux_x86/linux_x86_32.S b/src/hotspot/os_cpu/linux_x86/linux_x86_32.S
index e23cd2b9164ae..43a9a38e57f1d 100644
--- a/src/hotspot/os_cpu/linux_x86/linux_x86_32.S
+++ b/src/hotspot/os_cpu/linux_x86/linux_x86_32.S
@@ -21,7 +21,7 @@
# questions.
#
- .globl SpinPause
+#include "defs.S.inc"
# NOTE WELL! The _Copy functions are called directly
# from server-compiler-generated code via CallLeafNoFP,
@@ -29,35 +29,10 @@
# point or use it in the same manner as does the server
# compiler.
- .globl _Copy_arrayof_conjoint_bytes
- .globl _Copy_conjoint_jshorts_atomic
- .globl _Copy_arrayof_conjoint_jshorts
- .globl _Copy_conjoint_jints_atomic
- .globl _Copy_arrayof_conjoint_jints
- .globl _Copy_conjoint_jlongs_atomic
- .globl _mmx_Copy_arrayof_conjoint_jshorts
-
- .globl _Atomic_cmpxchg_long
- .globl _Atomic_move_long
-
- .hidden SpinPause
-
- .hidden _Copy_arrayof_conjoint_bytes
- .hidden _Copy_conjoint_jshorts_atomic
- .hidden _Copy_arrayof_conjoint_jshorts
- .hidden _Copy_conjoint_jints_atomic
- .hidden _Copy_arrayof_conjoint_jints
- .hidden _Copy_conjoint_jlongs_atomic
- .hidden _mmx_Copy_arrayof_conjoint_jshorts
-
- .hidden _Atomic_cmpxchg_long
- .hidden _Atomic_move_long
-
.text
- .type SpinPause,@function
.p2align 4,,15
-SpinPause:
+DECLARE_FUNC(SpinPause):
rep
nop
movl $1, %eax
@@ -68,8 +43,7 @@ SpinPause:
# size_t count)
#
.p2align 4,,15
- .type _Copy_arrayof_conjoint_bytes,@function
-_Copy_arrayof_conjoint_bytes:
+DECLARE_FUNC(_Copy_arrayof_conjoint_bytes):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -156,8 +130,7 @@ acb_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- .type _Copy_conjoint_jshorts_atomic,@function
-_Copy_conjoint_jshorts_atomic:
+DECLARE_FUNC(_Copy_conjoint_jshorts_atomic):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -243,8 +216,7 @@ cs_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- .type _Copy_arrayof_conjoint_jshorts,@function
-_Copy_arrayof_conjoint_jshorts:
+DECLARE_FUNC(_Copy_arrayof_conjoint_jshorts):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -320,10 +292,8 @@ acs_CopyLeft:
# Equivalent to
# arrayof_conjoint_jints
.p2align 4,,15
- .type _Copy_conjoint_jints_atomic,@function
- .type _Copy_arrayof_conjoint_jints,@function
-_Copy_conjoint_jints_atomic:
-_Copy_arrayof_conjoint_jints:
+DECLARE_FUNC(_Copy_conjoint_jints_atomic):
+DECLARE_FUNC(_Copy_arrayof_conjoint_jints):
pushl %esi
movl 4+12(%esp),%ecx # count
pushl %edi
@@ -397,8 +367,7 @@ ci_CopyLeft:
# }
*/
.p2align 4,,15
- .type _Copy_conjoint_jlongs_atomic,@function
-_Copy_conjoint_jlongs_atomic:
+DECLARE_FUNC(_Copy_conjoint_jlongs_atomic):
movl 4+8(%esp),%ecx # count
movl 4+0(%esp),%eax # from
movl 4+4(%esp),%edx # to
@@ -426,8 +395,7 @@ cla_CopyLeft:
# void* to,
# size_t count)
.p2align 4,,15
- .type _mmx_Copy_arrayof_conjoint_jshorts,@function
-_mmx_Copy_arrayof_conjoint_jshorts:
+DECLARE_FUNC(_mmx_Copy_arrayof_conjoint_jshorts):
pushl %esi
movl 4+12(%esp),%ecx
pushl %edi
@@ -524,8 +492,7 @@ mmx_acs_CopyLeft:
# jlong exchange_value)
#
.p2align 4,,15
- .type _Atomic_cmpxchg_long,@function
-_Atomic_cmpxchg_long:
+DECLARE_FUNC(_Atomic_cmpxchg_long):
# 8(%esp) : return PC
pushl %ebx # 4(%esp) : old %ebx
pushl %edi # 0(%esp) : old %edi
@@ -543,8 +510,7 @@ _Atomic_cmpxchg_long:
# Support for jlong Atomic::load and Atomic::store.
# void _Atomic_move_long(const volatile jlong* src, volatile jlong* dst)
.p2align 4,,15
- .type _Atomic_move_long,@function
-_Atomic_move_long:
+DECLARE_FUNC(_Atomic_move_long):
movl 4(%esp), %eax # src
fildll (%eax)
movl 8(%esp), %eax # dest
diff --git a/src/hotspot/os_cpu/linux_x86/linux_x86_64.S b/src/hotspot/os_cpu/linux_x86/linux_x86_64.S
index 65580a194afab..e15f27f7a6ad4 100644
--- a/src/hotspot/os_cpu/linux_x86/linux_x86_64.S
+++ b/src/hotspot/os_cpu/linux_x86/linux_x86_64.S
@@ -21,7 +21,7 @@
# questions.
#
- .globl SpinPause
+#include "defs.S.inc"
# NOTE WELL! The _Copy functions are called directly
# from server-compiler-generated code via CallLeafNoFP,
@@ -29,29 +29,10 @@
# point or use it in the same manner as does the server
# compiler.
- .globl _Copy_arrayof_conjoint_bytes
- .globl _Copy_arrayof_conjoint_jshorts
- .globl _Copy_conjoint_jshorts_atomic
- .globl _Copy_arrayof_conjoint_jints
- .globl _Copy_conjoint_jints_atomic
- .globl _Copy_arrayof_conjoint_jlongs
- .globl _Copy_conjoint_jlongs_atomic
-
- .hidden SpinPause
-
- .hidden _Copy_arrayof_conjoint_bytes
- .hidden _Copy_arrayof_conjoint_jshorts
- .hidden _Copy_conjoint_jshorts_atomic
- .hidden _Copy_arrayof_conjoint_jints
- .hidden _Copy_conjoint_jints_atomic
- .hidden _Copy_arrayof_conjoint_jlongs
- .hidden _Copy_conjoint_jlongs_atomic
-
.text
.align 16
- .type SpinPause,@function
-SpinPause:
+DECLARE_FUNC(SpinPause):
rep
nop
movq $1, %rax
@@ -65,8 +46,7 @@ SpinPause:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- .type _Copy_arrayof_conjoint_bytes,@function
-_Copy_arrayof_conjoint_bytes:
+DECLARE_FUNC(_Copy_arrayof_conjoint_bytes):
movq %rdx,%r8 # byte count
shrq $3,%rdx # qword count
cmpq %rdi,%rsi
@@ -167,10 +147,8 @@ acb_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- .type _Copy_arrayof_conjoint_jshorts,@function
- .type _Copy_conjoint_jshorts_atomic,@function
-_Copy_arrayof_conjoint_jshorts:
-_Copy_conjoint_jshorts_atomic:
+DECLARE_FUNC(_Copy_arrayof_conjoint_jshorts):
+DECLARE_FUNC(_Copy_conjoint_jshorts_atomic):
movq %rdx,%r8 # word count
shrq $2,%rdx # qword count
cmpq %rdi,%rsi
@@ -257,10 +235,8 @@ acs_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- .type _Copy_arrayof_conjoint_jints,@function
- .type _Copy_conjoint_jints_atomic,@function
-_Copy_arrayof_conjoint_jints:
-_Copy_conjoint_jints_atomic:
+DECLARE_FUNC(_Copy_arrayof_conjoint_jints):
+DECLARE_FUNC(_Copy_conjoint_jints_atomic):
movq %rdx,%r8 # dword count
shrq %rdx # qword count
cmpq %rdi,%rsi
@@ -336,10 +312,8 @@ aci_CopyLeft:
# rdx - count, treated as ssize_t
#
.p2align 4,,15
- .type _Copy_arrayof_conjoint_jlongs,@function
- .type _Copy_conjoint_jlongs_atomic,@function
-_Copy_arrayof_conjoint_jlongs:
-_Copy_conjoint_jlongs_atomic:
+DECLARE_FUNC(_Copy_arrayof_conjoint_jlongs):
+DECLARE_FUNC(_Copy_conjoint_jlongs_atomic):
cmpq %rdi,%rsi
leaq -8(%rdi,%rdx,8),%rax # from + count*8 - 8
jbe acl_CopyRight
diff --git a/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_32.S b/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_32.S
index 54775cb7e8ede..73f6cdf38c9cc 100644
--- a/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_32.S
+++ b/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_32.S
@@ -21,13 +21,8 @@
# or visit www.oracle.com if you need additional information or have any
# questions.
#
- .globl SafeFetch32_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetch32_continuation
- .hidden SafeFetch32_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetch32_continuation
+#include "defs.S.inc"
.text
@@ -36,12 +31,11 @@
# 8(%esp) : default value
# 4(%esp) : crash address
# 0(%esp) : return pc
- .type _SafeFetch32_impl,@function
-SafeFetch32_impl:
+DECLARE_FUNC(SafeFetch32_impl):
movl 4(%esp),%ecx # load address from stack
-_SafeFetch32_fault:
+DECLARE_FUNC(_SafeFetch32_fault):
movl (%ecx), %eax # load target value, may fault
ret
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
movl 8(%esp),%eax # load default value from stack
ret
diff --git a/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_64.S b/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_64.S
index 1937e71708897..b3145141cf2bd 100644
--- a/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_64.S
+++ b/src/hotspot/os_cpu/linux_x86/safefetch_linux_x86_64.S
@@ -21,33 +21,20 @@
# or visit www.oracle.com if you need additional information or have any
# questions.
#
- .globl SafeFetch32_impl
- .globl SafeFetchN_impl
- .globl _SafeFetch32_fault
- .globl _SafeFetchN_fault
- .globl _SafeFetch32_continuation
- .globl _SafeFetchN_continuation
- .hidden SafeFetch32_impl
- .hidden SafeFetchN_impl
- .hidden _SafeFetch32_fault
- .hidden _SafeFetchN_fault
- .hidden _SafeFetch32_continuation
- .hidden _SafeFetchN_continuation
+#include "defs.S.inc"
.text
-
# Support for int SafeFetch32(int* address, int defaultval);
#
# %rdi : address
# %esi : defaultval
- .type SafeFetch32_impl,@function
-SafeFetch32_impl:
-_SafeFetch32_fault:
+DECLARE_FUNC(SafeFetch32_impl):
+DECLARE_FUNC(_SafeFetch32_fault):
movl (%rdi), %eax # load target value, may fault
ret
-_SafeFetch32_continuation:
+DECLARE_FUNC(_SafeFetch32_continuation):
movl %esi, %eax # return default
ret
@@ -55,11 +42,10 @@ _SafeFetch32_continuation:
#
# %rdi : address
# %rsi : defaultval
- .type SafeFetchN_impl,@function
-SafeFetchN_impl:
-_SafeFetchN_fault:
+DECLARE_FUNC(SafeFetchN_impl):
+DECLARE_FUNC(_SafeFetchN_fault):
movq (%rdi), %rax # load target value, may fault
ret
-_SafeFetchN_continuation:
+DECLARE_FUNC(_SafeFetchN_continuation):
movq %rsi, %rax # return default
ret
diff --git a/src/hotspot/share/asm/codeBuffer.cpp b/src/hotspot/share/asm/codeBuffer.cpp
index 4bc33cc89a63b..e818853870e55 100644
--- a/src/hotspot/share/asm/codeBuffer.cpp
+++ b/src/hotspot/share/asm/codeBuffer.cpp
@@ -1012,6 +1012,8 @@ void CodeBuffer::log_section_sizes(const char* name) {
}
bool CodeBuffer::finalize_stubs() {
+  // Record the size of the code in the instructions section before stubs are generated
+ _main_code_size = _insts.size();
if (_finalize_stubs && !pd_finalize_stubs()) {
// stub allocation failure
return false;
diff --git a/src/hotspot/share/asm/codeBuffer.hpp b/src/hotspot/share/asm/codeBuffer.hpp
index 48476bedfe2f6..5cff78a96cc83 100644
--- a/src/hotspot/share/asm/codeBuffer.hpp
+++ b/src/hotspot/share/asm/codeBuffer.hpp
@@ -427,6 +427,9 @@ class CodeBuffer: public StackObj DEBUG_ONLY(COMMA private Scrubber) {
address _total_start; // first address of combined memory buffer
csize_t _total_size; // size in bytes of combined memory buffer
+  // Size of the code in the instructions section, excluding stubs generated at its end
+ csize_t _main_code_size;
+
OopRecorder* _oop_recorder;
OopRecorder _default_oop_recorder; // override with initialize_oop_recorder
@@ -457,6 +460,7 @@ class CodeBuffer: public StackObj DEBUG_ONLY(COMMA private Scrubber) {
_oop_recorder = nullptr;
_overflow_arena = nullptr;
_last_insn = nullptr;
+ _main_code_size = 0;
_finalize_stubs = false;
_shared_stub_to_interp_requests = nullptr;
_shared_trampoline_requests = nullptr;
@@ -630,6 +634,9 @@ class CodeBuffer: public StackObj DEBUG_ONLY(COMMA private Scrubber) {
// number of bytes remaining in the insts section
csize_t insts_remaining() const { return _insts.remaining(); }
+  // size of the code in the insts section, excluding trailing stubs
+ csize_t main_code_size() const { return _main_code_size; }
+
// is a given address in the insts section? (2nd version is end-inclusive)
bool insts_contains(address pc) const { return _insts.contains(pc); }
bool insts_contains2(address pc) const { return _insts.contains2(pc); }
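
The new _main_code_size field records how large the instructions section was
before finalize_stubs() appended stubs, so consumers can tell "main" code
apart from trailing stubs. A hypothetical use (only insts_size() and
main_code_size() are accessors from this header; the function itself is
illustrative):

    #include <cstdio>

    void log_stub_overhead(const CodeBuffer& cb) {
      csize_t total = cb.insts_size();      // whole insts section
      csize_t main  = cb.main_code_size();  // snapshot taken in finalize_stubs()
      csize_t stubs = total - main;         // bytes of stubs appended at the end
      printf("main=%d stub=%d total=%d\n", main, stubs, total);
    }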
diff --git a/src/hotspot/share/c1/c1_Compiler.cpp b/src/hotspot/share/c1/c1_Compiler.cpp
index bdbeb39f89a4a..3a902bb4230c0 100644
--- a/src/hotspot/share/c1/c1_Compiler.cpp
+++ b/src/hotspot/share/c1/c1_Compiler.cpp
@@ -235,6 +235,9 @@ bool Compiler::is_intrinsic_supported(vmIntrinsics::ID id) {
case vmIntrinsics::_counterTime:
#endif
case vmIntrinsics::_getObjectSize:
+#if defined(X86) || defined(AARCH64) || defined(S390)
+ case vmIntrinsics::_clone:
+#endif
break;
case vmIntrinsics::_blackhole:
break;
diff --git a/src/hotspot/share/c1/c1_GraphBuilder.cpp b/src/hotspot/share/c1/c1_GraphBuilder.cpp
index a361f3da9c281..db025883b78ba 100644
--- a/src/hotspot/share/c1/c1_GraphBuilder.cpp
+++ b/src/hotspot/share/c1/c1_GraphBuilder.cpp
@@ -2026,8 +2026,11 @@ void GraphBuilder::invoke(Bytecodes::Code code) {
int index = state()->stack_size() - (target->arg_size_no_receiver() + 1);
receiver = state()->stack_at(index);
ciType* type = receiver->exact_type();
- if (type != nullptr && type->is_loaded() &&
- type->is_instance_klass() && !type->as_instance_klass()->is_interface()) {
+ if (type != nullptr && type->is_loaded()) {
+ assert(!type->is_instance_klass() || !type->as_instance_klass()->is_interface(), "Must not be an interface");
+ // Detects non-interface instances, primitive arrays, and some object arrays.
+ // Array receivers can only call Object methods, so we should be able to allow
+ // all object arrays here too, even those with unloaded types.
receiver_klass = (ciInstanceKlass*) type;
type_is_exact = true;
}
@@ -2243,7 +2246,7 @@ void GraphBuilder::new_instance(int klass_index) {
void GraphBuilder::new_type_array() {
ValueStack* state_before = copy_state_exhandling();
- apush(append_split(new NewTypeArray(ipop(), (BasicType)stream()->get_index(), state_before)));
+ apush(append_split(new NewTypeArray(ipop(), (BasicType)stream()->get_index(), state_before, true)));
}
@@ -3650,9 +3653,13 @@ void GraphBuilder::build_graph_for_intrinsic(ciMethod* callee, bool ignore_retur
case vmIntrinsics::_getAndSetReference : append_unsafe_get_and_set(callee, false); return;
case vmIntrinsics::_getCharStringU : append_char_access(callee, false); return;
case vmIntrinsics::_putCharStringU : append_char_access(callee, true); return;
+ case vmIntrinsics::_clone : append_alloc_array_copy(callee); return;
default:
break;
}
+ if (_inline_bailout_msg != nullptr) {
+ return;
+ }
// create intrinsic node
const bool has_receiver = !callee->is_static();
@@ -3714,6 +3721,9 @@ bool GraphBuilder::try_inline_intrinsics(ciMethod* callee, bool ignore_return) {
}
}
build_graph_for_intrinsic(callee, ignore_return);
+ if (_inline_bailout_msg != nullptr) {
+ return false;
+ }
return true;
}
@@ -4427,6 +4437,43 @@ void GraphBuilder::append_char_access(ciMethod* callee, bool is_store) {
}
}
+void GraphBuilder::append_alloc_array_copy(ciMethod* callee) {
+ const int args_base = state()->stack_size() - callee->arg_size();
+ ciType* receiver_type = state()->stack_at(args_base)->exact_type();
+ if (receiver_type == nullptr) {
+ inline_bailout("must have a receiver");
+ return;
+ }
+ if (!receiver_type->is_type_array_klass()) {
+ inline_bailout("clone array not primitive");
+ return;
+ }
+
+ ValueStack* state_before = copy_state_before();
+ state_before->set_force_reexecute();
+ Value src = apop();
+ BasicType basic_type = src->exact_type()->as_array_klass()->element_type()->basic_type();
+ Value length = append(new ArrayLength(src, state_before));
+ Value new_array = append_split(new NewTypeArray(length, basic_type, state_before, false));
+
+ ValueType* result_type = as_ValueType(callee->return_type());
+ vmIntrinsics::ID id = vmIntrinsics::_arraycopy;
+ Values* args = new Values(5);
+ args->push(src);
+ args->push(append(new Constant(new IntConstant(0))));
+ args->push(new_array);
+ args->push(append(new Constant(new IntConstant(0))));
+ args->push(length);
+ const bool has_receiver = true;
+ Intrinsic* array_copy = new Intrinsic(result_type, id,
+ args, has_receiver, state_before,
+ vmIntrinsics::preserves_state(id),
+ vmIntrinsics::can_trap(id));
+ array_copy->set_flag(Instruction::OmitChecksFlag, true);
+ append_split(array_copy);
+ apush(new_array);
+}
+
void GraphBuilder::print_inlining(ciMethod* callee, const char* msg, bool success) {
CompileLog* log = compilation()->log();
if (log != nullptr) {
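
The clone intrinsic added above expands, for a primitive array receiver, into:
read the length, allocate a same-typed array without zeroing it
(zero_array == false), then copy every element with an arraycopy whose runtime
checks are omitted (OmitChecksFlag) since they cannot fail for this shape. A
plain C++ analog of the emitted behavior (illustrative only; the real output
is C1 HIR nodes, not source code):

    #include <cstdlib>
    #include <cstring>

    // clone() of an int[] as the intrinsic shapes it:
    //   length = ArrayLength(src)
    //   copy   = NewTypeArray(length, T_INT, /*zero_array=*/false)
    //   arraycopy(src, 0, copy, 0, length)   // checks omitted
    // Zeroing is safe to skip because the copy overwrites every element.
    int* clone_int_array(const int* src, int length) {
      int* copy = static_cast<int*>(malloc(length * sizeof(int)));  // not zeroed
      memcpy(copy, src, length * sizeof(int));                      // full overwrite
      return copy;
    }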
diff --git a/src/hotspot/share/c1/c1_GraphBuilder.hpp b/src/hotspot/share/c1/c1_GraphBuilder.hpp
index 42233455d4c73..92b9a518a2070 100644
--- a/src/hotspot/share/c1/c1_GraphBuilder.hpp
+++ b/src/hotspot/share/c1/c1_GraphBuilder.hpp
@@ -379,6 +379,7 @@ class GraphBuilder {
void append_unsafe_CAS(ciMethod* callee);
void append_unsafe_get_and_set(ciMethod* callee, bool is_add);
void append_char_access(ciMethod* callee, bool is_store);
+ void append_alloc_array_copy(ciMethod* callee);
void print_inlining(ciMethod* callee, const char* msg, bool success = true);
diff --git a/src/hotspot/share/c1/c1_Instruction.hpp b/src/hotspot/share/c1/c1_Instruction.hpp
index 8f7fd698e7944..32ff3d9f61cab 100644
--- a/src/hotspot/share/c1/c1_Instruction.hpp
+++ b/src/hotspot/share/c1/c1_Instruction.hpp
@@ -364,6 +364,7 @@ class Instruction: public CompilationResourceObj {
InWorkListFlag,
DeoptimizeOnException,
KillsMemoryFlag,
+ OmitChecksFlag,
InstructionLastFlag
};
@@ -1327,16 +1328,19 @@ BASE(NewArray, StateSplit)
LEAF(NewTypeArray, NewArray)
private:
BasicType _elt_type;
+ bool _zero_array;
public:
// creation
- NewTypeArray(Value length, BasicType elt_type, ValueStack* state_before)
+ NewTypeArray(Value length, BasicType elt_type, ValueStack* state_before, bool zero_array)
: NewArray(length, state_before)
, _elt_type(elt_type)
+ , _zero_array(zero_array)
{}
// accessors
BasicType elt_type() const { return _elt_type; }
+ bool zero_array() const { return _zero_array; }
ciType* exact_type() const;
};
diff --git a/src/hotspot/share/c1/c1_LIR.cpp b/src/hotspot/share/c1/c1_LIR.cpp
index dee208c11bed3..a9975010a5690 100644
--- a/src/hotspot/share/c1/c1_LIR.cpp
+++ b/src/hotspot/share/c1/c1_LIR.cpp
@@ -196,14 +196,12 @@ void LIR_Op2::verify() const {
if (two_operand_lir_form) {
-#ifdef ASSERT
bool threeOperandForm = false;
#ifdef S390
// There are 3 operand shifts on S390 (see LIR_Assembler::shift_op()).
threeOperandForm =
code() == lir_shl ||
((code() == lir_shr || code() == lir_ushr) && (result_opr()->is_double_cpu() || in_opr1()->type() == T_OBJECT));
-#endif
#endif
switch (code()) {
@@ -353,7 +351,15 @@ LIR_OpArrayCopy::LIR_OpArrayCopy(LIR_Opr src, LIR_Opr src_pos, LIR_Opr dst, LIR_
, _tmp(tmp)
, _expected_type(expected_type)
, _flags(flags) {
+#if defined(X86) || defined(AARCH64) || defined(S390)
+ if (expected_type != nullptr && flags == 0) {
+ _stub = nullptr;
+ } else {
+ _stub = new ArrayCopyStub(this);
+ }
+#else
_stub = new ArrayCopyStub(this);
+#endif
}
LIR_OpUpdateCRC32::LIR_OpUpdateCRC32(LIR_Opr crc, LIR_Opr val, LIR_Opr res)
@@ -999,7 +1005,10 @@ void LIR_OpLabel::emit_code(LIR_Assembler* masm) {
void LIR_OpArrayCopy::emit_code(LIR_Assembler* masm) {
masm->emit_arraycopy(this);
- masm->append_code_stub(stub());
+ ArrayCopyStub* code_stub = stub();
+ if (code_stub != nullptr) {
+ masm->append_code_stub(code_stub);
+ }
}
void LIR_OpUpdateCRC32::emit_code(LIR_Assembler* masm) {
@@ -1365,7 +1374,7 @@ void LIR_List::allocate_object(LIR_Opr dst, LIR_Opr t1, LIR_Opr t2, LIR_Opr t3,
stub));
}
-void LIR_List::allocate_array(LIR_Opr dst, LIR_Opr len, LIR_Opr t1,LIR_Opr t2, LIR_Opr t3,LIR_Opr t4, BasicType type, LIR_Opr klass, CodeStub* stub) {
+void LIR_List::allocate_array(LIR_Opr dst, LIR_Opr len, LIR_Opr t1,LIR_Opr t2, LIR_Opr t3,LIR_Opr t4, BasicType type, LIR_Opr klass, CodeStub* stub, bool zero_array) {
append(new LIR_OpAllocArray(
klass,
len,
@@ -1375,7 +1384,8 @@ void LIR_List::allocate_array(LIR_Opr dst, LIR_Opr len, LIR_Opr t1,LIR_Opr t2, L
t3,
t4,
type,
- stub));
+ stub,
+ zero_array));
}
void LIR_List::shift_left(LIR_Opr value, LIR_Opr count, LIR_Opr dst, LIR_Opr tmp) {
diff --git a/src/hotspot/share/c1/c1_LIR.hpp b/src/hotspot/share/c1/c1_LIR.hpp
index 6f527135fbe1c..c69d29f8d619d 100644
--- a/src/hotspot/share/c1/c1_LIR.hpp
+++ b/src/hotspot/share/c1/c1_LIR.hpp
@@ -1750,9 +1750,10 @@ class LIR_OpAllocArray : public LIR_Op {
LIR_Opr _tmp4;
BasicType _type;
CodeStub* _stub;
+ bool _zero_array;
public:
- LIR_OpAllocArray(LIR_Opr klass, LIR_Opr len, LIR_Opr result, LIR_Opr t1, LIR_Opr t2, LIR_Opr t3, LIR_Opr t4, BasicType type, CodeStub* stub)
+ LIR_OpAllocArray(LIR_Opr klass, LIR_Opr len, LIR_Opr result, LIR_Opr t1, LIR_Opr t2, LIR_Opr t3, LIR_Opr t4, BasicType type, CodeStub* stub, bool zero_array)
: LIR_Op(lir_alloc_array, result, nullptr)
, _klass(klass)
, _len(len)
@@ -1761,7 +1762,8 @@ class LIR_OpAllocArray : public LIR_Op {
, _tmp3(t3)
, _tmp4(t4)
, _type(type)
- , _stub(stub) {}
+ , _stub(stub)
+ , _zero_array(zero_array) {}
LIR_Opr klass() const { return _klass; }
LIR_Opr len() const { return _len; }
@@ -1772,6 +1774,7 @@ class LIR_OpAllocArray : public LIR_Op {
LIR_Opr tmp4() const { return _tmp4; }
BasicType type() const { return _type; }
CodeStub* stub() const { return _stub; }
+ bool zero_array() const { return _zero_array; }
virtual void emit_code(LIR_Assembler* masm);
virtual LIR_OpAllocArray * as_OpAllocArray () { return this; }
@@ -2302,7 +2305,7 @@ class LIR_List: public CompilationResourceObj {
void irem(LIR_Opr left, int right, LIR_Opr res, LIR_Opr tmp, CodeEmitInfo* info);
void allocate_object(LIR_Opr dst, LIR_Opr t1, LIR_Opr t2, LIR_Opr t3, LIR_Opr t4, int header_size, int object_size, LIR_Opr klass, bool init_check, CodeStub* stub);
- void allocate_array(LIR_Opr dst, LIR_Opr len, LIR_Opr t1,LIR_Opr t2, LIR_Opr t3,LIR_Opr t4, BasicType type, LIR_Opr klass, CodeStub* stub);
+ void allocate_array(LIR_Opr dst, LIR_Opr len, LIR_Opr t1,LIR_Opr t2, LIR_Opr t3,LIR_Opr t4, BasicType type, LIR_Opr klass, CodeStub* stub, bool zero_array = true);
// jump is an unconditional branch
void jump(BlockBegin* block) {
diff --git a/src/hotspot/share/c1/c1_ValueStack.cpp b/src/hotspot/share/c1/c1_ValueStack.cpp
index e7bf714a3674d..41424e36d0784 100644
--- a/src/hotspot/share/c1/c1_ValueStack.cpp
+++ b/src/hotspot/share/c1/c1_ValueStack.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1999, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1999, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -38,6 +38,7 @@ ValueStack::ValueStack(IRScope* scope, ValueStack* caller_state)
, _locals(scope->method()->max_locals(), scope->method()->max_locals(), nullptr)
, _stack(scope->method()->max_stack())
, _locks(nullptr)
+, _force_reexecute(false)
{
verify();
}
@@ -50,6 +51,7 @@ ValueStack::ValueStack(ValueStack* copy_from, Kind kind, int bci)
, _locals(copy_from->locals_size_for_copy(kind))
, _stack(copy_from->stack_size_for_copy(kind))
, _locks(copy_from->locks_size() == 0 ? nullptr : new Values(copy_from->locks_size()))
+ , _force_reexecute(false)
{
switch (kind) {
case EmptyExceptionState:
diff --git a/src/hotspot/share/c1/c1_ValueStack.hpp b/src/hotspot/share/c1/c1_ValueStack.hpp
index 0a75fa39bf607..bb0c475585c86 100644
--- a/src/hotspot/share/c1/c1_ValueStack.hpp
+++ b/src/hotspot/share/c1/c1_ValueStack.hpp
@@ -58,6 +58,7 @@ class ValueStack: public CompilationResourceObj {
Values _locals; // the locals
Values _stack; // the expression stack
Values* _locks; // the monitor stack (holding the locked values)
+ bool _force_reexecute; // force the reexecute flag on, used for patching stub
Value check(ValueTag tag, Value t) {
assert(tag == t->type()->tag() || (tag == objectTag && t->type()->tag() == addressTag), "types must correspond");
@@ -225,6 +226,9 @@ class ValueStack: public CompilationResourceObj {
void setup_phi_for_stack(BlockBegin* b, int index);
void setup_phi_for_local(BlockBegin* b, int index);
+ bool force_reexecute() const { return _force_reexecute; }
+ void set_force_reexecute() { _force_reexecute = true; }
+
// debugging
void print() PRODUCT_RETURN;
void verify() PRODUCT_RETURN;
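
The force_reexecute flag ties into the clone intrinsic above: the intrinsic
captures its state with copy_state_before() and sets the flag so that, if the
allocation path deoptimizes (for example via a patching stub), the interpreter
re-executes the whole bytecode instead of resuming mid-intrinsic, where a
not-yet-zeroed array could otherwise become observable. A sketch of how a
debug-info producer might consult it (the function and its second parameter
are assumptions, not C1 API):

    // Reexecution is required if either the bytecode always demands it or
    // this particular state was flagged by an intrinsic.
    bool should_reexecute(ValueStack* state, bool bytecode_requires_reexecute) {
      return state->force_reexecute() || bytecode_requires_reexecute;
    }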
diff --git a/src/hotspot/share/cds/archiveBuilder.cpp b/src/hotspot/share/cds/archiveBuilder.cpp
index b8004f741ba63..a98fd04ba6862 100644
--- a/src/hotspot/share/cds/archiveBuilder.cpp
+++ b/src/hotspot/share/cds/archiveBuilder.cpp
@@ -146,7 +146,7 @@ void ArchiveBuilder::SourceObjList::relocate(int i, ArchiveBuilder* builder) {
}
ArchiveBuilder::ArchiveBuilder() :
- _current_dump_space(nullptr),
+ _current_dump_region(nullptr),
_buffer_bottom(nullptr),
_last_verified_top(nullptr),
_num_dump_regions_used(0),
@@ -326,7 +326,7 @@ size_t ArchiveBuilder::estimate_archive_size() {
address ArchiveBuilder::reserve_buffer() {
size_t buffer_size = estimate_archive_size();
- ReservedSpace rs(buffer_size, MetaspaceShared::core_region_alignment(), os::vm_page_size(), mtClassShared);
+ ReservedSpace rs(buffer_size, MetaspaceShared::core_region_alignment(), os::vm_page_size());
if (!rs.is_reserved()) {
log_error(cds)("Failed to reserve " SIZE_FORMAT " bytes of output buffer.", buffer_size);
MetaspaceShared::unrecoverable_writing_error();
@@ -341,10 +341,10 @@ address ArchiveBuilder::reserve_buffer() {
_buffer_bottom = buffer_bottom;
_last_verified_top = buffer_bottom;
- _current_dump_space = &_rw_region;
+ _current_dump_region = &_rw_region;
_num_dump_regions_used = 1;
_other_region_used_bytes = 0;
- _current_dump_space->init(&_shared_rs, &_shared_vs);
+ _current_dump_region->init(&_shared_rs, &_shared_vs);
ArchivePtrMarker::initialize(&_ptrmap, &_shared_vs);
@@ -560,21 +560,21 @@ ArchiveBuilder::FollowMode ArchiveBuilder::get_follow_mode(MetaspaceClosure::Ref
}
}
-void ArchiveBuilder::start_dump_space(DumpRegion* next) {
+void ArchiveBuilder::start_dump_region(DumpRegion* next) {
address bottom = _last_verified_top;
- address top = (address)(current_dump_space()->top());
+ address top = (address)(current_dump_region()->top());
_other_region_used_bytes += size_t(top - bottom);
- current_dump_space()->pack(next);
- _current_dump_space = next;
+ current_dump_region()->pack(next);
+ _current_dump_region = next;
_num_dump_regions_used ++;
- _last_verified_top = (address)(current_dump_space()->top());
+ _last_verified_top = (address)(current_dump_region()->top());
}
void ArchiveBuilder::verify_estimate_size(size_t estimate, const char* which) {
address bottom = _last_verified_top;
- address top = (address)(current_dump_space()->top());
+ address top = (address)(current_dump_region()->top());
size_t used = size_t(top - bottom) + _other_region_used_bytes;
int diff = int(estimate) - int(used);
@@ -630,7 +630,7 @@ void ArchiveBuilder::dump_ro_metadata() {
ResourceMark rm;
log_info(cds)("Allocating RO objects ... ");
- start_dump_space(&_ro_region);
+ start_dump_region(&_ro_region);
make_shallow_copies(&_ro_region, &_ro_src_objs);
#if INCLUDE_CDS_JAVA_HEAP
diff --git a/src/hotspot/share/cds/archiveBuilder.hpp b/src/hotspot/share/cds/archiveBuilder.hpp
index dab369265b0b4..cbde5a7e02cbc 100644
--- a/src/hotspot/share/cds/archiveBuilder.hpp
+++ b/src/hotspot/share/cds/archiveBuilder.hpp
@@ -91,7 +91,7 @@ const int SharedSpaceObjectAlignment = KlassAlignmentInBytes;
//
class ArchiveBuilder : public StackObj {
protected:
- DumpRegion* _current_dump_space;
+ DumpRegion* _current_dump_region;
address _buffer_bottom; // for writing the contents of rw/ro regions
address _last_verified_top;
int _num_dump_regions_used;
@@ -114,7 +114,7 @@ class ArchiveBuilder : public StackObj {
intx _buffer_to_requested_delta;
- DumpRegion* current_dump_space() const { return _current_dump_space; }
+ DumpRegion* current_dump_region() const { return _current_dump_region; }
public:
enum FollowMode {
@@ -278,17 +278,17 @@ class ArchiveBuilder : public StackObj {
size_t estimate_archive_size();
- void start_dump_space(DumpRegion* next);
+ void start_dump_region(DumpRegion* next);
void verify_estimate_size(size_t estimate, const char* which);
public:
address reserve_buffer();
- address buffer_bottom() const { return _buffer_bottom; }
- address buffer_top() const { return (address)current_dump_space()->top(); }
- address requested_static_archive_bottom() const { return _requested_static_archive_bottom; }
- address mapped_static_archive_bottom() const { return _mapped_static_archive_bottom; }
- intx buffer_to_requested_delta() const { return _buffer_to_requested_delta; }
+ address buffer_bottom() const { return _buffer_bottom; }
+ address buffer_top() const { return (address)current_dump_region()->top(); }
+ address requested_static_archive_bottom() const { return _requested_static_archive_bottom; }
+ address mapped_static_archive_bottom() const { return _mapped_static_archive_bottom; }
+ intx buffer_to_requested_delta() const { return _buffer_to_requested_delta; }
bool is_in_buffer_space(address p) const {
return (buffer_bottom() <= p && p < buffer_top());
diff --git a/src/hotspot/share/cds/archiveHeapWriter.cpp b/src/hotspot/share/cds/archiveHeapWriter.cpp
index a86996d3b1fd8..bf49805658c09 100644
--- a/src/hotspot/share/cds/archiveHeapWriter.cpp
+++ b/src/hotspot/share/cds/archiveHeapWriter.cpp
@@ -67,8 +67,10 @@ ArchiveHeapWriter::BufferOffsetToSourceObjectTable*
ArchiveHeapWriter::_buffer_offset_to_source_obj_table = nullptr;
-typedef ResourceHashtable<address, size_t> FillersTable;
+// Map from the offset within the buffer (rather than the buffered address)
+// to the size of the filler array at that offset
+typedef ResourceHashtable<size_t, size_t> FillersTable;
static FillersTable* _fillers;
@@ -87,7 +89,7 @@ void ArchiveHeapWriter::init() {
_source_objs = new GrowableArrayCHeap<oop, mtClassShared>(10000);
guarantee(UseG1GC, "implementation limitation");
- guarantee(MIN_GC_REGION_ALIGNMENT <= /*G1*/HeapRegion::min_region_size_in_words() * HeapWordSize, "must be");
+ guarantee(MIN_GC_REGION_ALIGNMENT <= G1HeapRegion::min_region_size_in_words() * HeapWordSize, "must be");
}
}
@@ -361,12 +363,12 @@ void ArchiveHeapWriter::maybe_fill_gc_region_gap(size_t required_byte_size) {
array_length, fill_bytes, _buffer_used);
HeapWord* filler = init_filler_array_at_buffer_top(array_length, fill_bytes);
_buffer_used = filler_end;
- _fillers->put((address)filler, fill_bytes);
+ _fillers->put(buffered_address_to_offset((address)filler), fill_bytes);
}
}
size_t ArchiveHeapWriter::get_filler_size_at(address buffered_addr) {
- size_t* p = _fillers->get(buffered_addr);
+ size_t* p = _fillers->get(buffered_address_to_offset(buffered_addr));
if (p != nullptr) {
assert(*p > 0, "filler must be larger than zero bytes");
return *p;
@@ -437,7 +439,7 @@ void ArchiveHeapWriter::set_requested_address(ArchiveHeapInfo* info) {
if (UseCompressedOops) {
- _requested_bottom = align_down(heap_end - heap_region_byte_size, HeapRegion::GrainBytes);
+ _requested_bottom = align_down(heap_end - heap_region_byte_size, G1HeapRegion::GrainBytes);
} else {
// We always write the objects as if the heap started at this address. This
// makes the contents of the archive heap deterministic.
@@ -447,7 +449,7 @@ void ArchiveHeapWriter::set_requested_address(ArchiveHeapInfo* info) {
_requested_bottom = (address)NOCOOPS_REQUESTED_BASE;
}
- assert(is_aligned(_requested_bottom, HeapRegion::GrainBytes), "sanity");
+ assert(is_aligned(_requested_bottom, G1HeapRegion::GrainBytes), "sanity");
_requested_top = _requested_bottom + _buffer_used;
@@ -530,10 +532,8 @@ void ArchiveHeapWriter::update_header_for_requested_obj(oop requested_obj, oop s
fake_oop->set_narrow_klass(nk);
// We need to retain the identity_hash, because it may have been used by some hashtables
- // in the shared heap. This also has the side effect of pre-initializing the
- // identity_hash for all shared objects, so they are less likely to be written
- // into during run time, increasing the potential of memory sharing.
- if (src_obj != nullptr) {
+ // in the shared heap.
+ if (src_obj != nullptr && !src_obj->fast_no_hash_check()) {
intptr_t src_hash = src_obj->identity_hash();
fake_oop->set_mark(markWord::prototype().copy_set_hash(src_hash));
assert(fake_oop->mark().is_unlocked(), "sanity");
diff --git a/src/hotspot/share/cds/archiveUtils.cpp b/src/hotspot/share/cds/archiveUtils.cpp
index 5ba36960c55b1..8fd20e2026700 100644
--- a/src/hotspot/share/cds/archiveUtils.cpp
+++ b/src/hotspot/share/cds/archiveUtils.cpp
@@ -310,18 +310,11 @@ void WriteClosure::do_ptr(void** p) {
if (ptr != nullptr && !ArchiveBuilder::current()->is_in_buffer_space(ptr)) {
ptr = ArchiveBuilder::current()->get_buffered_addr(ptr);
}
- _dump_region->append_intptr_t((intptr_t)ptr, true);
-}
-
-void WriteClosure::do_region(u_char* start, size_t size) {
- assert((intptr_t)start % sizeof(intptr_t) == 0, "bad alignment");
- assert(size % sizeof(intptr_t) == 0, "bad size");
- do_tag((int)size);
- while (size > 0) {
- do_ptr((void**)start);
- start += sizeof(intptr_t);
- size -= sizeof(intptr_t);
+ // null pointers do not need to be converted to offsets
+ if (ptr != nullptr) {
+ ptr = (address)ArchiveBuilder::current()->buffer_to_offset(ptr);
}
+ _dump_region->append_intptr_t((intptr_t)ptr, false);
}
void ReadClosure::do_ptr(void** p) {
@@ -329,7 +322,7 @@ void ReadClosure::do_ptr(void** p) {
intptr_t obj = nextPtr();
assert((intptr_t)obj >= 0 || (intptr_t)obj < -100,
"hit tag while initializing ptrs.");
- *p = (void*)obj;
+ *p = (void*)obj != nullptr ? (void*)(SharedBaseAddress + obj) : (void*)obj;
}
void ReadClosure::do_u4(u4* p) {
@@ -355,17 +348,6 @@ void ReadClosure::do_tag(int tag) {
FileMapInfo::assert_mark(tag == old_tag);
}
-void ReadClosure::do_region(u_char* start, size_t size) {
- assert((intptr_t)start % sizeof(intptr_t) == 0, "bad alignment");
- assert(size % sizeof(intptr_t) == 0, "bad size");
- do_tag((int)size);
- while (size > 0) {
- *(intptr_t*)start = nextPtr();
- start += sizeof(intptr_t);
- size -= sizeof(intptr_t);
- }
-}
-
void ArchiveUtils::log_to_classlist(BootstrapInfo* bootstrap_specifier, TRAPS) {
if (ClassListWriter::is_enabled()) {
if (SystemDictionaryShared::is_supported_invokedynamic(bootstrap_specifier)) {
diff --git a/src/hotspot/share/cds/archiveUtils.hpp b/src/hotspot/share/cds/archiveUtils.hpp
index efe5a468b93b4..32cef97886f06 100644
--- a/src/hotspot/share/cds/archiveUtils.hpp
+++ b/src/hotspot/share/cds/archiveUtils.hpp
@@ -215,7 +215,10 @@ class WriteClosure : public SerializeClosure {
_dump_region->append_intptr_t((intptr_t)tag);
}
- void do_region(u_char* start, size_t size);
+ char* region_top() {
+ return _dump_region->top();
+ }
+
bool reading() const { return false; }
};
@@ -238,8 +241,8 @@ class ReadClosure : public SerializeClosure {
void do_int(int* p);
void do_bool(bool *p);
void do_tag(int tag);
- void do_region(u_char* start, size_t size);
bool reading() const { return true; }
+ char* region_top() { return nullptr; }
};
class ArchiveUtils {
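
The do_ptr changes above switch serialized pointers from raw addresses to
offsets relative to the archive base, so the archive stays valid wherever it
gets mapped at run time. A stripped-down model of the round trip (simplified;
the real code also funnels tags through the same stream and first checks that
the pointer lies in the buffer space). Note that nullptr and offset 0
deliberately share an encoding, matching the "null pointers do not need to be
converted" comment:

    #include <cstdint>

    static uintptr_t SharedBaseAddress;  // runtime mapping base of the archive

    // Dump time: encode a pointer into the archive buffer as an offset.
    intptr_t encode_ptr(void* p, uintptr_t buffer_base) {
      return p == nullptr ? 0 : (intptr_t)((uintptr_t)p - buffer_base);
    }

    // Run time: rebase a stored offset onto wherever the archive was mapped.
    void* decode_ptr(intptr_t stored) {
      return stored == 0 ? nullptr : (void*)(SharedBaseAddress + stored);
    }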
diff --git a/src/hotspot/share/cds/classListParser.cpp b/src/hotspot/share/cds/classListParser.cpp
index b17da725deb3f..4455dd7db1b02 100644
--- a/src/hotspot/share/cds/classListParser.cpp
+++ b/src/hotspot/share/cds/classListParser.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -50,33 +50,27 @@
#include "runtime/javaCalls.hpp"
#include "utilities/defaultStream.hpp"
#include "utilities/macros.hpp"
+#include "utilities/utf8.hpp"
volatile Thread* ClassListParser::_parsing_thread = nullptr;
ClassListParser* ClassListParser::_instance = nullptr;
-ClassListParser::ClassListParser(const char* file, ParseMode parse_mode) : _id2klass_table(INITIAL_TABLE_SIZE, MAX_TABLE_SIZE) {
+ClassListParser::ClassListParser(const char* file, ParseMode parse_mode) :
+ _classlist_file(file),
+ _id2klass_table(INITIAL_TABLE_SIZE, MAX_TABLE_SIZE),
+ _file_input(do_open(file), /* need_close=*/true),
+ _input_stream(&_file_input),
+ _parse_mode(parse_mode) {
log_info(cds)("Parsing %s%s", file,
- (parse_mode == _parse_lambda_forms_invokers_only) ? " (lambda form invokers only)" : "");
- _classlist_file = file;
- _file = nullptr;
- // Use os::open() because neither fopen() nor os::fopen()
- // can handle long path name on Windows.
- int fd = os::open(file, O_RDONLY, S_IREAD);
- if (fd != -1) {
- // Obtain a File* from the file descriptor so that fgets()
- // can be used in parse_one_line()
- _file = os::fdopen(fd, "r");
- }
- if (_file == nullptr) {
+ parse_lambda_forms_invokers_only() ? " (lambda form invokers only)" : "");
+ if (!_file_input.is_open()) {
char errmsg[JVM_MAXPATHLEN];
os::lasterror(errmsg, JVM_MAXPATHLEN);
vm_exit_during_initialization("Loading classlist failed", errmsg);
}
- _line_no = 0;
- _token = _line;
+ _token = _line = nullptr;
  _interfaces = new (mtClass) GrowableArray<int>(10, mtClass);
  _indy_items = new (mtClass) GrowableArray<const char*>(9, mtClass);
- _parse_mode = parse_mode;
// _instance should only be accessed by the thread that created _instance.
assert(_instance == nullptr, "must be singleton");
@@ -84,142 +78,132 @@ ClassListParser::ClassListParser(const char* file, ParseMode parse_mode) : _id2k
Atomic::store(&_parsing_thread, Thread::current());
}
+FILE* ClassListParser::do_open(const char* file) {
+ // Use os::open() because neither fopen() nor os::fopen()
+ // can handle long path name on Windows. (See JDK-8216184)
+ int fd = os::open(file, O_RDONLY, S_IREAD);
+ FILE* fp = nullptr;
+ if (fd != -1) {
+ // Obtain a FILE* from the file descriptor so that _input_stream
+ // can be used in ClassListParser::parse()
+ fp = os::fdopen(fd, "r");
+ }
+ return fp;
+}
+
bool ClassListParser::is_parsing_thread() {
return Atomic::load(&_parsing_thread) == Thread::current();
}
ClassListParser::~ClassListParser() {
- if (_file != nullptr) {
- fclose(_file);
- }
Atomic::store(&_parsing_thread, (Thread*)nullptr);
delete _indy_items;
delete _interfaces;
_instance = nullptr;
}
-int ClassListParser::parse(TRAPS) {
- int class_count = 0;
-
- while (parse_one_line()) {
- if (lambda_form_line()) {
- // The current line is "@lambda-form-invoker ...". It has been recorded in LambdaFormInvokers,
- // and will be processed later.
- continue;
+void ClassListParser::parse(TRAPS) {
+ for (; !_input_stream.done(); _input_stream.next()) {
+ _line = _input_stream.current_line();
+ clean_up_input_line();
+
+ // Each line in the classlist can be one of three forms:
+ if (_line[0] == '#') {
+ // A comment; ignore it
+ } else if (_line[0] == '@') {
+ // @xxx - a tag like @lambda-proxy, to be parsed by parse_at_tags()
+ parse_at_tags(CHECK);
+ } else {
+ // A class name, followed by optional attributes. E.g.
+ // java/lang/String
+ // java/lang/Object id: 1
+ // my/pkg/TestClass id: 5 super: 1 interfaces: 3 4 source: foo.jar
+ parse_class_name_and_attributes(CHECK);
}
+ }
+}
- if (_parse_mode == _parse_lambda_forms_invokers_only) {
- continue;
- }
+void ClassListParser::parse_class_name_and_attributes(TRAPS) {
+ read_class_name_and_attributes();
- TempNewSymbol class_name_symbol = SymbolTable::new_symbol(_class_name);
- if (_indy_items->length() > 0) {
- // The current line is "@lambda-proxy class_name". Load the proxy class.
- resolve_indy(THREAD, class_name_symbol);
- class_count++;
- continue;
- }
-
- Klass* klass = load_current_class(class_name_symbol, THREAD);
- if (HAS_PENDING_EXCEPTION) {
- if (PENDING_EXCEPTION->is_a(vmClasses::OutOfMemoryError_klass())) {
- // If we have run out of memory, don't try to load the rest of the classes in
- // the classlist. Throw an exception, which will terminate the dumping process.
- return 0; // THROW
- }
+ if (parse_lambda_forms_invokers_only()) {
+ return;
+ }
- ResourceMark rm(THREAD);
- char* ex_msg = (char*)"";
- oop message = java_lang_Throwable::message(PENDING_EXCEPTION);
- if (message != nullptr) {
- ex_msg = java_lang_String::as_utf8_string(message);
- }
- log_warning(cds)("%s: %s", PENDING_EXCEPTION->klass()->external_name(), ex_msg);
- // We might have an invalid class name or an bad class. Warn about it
- // and keep going to the next line.
- CLEAR_PENDING_EXCEPTION;
- log_warning(cds)("Preload Warning: Cannot find %s", _class_name);
- continue;
+ check_class_name(_class_name);
+ TempNewSymbol class_name_symbol = SymbolTable::new_symbol(_class_name);
+ Klass* klass = load_current_class(class_name_symbol, THREAD);
+ if (HAS_PENDING_EXCEPTION) {
+ if (PENDING_EXCEPTION->is_a(vmClasses::OutOfMemoryError_klass())) {
+ // If we have run out of memory, don't try to load the rest of the classes in
+ // the classlist. Throw an exception, which will terminate the dumping process.
+ return; // THROW
}
- assert(klass != nullptr, "sanity");
- if (log_is_enabled(Trace, cds)) {
- ResourceMark rm(THREAD);
- log_trace(cds)("Shared spaces preloaded: %s", klass->external_name());
+ ResourceMark rm(THREAD);
+ char* ex_msg = (char*)"";
+ oop message = java_lang_Throwable::message(PENDING_EXCEPTION);
+ if (message != nullptr) {
+ ex_msg = java_lang_String::as_utf8_string(message);
}
+ log_warning(cds)("%s: %s", PENDING_EXCEPTION->klass()->external_name(), ex_msg);
+  // We might have an invalid class name or a bad class. Warn about it
+ // and keep going to the next line.
+ CLEAR_PENDING_EXCEPTION;
+ log_warning(cds)("Preload Warning: Cannot find %s", _class_name);
+ return;
+ }
- if (klass->is_instance_klass()) {
- InstanceKlass* ik = InstanceKlass::cast(klass);
+ assert(klass != nullptr, "sanity");
+ if (log_is_enabled(Trace, cds)) {
+ ResourceMark rm(THREAD);
+ log_trace(cds)("Shared spaces preloaded: %s", klass->external_name());
+ }
- // Link the class to cause the bytecodes to be rewritten and the
- // cpcache to be created. The linking is done as soon as classes
- // are loaded in order that the related data structures (klass and
- // cpCache) are located together.
- MetaspaceShared::try_link_class(THREAD, ik);
- }
+ if (klass->is_instance_klass()) {
+ InstanceKlass* ik = InstanceKlass::cast(klass);
- class_count++;
+ // Link the class to cause the bytecodes to be rewritten and the
+ // cpcache to be created. The linking is done as soon as classes
+ // are loaded in order that the related data structures (klass and
+ // cpCache) are located together.
+ MetaspaceShared::try_link_class(THREAD, ik);
}
-
- return class_count;
}
-bool ClassListParser::parse_one_line() {
- for (;;) {
- if (fgets(_line, sizeof(_line), _file) == nullptr) {
- return false;
- }
- ++ _line_no;
- _line_len = (int)strlen(_line);
- if (_line_len > _max_allowed_line_len) {
- error("input line too long (must be no longer than %d chars)", _max_allowed_line_len);
- }
- if (*_line == '#') { // comment
- continue;
+void ClassListParser::clean_up_input_line() {
+ int len = (int)strlen(_line);
+ int i;
+ // Replace \t\r\n\f with ' '
+  for (i=0; i<len; i++) {
+    if (_line[i] == '\t' || _line[i] == '\r' || _line[i] == '\n' || _line[i] == '\f') {
+      _line[i] = ' ';
+    }
+  }
+
-    // Remove trailing newline/space
-    while (len > 0) {
- if (_line[len-1] == ' ') {
- _line[len-1] = '\0';
- len --;
- } else {
- break;
- }
- }
- _line_len = len;
+ // Remove trailing newline/space
+ while (len > 0) {
+ if (_line[len-1] == ' ') {
+ _line[len-1] = '\0';
+ len --;
+ } else {
+ break;
}
-
- // valid line
- break;
}
+ _line_len = len;
+}
+void ClassListParser::read_class_name_and_attributes() {
_class_name = _line;
_id = _unspecified;
_super = _unspecified;
_interfaces->clear();
_source = nullptr;
_interfaces_specified = false;
- _indy_items->clear();
- _lambda_form_line = false;
-
- if (_line[0] == '@') {
- return parse_at_tags();
- }
if ((_token = strchr(_line, ' ')) == nullptr) {
- // No optional arguments are specified.
- return true;
+ // No optional attributes are specified.
+ return;
}
// Mark the end of the name, and go to the next input char
@@ -261,10 +245,9 @@ bool ClassListParser::parse_one_line() {
// # the class is loaded from classpath
// id may be specified
// super, interfaces, loader must not be specified
- return true;
}
-void ClassListParser::split_tokens_by_whitespace(int offset) {
+void ClassListParser::split_tokens_by_whitespace(int offset, GrowableArray<const char*>* items) {
int start = offset;
int end;
bool done = false;
@@ -277,7 +260,7 @@ void ClassListParser::split_tokens_by_whitespace(int offset) {
} else {
_line[end] = '\0';
}
- _indy_items->append(_line + start);
+ items->append(_line + start);
start = ++end;
}
}
@@ -286,7 +269,7 @@ int ClassListParser::split_at_tag_from_line() {
_token = _line;
char* ptr;
if ((ptr = strchr(_line, ' ')) == nullptr) {
- error("Too few items following the @ tag \"%s\" line #%d", _line, _line_no);
+ error("Too few items following the @ tag \"%s\" line #%zu", _line, lineno());
return 0;
}
*ptr++ = '\0';
@@ -294,29 +277,30 @@ int ClassListParser::split_at_tag_from_line() {
return (int)(ptr - _line);
}
-bool ClassListParser::parse_at_tags() {
+void ClassListParser::parse_at_tags(TRAPS) {
assert(_line[0] == '@', "must be");
- int offset;
- if ((offset = split_at_tag_from_line()) == 0) {
- return false;
- }
+ int offset = split_at_tag_from_line();
+ assert(offset > 0, "would have exited VM");
if (strcmp(_token, LAMBDA_PROXY_TAG) == 0) {
- split_tokens_by_whitespace(offset);
+ _indy_items->clear();
+ split_tokens_by_whitespace(offset, _indy_items);
if (_indy_items->length() < 2) {
- error("Line with @ tag has too few items \"%s\" line #%d", _token, _line_no);
- return false;
+ error("Line with @ tag has too few items \"%s\" line #%zu", _token, lineno());
+ }
+ if (!parse_lambda_forms_invokers_only()) {
+ _class_name = _indy_items->at(0);
+ check_class_name(_class_name);
+ TempNewSymbol class_name_symbol = SymbolTable::new_symbol(_class_name);
+ if (_indy_items->length() > 0) {
+ // The current line is "@lambda-proxy class_name". Load the proxy class.
+ resolve_indy(THREAD, class_name_symbol);
+ }
}
- // set the class name
- _class_name = _indy_items->at(0);
- return true;
} else if (strcmp(_token, LAMBDA_FORM_TAG) == 0) {
LambdaFormInvokers::append(os::strdup((const char*)(_line + offset), mtInternal));
- _lambda_form_line = true;
- return true;
} else {
- error("Invalid @ tag at the beginning of line \"%s\" line #%d", _token, _line_no);
- return false;
+ error("Invalid @ tag at the beginning of line \"%s\" line #%zu", _token, lineno());
}
}
@@ -423,8 +407,8 @@ void ClassListParser::error(const char* msg, ...) {
}
jio_fprintf(defaultStream::error_stream(),
- "An error has occurred while processing class list file %s %d:%d.\n",
- _classlist_file, _line_no, (error_index + 1));
+ "An error has occurred while processing class list file %s %zu:%d.\n",
+ _classlist_file, lineno(), (error_index + 1));
jio_vfprintf(defaultStream::error_stream(), msg, ap);
if (_line_len <= 0) {
@@ -445,9 +429,28 @@ void ClassListParser::error(const char* msg, ...) {
}
jio_fprintf(defaultStream::error_stream(), "^\n");
}
+ va_end(ap);
vm_exit_during_initialization("class list format error.", nullptr);
- va_end(ap);
+}
+
+void ClassListParser::check_class_name(const char* class_name) {
+ const char* err = nullptr;
+ size_t len = strlen(class_name);
+ if (len > (size_t)Symbol::max_length()) {
+ err = "class name too long";
+ } else {
+ assert(Symbol::max_length() < INT_MAX && len < INT_MAX, "must be");
+ if (!UTF8::is_legal_utf8((const unsigned char*)class_name, (int)len, /*version_leq_47*/false)) {
+ err = "class name is not valid UTF8";
+ }
+ }
+ if (err != nullptr) {
+ jio_fprintf(defaultStream::error_stream(),
+ "An error has occurred while processing class list file %s:%zu %s\n",
+ _classlist_file, lineno(), err);
+ vm_exit_during_initialization("class list format error.", nullptr);
+ }
}
// This function is used for loading classes for customized class loaders
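For context, the three line forms dispatched on above combine into classlists
like the following illustrative sample (ids and names invented; the fields
after @lambda-proxy are elided):

  # comment lines are ignored
  java/lang/String
  java/lang/Object id: 1
  my/pkg/TestClass id: 5 super: 1 interfaces: 3 4 source: foo.jar
  @lambda-proxy my/pkg/TestClass ...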
diff --git a/src/hotspot/share/cds/classListParser.hpp b/src/hotspot/share/cds/classListParser.hpp
index 74a2ff10515f8..50ede4f6dff47 100644
--- a/src/hotspot/share/cds/classListParser.hpp
+++ b/src/hotspot/share/cds/classListParser.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2015, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2015, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -28,6 +28,7 @@
#include "utilities/exceptions.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/growableArray.hpp"
+#include "utilities/istream.hpp"
#include "utilities/resizeableResourceHash.hpp"
#define LAMBDA_PROXY_TAG "@lambda-proxy"
@@ -80,14 +81,6 @@ class ClassListParser : public StackObj {
enum {
_unspecified = -999,
-
- // Max number of bytes allowed per line in the classlist.
- // Theoretically Java class names could be 65535 bytes in length. Also, an input line
- // could have a very long path name up to JVM_MAXPATHLEN bytes in length. In reality,
- // 4K bytes is more than enough.
- _max_allowed_line_len = 4096,
- _line_buf_extra = 10, // for detecting input too long
- _line_buf_size = _max_allowed_line_len + _line_buf_extra
};
// Use a small initial size in debug build to test resizing logic
@@ -96,16 +89,14 @@ class ClassListParser : public StackObj {
static volatile Thread* _parsing_thread; // the thread that created _instance
static ClassListParser* _instance; // the singleton.
const char* _classlist_file;
- FILE* _file;
ID2KlassTable _id2klass_table;
- // The following field contains information from the *current* line being
- // parsed.
- char _line[_line_buf_size]; // The buffer that holds the current line. Some characters in
+ FileInput _file_input;
+ inputStream _input_stream;
+ char* _line; // The buffer that holds the current line. Some characters in
// the buffer may be overwritten by '\0' during parsing.
int _line_len; // Original length of the input line.
- int _line_no; // Line number for current line being parsed
const char* _class_name;
GrowableArray<const char*>* _indy_items; // items related to invoke dynamic for archiving lambda proxy classes
int _id;
@@ -113,7 +104,6 @@ class ClassListParser : public StackObj {
GrowableArray<int>* _interfaces;
bool _interfaces_specified;
const char* _source;
- bool _lambda_form_line;
ParseMode _parse_mode;
bool parse_int_option(const char* option_name, int* value);
@@ -129,16 +119,20 @@ class ClassListParser : public StackObj {
void resolve_indy(JavaThread* current, Symbol* class_name_symbol);
void resolve_indy_impl(Symbol* class_name_symbol, TRAPS);
- bool parse_one_line();
+ void clean_up_input_line();
+ void read_class_name_and_attributes();
+ void parse_class_name_and_attributes(TRAPS);
Klass* load_current_class(Symbol* class_name_symbol, TRAPS);
+ size_t lineno() { return _input_stream.lineno(); }
+ FILE* do_open(const char* file);
ClassListParser(const char* file, ParseMode _parse_mode);
~ClassListParser();
public:
- static int parse_classlist(const char* classlist_path, ParseMode parse_mode, TRAPS) {
+ static void parse_classlist(const char* classlist_path, ParseMode parse_mode, TRAPS) {
ClassListParser parser(classlist_path, parse_mode);
- return parser.parse(THREAD); // returns the number of classes loaded.
+ parser.parse(THREAD);
}
static bool is_parsing_thread();
@@ -148,10 +142,10 @@ class ClassListParser : public StackObj {
return _instance;
}
- int parse(TRAPS);
- void split_tokens_by_whitespace(int offset);
+ void parse(TRAPS);
+  void split_tokens_by_whitespace(int offset, GrowableArray<const char*>* items);
int split_at_tag_from_line();
- bool parse_at_tags();
+ void parse_at_tags(TRAPS);
char* _token;
void error(const char* msg, ...);
void parse_int(int* value);
@@ -161,6 +155,9 @@ class ClassListParser : public StackObj {
void skip_whitespaces();
void skip_non_whitespaces();
+ bool parse_lambda_forms_invokers_only() {
+ return _parse_mode == _parse_lambda_forms_invokers_only;
+ }
bool is_id_specified() {
return _id != _unspecified;
}
@@ -183,6 +180,7 @@ class ClassListParser : public StackObj {
error("%s id %d is not yet loaded", which, id);
}
}
+ void check_class_name(const char* class_name);
const char* current_class_name() {
return _class_name;
@@ -190,8 +188,6 @@ class ClassListParser : public StackObj {
bool is_loading_from_source();
- bool lambda_form_line() { return _lambda_form_line; }
-
// Look up the super or interface of the current class being loaded
// (in this->load_current_class()).
InstanceKlass* lookup_super_for_current_class(Symbol* super_name);
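The FileInput/inputStream pair replaces the old fixed 4096-byte fgets() buffer,
which is why _max_allowed_line_len and the hand-maintained _line_no could be
deleted. A minimal sketch of how such a stream is typically driven, assuming
FileInput wraps a FILE* and inputStream exposes done()/next()/current_line()
(only lineno() is confirmed by this patch):

  static void for_each_line(FILE* f) {
    FileInput file_input(f);                     // assumed constructor
    inputStream input_stream(&file_input);
    for (; !input_stream.done(); input_stream.next()) {
      char* line = input_stream.current_line();  // mutable, no fixed length cap
      // ... clean up and parse 'line'; report errors with lineno() ...
    }
  }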
diff --git a/src/hotspot/share/cds/cppVtables.cpp b/src/hotspot/share/cds/cppVtables.cpp
index c339ce9c0deaa..f17d94a82fdb5 100644
--- a/src/hotspot/share/cds/cppVtables.cpp
+++ b/src/hotspot/share/cds/cppVtables.cpp
@@ -213,23 +213,30 @@ void CppVtableCloner<T>::init_orig_cpp_vtptr(int kind) {
// the following holds true:
// _index[ConstantPool_Kind]->cloned_vtable() == ((intptr_t**)cp)[0]
// _index[InstanceKlass_Kind]->cloned_vtable() == ((intptr_t**)ik)[0]
-CppVtableInfo** CppVtables::_index = nullptr;
+static CppVtableInfo* _index[_num_cloned_vtable_kinds];
-char* CppVtables::dumptime_init(ArchiveBuilder* builder) {
+// Vtables are all at fixed offsets from ArchiveBuilder::current()->mapped_base().
+// E.g. ConstantPool is at offset 0x58. We can archive these offsets in the
+// RO region and use them to calculate their location at runtime without storing
+// the pointers in the RW region.
+char* CppVtables::_vtables_serialized_base = nullptr;
+
+void CppVtables::dumptime_init(ArchiveBuilder* builder) {
assert(CDSConfig::is_dumping_static_archive(), "cpp tables are only dumped into static archive");
- size_t vtptrs_bytes = _num_cloned_vtable_kinds * sizeof(CppVtableInfo*);
- _index = (CppVtableInfo**)builder->rw_region()->allocate(vtptrs_bytes);
CPP_VTABLE_TYPES_DO(ALLOCATE_AND_INITIALIZE_VTABLE);
size_t cpp_tables_size = builder->rw_region()->top() - builder->rw_region()->base();
builder->alloc_stats()->record_cpp_vtables((int)cpp_tables_size);
-
- return (char*)_index;
}
void CppVtables::serialize(SerializeClosure* soc) {
- soc->do_ptr(&_index);
+ if (!soc->reading()) {
+ _vtables_serialized_base = (char*)ArchiveBuilder::current()->buffer_top();
+ }
+ for (int i = 0; i < _num_cloned_vtable_kinds; i++) {
+ soc->do_ptr(&_index[i]);
+ }
if (soc->reading()) {
CPP_VTABLE_TYPES_DO(INITIALIZE_VTABLE);
}
diff --git a/src/hotspot/share/cds/cppVtables.hpp b/src/hotspot/share/cds/cppVtables.hpp
index 5318a9de2ba95..973502909ddf4 100644
--- a/src/hotspot/share/cds/cppVtables.hpp
+++ b/src/hotspot/share/cds/cppVtables.hpp
@@ -36,13 +36,14 @@ class CppVtableInfo;
// Support for C++ vtables in CDS archive.
class CppVtables : AllStatic {
- static CppVtableInfo** _index;
+ static char* _vtables_serialized_base;
public:
- static char* dumptime_init(ArchiveBuilder* builder);
+ static void dumptime_init(ArchiveBuilder* builder);
static void zero_archived_vtables();
static intptr_t* get_archived_vtable(MetaspaceObj::Type msotype, address obj);
static void serialize(SerializeClosure* sc);
static bool is_valid_shared_method(const Method* m) NOT_CDS_RETURN_(false);
+ static char* vtables_serialized_base() { return _vtables_serialized_base; }
};
#endif // SHARE_CDS_CPPVTABLES_HPP
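The comment added in cppVtables.cpp above describes the scheme: only the
offsets of the cloned vtables relative to the mapped base are archived, so no
vtable pointers need to live in the RW region. A hedged sketch of the
arithmetic (names invented, not the patch's code):

  // Recompute a cloned vtable address from an archived offset. An offset
  // such as 0x58 (ConstantPool) is stored in the RO region; at runtime it
  // is applied to wherever the archive actually mapped.
  static intptr_t* vtable_from_archived_offset(char* runtime_mapped_base,
                                               size_t archived_offset) {
    return (intptr_t*)(runtime_mapped_base + archived_offset);
  }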
diff --git a/src/hotspot/share/cds/dynamicArchive.cpp b/src/hotspot/share/cds/dynamicArchive.cpp
index cd5dd88b09939..f255b337d148f 100644
--- a/src/hotspot/share/cds/dynamicArchive.cpp
+++ b/src/hotspot/share/cds/dynamicArchive.cpp
@@ -137,7 +137,7 @@ class DynamicArchiveBuilder : public ArchiveBuilder {
// Note that these tables still point to the *original* objects, so
// they would need to call DynamicArchive::original_to_target() to
// get the correct addresses.
- assert(current_dump_space() == ro_region(), "Must be RO space");
+ assert(current_dump_region() == ro_region(), "Must be RO space");
SymbolTable::write_to_archive(symbols());
ArchiveBuilder::OtherROAllocMark mark;
diff --git a/src/hotspot/share/cds/filemap.cpp b/src/hotspot/share/cds/filemap.cpp
index 22895bd326162..5d56eb2153f62 100644
--- a/src/hotspot/share/cds/filemap.cpp
+++ b/src/hotspot/share/cds/filemap.cpp
@@ -1710,9 +1710,9 @@ void FileMapInfo::close() {
/*
* Same as os::map_memory() but also pretouches if AlwaysPreTouch is enabled.
*/
-static char* map_and_pretouch_memory(int fd, const char* file_name, size_t file_offset,
- char *addr, size_t bytes, bool read_only,
- bool allow_exec, MEMFLAGS flags) {
+static char* map_memory(int fd, const char* file_name, size_t file_offset,
+ char *addr, size_t bytes, bool read_only,
+ bool allow_exec, MEMFLAGS flags = mtNone) {
char* mem = os::map_memory(fd, file_name, file_offset, addr, bytes,
AlwaysPreTouch ? false : read_only,
allow_exec, flags);
@@ -1741,7 +1741,7 @@ bool FileMapInfo::remap_shared_readonly_as_readwrite() {
// Replace old mapping with new one that is writable.
char *base = os::map_memory(_fd, _full_path, r->file_offset(),
addr, size, false /* !read_only */,
- r->allow_exec(), mtClassShared);
+ r->allow_exec());
close();
// These have to be errors because the shared region is now unmapped.
if (base == nullptr) {
@@ -1800,7 +1800,7 @@ bool FileMapInfo::read_region(int i, char* base, size_t size, bool do_commit) {
log_info(cds)("Commit %s region #%d at base " INTPTR_FORMAT " top " INTPTR_FORMAT " (%s)%s",
is_static() ? "static " : "dynamic", i, p2i(base), p2i(base + size),
shared_region_name[i], r->allow_exec() ? " exec" : "");
- if (!os::commit_memory(base, size, r->allow_exec(), mtClassShared)) {
+ if (!os::commit_memory(base, size, r->allow_exec())) {
log_error(cds)("Failed to commit %s region #%d (%s)", is_static() ? "static " : "dynamic",
i, shared_region_name[i]);
return false;
@@ -1860,9 +1860,9 @@ MapArchiveResult FileMapInfo::map_region(int i, intx addr_delta, char* mapped_ba
// Note that this may either be a "fresh" mapping into unreserved address
// space (Windows, first mapping attempt), or a mapping into pre-reserved
// space (Posix). See also comment in MetaspaceShared::map_archives().
- char* base = map_and_pretouch_memory(_fd, _full_path, r->file_offset(),
- requested_addr, size, r->read_only(),
- r->allow_exec(), mtClassShared);
+ char* base = map_memory(_fd, _full_path, r->file_offset(),
+ requested_addr, size, r->read_only(),
+ r->allow_exec(), mtClassShared);
if (base != requested_addr) {
log_info(cds)("Unable to map %s shared space at " INTPTR_FORMAT,
shared_region_name[i], p2i(requested_addr));
@@ -1889,8 +1889,8 @@ char* FileMapInfo::map_bitmap_region() {
}
bool read_only = true, allow_exec = false;
char* requested_addr = nullptr; // allow OS to pick any location
- char* bitmap_base = map_and_pretouch_memory(_fd, _full_path, r->file_offset(),
- requested_addr, r->used_aligned(), read_only, allow_exec, mtClassShared);
+ char* bitmap_base = map_memory(_fd, _full_path, r->file_offset(),
+ requested_addr, r->used_aligned(), read_only, allow_exec, mtClassShared);
if (bitmap_base == nullptr) {
log_info(cds)("failed to map relocation bitmap");
return nullptr;
@@ -2064,8 +2064,8 @@ bool FileMapInfo::can_use_heap_region() {
archive_narrow_klass_shift);
log_info(cds)(" narrow_oop_mode = %d, narrow_oop_base = " PTR_FORMAT ", narrow_oop_shift = %d",
narrow_oop_mode(), p2i(narrow_oop_base()), narrow_oop_shift());
- log_info(cds)("The current max heap size = " SIZE_FORMAT "M, HeapRegion::GrainBytes = " SIZE_FORMAT,
- MaxHeapSize/M, HeapRegion::GrainBytes);
+ log_info(cds)("The current max heap size = " SIZE_FORMAT "M, G1HeapRegion::GrainBytes = " SIZE_FORMAT,
+ MaxHeapSize/M, G1HeapRegion::GrainBytes);
log_info(cds)(" narrow_klass_base = " PTR_FORMAT ", narrow_klass_shift = %d",
p2i(CompressedKlassPointers::base()), CompressedKlassPointers::shift());
log_info(cds)(" narrow_oop_mode = %d, narrow_oop_base = " PTR_FORMAT ", narrow_oop_shift = %d",
@@ -2130,7 +2130,7 @@ bool FileMapInfo::map_heap_region() {
#ifdef ASSERT
// The "old" regions must be parsable -- we cannot have any unused space
// at the start of the lowest G1 region that contains archived objects.
- assert(is_aligned(_mapped_heap_memregion.start(), HeapRegion::GrainBytes), "must be");
+ assert(is_aligned(_mapped_heap_memregion.start(), G1HeapRegion::GrainBytes), "must be");
// Make sure we map at the very top of the heap - see comments in
// init_heap_region_relocation().
@@ -2140,7 +2140,7 @@ bool FileMapInfo::map_heap_region() {
address heap_end = (address)heap_range.end();
address mapped_heap_region_end = (address)_mapped_heap_memregion.end();
assert(heap_end >= mapped_heap_region_end, "must be");
- assert(heap_end - mapped_heap_region_end < (intx)(HeapRegion::GrainBytes),
+ assert(heap_end - mapped_heap_region_end < (intx)(G1HeapRegion::GrainBytes),
"must be at the top of the heap to avoid fragmentation");
#endif
@@ -2176,11 +2176,10 @@ bool FileMapInfo::map_heap_region_impl() {
// Map the archived heap data. No need to call MemTracker::record_virtual_memory_type()
// for mapped region as it is part of the reserved java heap, which is already recorded.
- // So we pass the mtJavaHeap to tell MemTracker the type of the already tracked memory.
char* addr = (char*)_mapped_heap_memregion.start();
- char* base = map_and_pretouch_memory(_fd, _full_path, r->file_offset(),
- addr, _mapped_heap_memregion.byte_size(), r->read_only(),
- r->allow_exec(), mtJavaHeap);
+ char* base = map_memory(_fd, _full_path, r->file_offset(),
+ addr, _mapped_heap_memregion.byte_size(), r->read_only(),
+ r->allow_exec());
if (base == nullptr || base != addr) {
dealloc_heap_region();
log_info(cds)("UseSharedSpaces: Unable to map at required address in java heap. "
diff --git a/src/hotspot/share/cds/heapShared.cpp b/src/hotspot/share/cds/heapShared.cpp
index 202042e622e0b..b9f5d963c85ac 100644
--- a/src/hotspot/share/cds/heapShared.cpp
+++ b/src/hotspot/share/cds/heapShared.cpp
@@ -278,12 +278,6 @@ bool HeapShared::archive_object(oop obj) {
} else {
count_allocation(obj->size());
ArchiveHeapWriter::add_source_obj(obj);
-
- // The archived objects are discovered in a predictable order. Compute
- // their identity_hash() as soon as we see them. This ensures that the
- // the identity_hash in the object header will have a predictable value,
- // making the archive reproducible.
- obj->identity_hash();
CachedOopInfo info = make_cached_oop_info(obj);
archived_object_cache()->put_when_absent(obj, info);
archived_object_cache()->maybe_grow();
diff --git a/src/hotspot/share/cds/metaspaceShared.cpp b/src/hotspot/share/cds/metaspaceShared.cpp
index 3f5560ca22ae0..6213b9848f10f 100644
--- a/src/hotspot/share/cds/metaspaceShared.cpp
+++ b/src/hotspot/share/cds/metaspaceShared.cpp
@@ -225,6 +225,14 @@ static bool shared_base_too_high(char* specified_base, char* aligned_base, size_
static char* compute_shared_base(size_t cds_max) {
char* specified_base = (char*)SharedBaseAddress;
char* aligned_base = align_up(specified_base, MetaspaceShared::core_region_alignment());
+ if (UseCompressedClassPointers) {
+ aligned_base = align_up(specified_base, Metaspace::reserve_alignment());
+ }
+
+ if (aligned_base != specified_base) {
+ log_info(cds)("SharedBaseAddress (" INTPTR_FORMAT ") aligned up to " INTPTR_FORMAT,
+ p2i(specified_base), p2i(aligned_base));
+ }
const char* err = nullptr;
if (shared_base_too_high(specified_base, aligned_base, cds_max)) {
@@ -269,7 +277,7 @@ void MetaspaceShared::initialize_for_static_dump() {
SharedBaseAddress = (size_t)_requested_base_address;
size_t symbol_rs_size = LP64_ONLY(3 * G) NOT_LP64(128 * M);
- _symbol_rs = ReservedSpace(symbol_rs_size, mtClassShared);
+ _symbol_rs = ReservedSpace(symbol_rs_size);
if (!_symbol_rs.is_reserved()) {
log_error(cds)("Unable to reserve memory for symbols: " SIZE_FORMAT " bytes.", symbol_rs_size);
MetaspaceShared::unrecoverable_writing_error();
@@ -511,7 +519,7 @@ void VM_PopulateDumpSharedSpace::doit() {
builder.gather_source_objs();
builder.reserve_buffer();
- char* cloned_vtables = CppVtables::dumptime_init(&builder);
+ CppVtables::dumptime_init(&builder);
builder.sort_metadata_objs();
builder.dump_rw_metadata();
@@ -542,7 +550,7 @@ void VM_PopulateDumpSharedSpace::doit() {
FileMapInfo* mapinfo = new FileMapInfo(static_archive, true);
mapinfo->populate_header(MetaspaceShared::core_region_alignment());
mapinfo->set_serialized_data(serialized_data);
- mapinfo->set_cloned_vtables(cloned_vtables);
+ mapinfo->set_cloned_vtables(CppVtables::vtables_serialized_base());
mapinfo->open_for_write();
builder.write_archive(mapinfo, &_heap_info);
@@ -737,18 +745,18 @@ void MetaspaceShared::preload_classes(TRAPS) {
}
log_info(cds)("Loading classes to share ...");
- int class_count = ClassListParser::parse_classlist(classlist_path,
- ClassListParser::_parse_all, CHECK);
+ ClassListParser::parse_classlist(classlist_path,
+ ClassListParser::_parse_all, CHECK);
if (ExtraSharedClassListFile) {
- class_count += ClassListParser::parse_classlist(ExtraSharedClassListFile,
- ClassListParser::_parse_all, CHECK);
+ ClassListParser::parse_classlist(ExtraSharedClassListFile,
+ ClassListParser::_parse_all, CHECK);
}
if (classlist_path != default_classlist) {
struct stat statbuf;
if (os::stat(default_classlist, &statbuf) == 0) {
// File exists, let's use it.
- class_count += ClassListParser::parse_classlist(default_classlist,
- ClassListParser::_parse_lambda_forms_invokers_only, CHECK);
+ ClassListParser::parse_classlist(default_classlist,
+ ClassListParser::_parse_lambda_forms_invokers_only, CHECK);
}
}
@@ -758,7 +766,6 @@ void MetaspaceShared::preload_classes(TRAPS) {
CDSProtectionDomain::create_jar_manifest(dummy, strlen(dummy), CHECK);
log_info(cds)("Loading classes to share: done.");
- log_info(cds)("Shared spaces: preloaded %d classes", class_count);
}
void MetaspaceShared::preload_and_dump_impl(TRAPS) {
@@ -1260,20 +1267,22 @@ char* MetaspaceShared::reserve_address_space_for_archives(FileMapInfo* static_ma
size_t archive_end_offset = (dynamic_mapinfo == nullptr) ? static_mapinfo->mapping_end_offset() : dynamic_mapinfo->mapping_end_offset();
size_t archive_space_size = align_up(archive_end_offset, archive_space_alignment);
- // If a base address is given, it must have valid alignment and be suitable as encoding base.
- if (base_address != nullptr) {
- assert(is_aligned(base_address, archive_space_alignment),
- "Archive base address invalid: " PTR_FORMAT ".", p2i(base_address));
- }
-
if (!Metaspace::using_class_space()) {
// Get the simple case out of the way first:
// no compressed class space, simple allocation.
+
+ // When running without class space, requested archive base should be aligned to cds core alignment.
+ assert(is_aligned(base_address, archive_space_alignment),
+ "Archive base address unaligned: " PTR_FORMAT ", needs alignment: %zu.",
+ p2i(base_address), archive_space_alignment);
+
archive_space_rs = ReservedSpace(archive_space_size, archive_space_alignment,
- os::vm_page_size(), mtClassShared, (char*)base_address);
+ os::vm_page_size(), (char*)base_address);
if (archive_space_rs.is_reserved()) {
assert(base_address == nullptr ||
(address)archive_space_rs.base() == base_address, "Sanity");
+ // Register archive space with NMT.
+ MemTracker::record_virtual_memory_type(archive_space_rs.base(), mtClassShared);
return archive_space_rs.base();
}
return nullptr;
@@ -1287,12 +1296,12 @@ char* MetaspaceShared::reserve_address_space_for_archives(FileMapInfo* static_ma
const size_t class_space_alignment = Metaspace::reserve_alignment();
- // To simplify matters, lets assume that metaspace alignment will always be
- // equal or a multiple of archive alignment.
- assert(is_power_of_2(class_space_alignment) &&
- is_power_of_2(archive_space_alignment) &&
- class_space_alignment >= archive_space_alignment,
- "Sanity");
+ // When running with class space, requested archive base must satisfy both cds core alignment
+ // and class space alignment.
+ const size_t base_address_alignment = MAX2(class_space_alignment, archive_space_alignment);
+ assert(is_aligned(base_address, base_address_alignment),
+ "Archive base address unaligned: " PTR_FORMAT ", needs alignment: %zu.",
+ p2i(base_address), base_address_alignment);
const size_t class_space_size = CompressedClassSpaceSize;
assert(CompressedClassSpaceSize > 0 &&
@@ -1300,12 +1309,11 @@ char* MetaspaceShared::reserve_address_space_for_archives(FileMapInfo* static_ma
"CompressedClassSpaceSize malformed: "
SIZE_FORMAT, CompressedClassSpaceSize);
- const size_t ccs_begin_offset = align_up(base_address + archive_space_size,
- class_space_alignment) - base_address;
+ const size_t ccs_begin_offset = align_up(archive_space_size, class_space_alignment);
const size_t gap_size = ccs_begin_offset - archive_space_size;
const size_t total_range_size =
- align_up(archive_space_size + gap_size + class_space_size, core_region_alignment());
+ archive_space_size + gap_size + class_space_size;
assert(total_range_size > ccs_begin_offset, "must be");
if (use_windows_memory_mapping() && use_archive_base_addr) {
@@ -1317,18 +1325,21 @@ char* MetaspaceShared::reserve_address_space_for_archives(FileMapInfo* static_ma
// via sequential file IO.
address ccs_base = base_address + archive_space_size + gap_size;
archive_space_rs = ReservedSpace(archive_space_size, archive_space_alignment,
- os::vm_page_size(), mtClassShared, (char*)base_address);
+ os::vm_page_size(), (char*)base_address);
class_space_rs = ReservedSpace(class_space_size, class_space_alignment,
- os::vm_page_size(), mtClass, (char*)ccs_base);
+ os::vm_page_size(), (char*)ccs_base);
}
if (!archive_space_rs.is_reserved() || !class_space_rs.is_reserved()) {
release_reserved_spaces(total_space_rs, archive_space_rs, class_space_rs);
return nullptr;
}
+ // NMT: fix up the space tags
+ MemTracker::record_virtual_memory_type(archive_space_rs.base(), mtClassShared);
+ MemTracker::record_virtual_memory_type(class_space_rs.base(), mtClass);
} else {
if (use_archive_base_addr && base_address != nullptr) {
- total_space_rs = ReservedSpace(total_range_size, archive_space_alignment,
- os::vm_page_size(), mtClassShared, (char*) base_address);
+ total_space_rs = ReservedSpace(total_range_size, base_address_alignment,
+ os::vm_page_size(), (char*) base_address);
} else {
// We did not manage to reserve at the preferred address, or were instructed to relocate. In that
// case we reserve wherever possible, but the start address needs to be encodable as narrow Klass
@@ -1345,7 +1356,7 @@ char* MetaspaceShared::reserve_address_space_for_archives(FileMapInfo* static_ma
// Paranoid checks:
assert(base_address == nullptr || (address)total_space_rs.base() == base_address,
"Sanity (" PTR_FORMAT " vs " PTR_FORMAT ")", p2i(base_address), p2i(total_space_rs.base()));
- assert(is_aligned(total_space_rs.base(), archive_space_alignment), "Sanity");
+ assert(is_aligned(total_space_rs.base(), base_address_alignment), "Sanity");
assert(total_space_rs.size() == total_range_size, "Sanity");
// Now split up the space into ccs and cds archive. For simplicity, just leave
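The reservation arithmetic above is easier to follow with concrete numbers
(values invented for illustration):

  // archive_space_size    = 24 MB (already aligned to archive_space_alignment)
  // class_space_alignment = 16 MB (say, Metaspace::reserve_alignment())
  // ccs_begin_offset      = align_up(24 MB, 16 MB) = 32 MB
  // gap_size              = 32 MB - 24 MB          =  8 MB
  // total_range_size      = 32 MB + CompressedClassSpaceSize
  // i.e. the archive occupies [base, base+24M), an 8 MB gap follows, and the
  // compressed class space starts at base+32M.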
diff --git a/src/hotspot/share/cds/serializeClosure.hpp b/src/hotspot/share/cds/serializeClosure.hpp
index 275009286cb81..3d401407f37a1 100644
--- a/src/hotspot/share/cds/serializeClosure.hpp
+++ b/src/hotspot/share/cds/serializeClosure.hpp
@@ -48,8 +48,20 @@ class SerializeClosure : public StackObj {
// Read/write the bool pointed to by p.
virtual void do_bool(bool* p) = 0;
- // Read/write the region specified.
- virtual void do_region(u_char* start, size_t size) = 0;
+  // Iterate over the pointers p[0] .. p[size/sizeof(intptr_t) - 1]; 'size' is in bytes
+ void do_ptrs(void** p, size_t size) {
+ assert((intptr_t)p % sizeof(intptr_t) == 0, "bad alignment");
+ assert(size % sizeof(intptr_t) == 0, "bad size");
+ do_tag((int)size);
+ while (size > 0) {
+ do_ptr(p);
+ p++;
+ size -= sizeof(intptr_t);
+ }
+ }
+
+ // Address of the first element being written (write only)
+ virtual char* region_top() = 0;
// Check/write the tag. If reading, then compare the tag against
// the passed in value and fail if they don't match. This allows
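The vmSymbols.cpp hunk further below shows the real do_ptrs() call sites; a
minimal usage sketch on an invented table:

  // The array must be pointer-aligned; the second argument is a byte count,
  // which do_ptrs() walks one pointer at a time after writing a size tag.
  static Symbol* _my_table[16];   // invented, not from the patch

  void serialize_my_table(SerializeClosure* soc) {
    soc->do_ptrs((void**)_my_table, sizeof(_my_table));
  }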
diff --git a/src/hotspot/share/ci/ciEnv.cpp b/src/hotspot/share/ci/ciEnv.cpp
index 5861ac3ac3395..6d11a436f65d9 100644
--- a/src/hotspot/share/ci/ciEnv.cpp
+++ b/src/hotspot/share/ci/ciEnv.cpp
@@ -1139,15 +1139,17 @@ void ciEnv::register_method(ciMethod* target,
#endif
if (entry_bci == InvocationEntryBci) {
- // If there is an old version we're done with it
- nmethod* old = method->code();
- if (TraceMethodReplacement && old != nullptr) {
- ResourceMark rm;
- char *method_name = method->name_and_sig_as_C_string();
- tty->print_cr("Replacing method %s", method_name);
- }
- if (old != nullptr) {
- old->make_not_used();
+ if (TieredCompilation) {
+ // If there is an old version we're done with it
+ nmethod* old = method->code();
+ if (TraceMethodReplacement && old != nullptr) {
+ ResourceMark rm;
+ char *method_name = method->name_and_sig_as_C_string();
+ tty->print_cr("Replacing method %s", method_name);
+ }
+ if (old != nullptr) {
+ old->make_not_used();
+ }
}
LogTarget(Info, nmethod, install) lt;
diff --git a/src/hotspot/share/ci/ciMethod.cpp b/src/hotspot/share/ci/ciMethod.cpp
index aac2a553cda31..532ac7856b83f 100644
--- a/src/hotspot/share/ci/ciMethod.cpp
+++ b/src/hotspot/share/ci/ciMethod.cpp
@@ -1126,13 +1126,14 @@ int ciMethod::code_size_for_inlining() {
// not highly relevant to an inlined method. So we use the more
// specific accessor nmethod::insts_size.
// Also some instructions inside the code are excluded from inline
-// heuristic (e.g. post call nop instructions; see InlineSkippedInstructionsCounter)
+// heuristic (e.g. post call nop instructions and GC barriers;
+// see InlineSkippedInstructionsCounter).
int ciMethod::inline_instructions_size() {
if (_inline_instructions_size == -1) {
GUARDED_VM_ENTRY(
nmethod* code = get_Method()->code();
if (code != nullptr && (code->comp_level() == CompLevel_full_optimization)) {
- int isize = code->insts_end() - code->verified_entry_point() - code->skipped_instructions_size();
+ int isize = code->inline_insts_size();
_inline_instructions_size = isize > 0 ? isize : 0;
} else {
_inline_instructions_size = 0;
diff --git a/src/hotspot/share/classfile/classFileParser.cpp b/src/hotspot/share/classfile/classFileParser.cpp
index 734b425cb4593..815550da0bc91 100644
--- a/src/hotspot/share/classfile/classFileParser.cpp
+++ b/src/hotspot/share/classfile/classFileParser.cpp
@@ -2827,11 +2827,6 @@ Method* ClassFileParser::parse_method(const ClassFileStream* const cfs,
_has_finalizer = true;
}
}
- if (name == vmSymbols::object_initializer_name() &&
- signature == vmSymbols::void_method_signature() &&
- m->is_vanilla_constructor()) {
- _has_vanilla_constructor = true;
- }
NOT_PRODUCT(m->verify());
return m;
@@ -4162,29 +4157,6 @@ void ClassFileParser::set_precomputed_flags(InstanceKlass* ik) {
}
}
- // Check if this klass has a vanilla default constructor
- if (super == nullptr) {
- // java.lang.Object has empty default constructor
- ik->set_has_vanilla_constructor();
- } else {
- if (super->has_vanilla_constructor() &&
- _has_vanilla_constructor) {
- ik->set_has_vanilla_constructor();
- }
-#ifdef ASSERT
- bool v = false;
- if (super->has_vanilla_constructor()) {
- const Method* const constructor =
- ik->find_method(vmSymbols::object_initializer_name(),
- vmSymbols::void_method_signature());
- if (constructor != nullptr && constructor->is_vanilla_constructor()) {
- v = true;
- }
- }
- assert(v == ik->has_vanilla_constructor(), "inconsistent has_vanilla_constructor");
-#endif
- }
-
// If it cannot be fast-path allocated, set a bit in the layout helper.
// See documentation of InstanceKlass::can_be_fastpath_allocated().
assert(ik->size_helper() > 0, "layout_helper is initialized");
@@ -5371,7 +5343,7 @@ void ClassFileParser::fill_instance_klass(InstanceKlass* ik,
ik->set_has_contended_annotations(true);
}
- // Fill in has_finalizer, has_vanilla_constructor, and layout_helper
+ // Fill in has_finalizer and layout_helper
set_precomputed_flags(ik);
// check if this class can access its super class
@@ -5557,7 +5529,6 @@ ClassFileParser::ClassFileParser(ClassFileStream* stream,
_has_contended_fields(false),
_has_finalizer(false),
_has_empty_finalizer(false),
- _has_vanilla_constructor(false),
_max_bootstrap_specifier_index(-1) {
_class_name = name != nullptr ? name : vmSymbols::unknown_class_name();
diff --git a/src/hotspot/share/classfile/classFileParser.hpp b/src/hotspot/share/classfile/classFileParser.hpp
index 03984baf1aea5..f43303722a8ab 100644
--- a/src/hotspot/share/classfile/classFileParser.hpp
+++ b/src/hotspot/share/classfile/classFileParser.hpp
@@ -197,7 +197,6 @@ class ClassFileParser {
// precomputed flags
bool _has_finalizer;
bool _has_empty_finalizer;
- bool _has_vanilla_constructor;
int _max_bootstrap_specifier_index; // detects BSS values
void parse_stream(const ClassFileStream* const stream, TRAPS);
diff --git a/src/hotspot/share/classfile/compactHashtable.cpp b/src/hotspot/share/classfile/compactHashtable.cpp
index 890d0e035f862..d4657e35a84eb 100644
--- a/src/hotspot/share/classfile/compactHashtable.cpp
+++ b/src/hotspot/share/classfile/compactHashtable.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 1997, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 1997, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -240,7 +240,7 @@ HashtableTextDump::HashtableTextDump(const char* filename) : _fd(-1) {
if (_fd < 0) {
quit("Unable to open hashtable dump file", filename);
}
- _base = os::map_memory(_fd, filename, 0, nullptr, _size, true, false, mtClassShared);
+ _base = os::map_memory(_fd, filename, 0, nullptr, _size, true, false);
if (_base == nullptr) {
quit("Unable to map hashtable dump file", filename);
}
diff --git a/src/hotspot/share/classfile/modules.cpp b/src/hotspot/share/classfile/modules.cpp
index d4f3cc29962ac..ddbb84db3be1f 100644
--- a/src/hotspot/share/classfile/modules.cpp
+++ b/src/hotspot/share/classfile/modules.cpp
@@ -560,9 +560,7 @@ void Modules::verify_archived_modules() {
ModuleEntry::verify_archived_module_entries();
}
-#if INCLUDE_CDS_JAVA_HEAP
char* Modules::_archived_main_module_name = nullptr;
-#endif
void Modules::dump_main_module_name() {
const char* module_name = Arguments::get_property("jdk.module.main");
diff --git a/src/hotspot/share/classfile/vmIntrinsics.cpp b/src/hotspot/share/classfile/vmIntrinsics.cpp
index 8d4f57165e151..e60495d1f47fd 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.cpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.cpp
@@ -492,6 +492,10 @@ bool vmIntrinsics::disabled_by_jvm_flags(vmIntrinsics::ID id) {
case vmIntrinsics::_poly1305_processBlocks:
if (!UsePoly1305Intrinsics) return true;
break;
+ case vmIntrinsics::_intpoly_montgomeryMult_P256:
+ case vmIntrinsics::_intpoly_assign:
+ if (!UseIntPolyIntrinsics) return true;
+ break;
case vmIntrinsics::_updateBytesCRC32C:
case vmIntrinsics::_updateDirectByteBufferCRC32C:
if (!UseCRC32CIntrinsics) return true;
diff --git a/src/hotspot/share/classfile/vmIntrinsics.hpp b/src/hotspot/share/classfile/vmIntrinsics.hpp
index a0db1a65d3a46..b8d8c40cc47a4 100644
--- a/src/hotspot/share/classfile/vmIntrinsics.hpp
+++ b/src/hotspot/share/classfile/vmIntrinsics.hpp
@@ -526,7 +526,18 @@ class methodHandle;
do_intrinsic(_digestBase_implCompressMB, sun_security_provider_digestbase, implCompressMB_name, countPositives_signature, F_R) \
do_name( implCompressMB_name, "implCompressMultiBlock0") \
\
- /* support for java.util.Base64.Encoder*/ \
+ /* support for sun.security.util.math.intpoly.MontgomeryIntegerPolynomialP256 */ \
+ do_class(sun_security_util_math_intpoly_MontgomeryIntegerPolynomialP256, "sun/security/util/math/intpoly/MontgomeryIntegerPolynomialP256") \
+ do_intrinsic(_intpoly_montgomeryMult_P256, sun_security_util_math_intpoly_MontgomeryIntegerPolynomialP256, intPolyMult_name, intPolyMult_signature, F_R) \
+ do_name(intPolyMult_name, "mult") \
+ do_signature(intPolyMult_signature, "([J[J[J)I") \
+ \
+ do_class(sun_security_util_math_intpoly_IntegerPolynomial, "sun/security/util/math/intpoly/IntegerPolynomial") \
+ do_intrinsic(_intpoly_assign, sun_security_util_math_intpoly_IntegerPolynomial, intPolyAssign_name, intPolyAssign_signature, F_S) \
+ do_name(intPolyAssign_name, "conditionalAssign") \
+ do_signature(intPolyAssign_signature, "(I[J[J)V") \
+ \
+ /* support for java.util.Base64.Encoder*/ \
do_class(java_util_Base64_Encoder, "java/util/Base64$Encoder") \
do_intrinsic(_base64_encodeBlock, java_util_Base64_Encoder, encodeBlock_name, encodeBlock_signature, F_R) \
do_name(encodeBlock_name, "encodeBlock") \
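Decoded per standard JVM descriptor syntax, the two new intrinsic signatures
read (parameter names invented):

  ([J[J[J)I  ->  int mult(long[] a, long[] b, long[] r)
  (I[J[J)V   ->  void conditionalAssign(int set, long[] a, long[] b)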
diff --git a/src/hotspot/share/classfile/vmSymbols.cpp b/src/hotspot/share/classfile/vmSymbols.cpp
index 05cd4767e9a6d..172d074255b38 100644
--- a/src/hotspot/share/classfile/vmSymbols.cpp
+++ b/src/hotspot/share/classfile/vmSymbols.cpp
@@ -205,9 +205,9 @@ void vmSymbols::metaspace_pointers_do(MetaspaceClosure *closure) {
}
void vmSymbols::serialize(SerializeClosure* soc) {
- soc->do_region((u_char*)&Symbol::_vm_symbols[FIRST_SID],
+ soc->do_ptrs((void**)&Symbol::_vm_symbols[FIRST_SID],
(SID_LIMIT - FIRST_SID) * sizeof(Symbol::_vm_symbols[0]));
- soc->do_region((u_char*)_type_signatures, sizeof(_type_signatures));
+ soc->do_ptrs((void**)_type_signatures, sizeof(_type_signatures));
}
#ifndef PRODUCT
diff --git a/src/hotspot/share/classfile/vmSymbols.hpp b/src/hotspot/share/classfile/vmSymbols.hpp
index f79af705f0b0a..39340243bc4b8 100644
--- a/src/hotspot/share/classfile/vmSymbols.hpp
+++ b/src/hotspot/share/classfile/vmSymbols.hpp
@@ -58,7 +58,6 @@ class SerializeClosure;
template(java_lang_System, "java/lang/System") \
template(java_lang_Object, "java/lang/Object") \
template(java_lang_Class, "java/lang/Class") \
- template(java_lang_Package, "java/lang/Package") \
template(java_lang_Module, "java/lang/Module") \
template(java_lang_String, "java/lang/String") \
template(java_lang_StringLatin1, "java/lang/StringLatin1") \
@@ -118,7 +117,6 @@ class SerializeClosure;
template(java_lang_reflect_RecordComponent, "java/lang/reflect/RecordComponent") \
template(java_lang_StringBuffer, "java/lang/StringBuffer") \
template(java_lang_StringBuilder, "java/lang/StringBuilder") \
- template(java_lang_CharSequence, "java/lang/CharSequence") \
template(java_lang_SecurityManager, "java/lang/SecurityManager") \
template(java_lang_ScopedValue, "java/lang/ScopedValue") \
template(java_lang_ScopedValue_Carrier, "java/lang/ScopedValue$Carrier") \
@@ -131,28 +129,16 @@ class SerializeClosure;
template(java_net_URL, "java/net/URL") \
template(java_net_URLClassLoader, "java/net/URLClassLoader") \
template(java_util_jar_Manifest, "java/util/jar/Manifest") \
- template(java_io_OutputStream, "java/io/OutputStream") \
- template(java_io_Reader, "java/io/Reader") \
- template(java_io_BufferedReader, "java/io/BufferedReader") \
- template(java_io_File, "java/io/File") \
- template(java_io_FileInputStream, "java/io/FileInputStream") \
template(java_io_ByteArrayInputStream, "java/io/ByteArrayInputStream") \
template(java_io_Serializable, "java/io/Serializable") \
template(java_nio_Buffer, "java/nio/Buffer") \
template(java_util_Arrays, "java/util/Arrays") \
- template(java_util_Objects, "java/util/Objects") \
template(java_util_Properties, "java/util/Properties") \
- template(java_util_Vector, "java/util/Vector") \
- template(java_util_AbstractList, "java/util/AbstractList") \
- template(java_util_Hashtable, "java/util/Hashtable") \
template(java_util_DualPivotQuicksort, "java/util/DualPivotQuicksort") \
- template(java_lang_Compiler, "java/lang/Compiler") \
template(jdk_internal_misc_Signal, "jdk/internal/misc/Signal") \
template(jdk_internal_util_Preconditions, "jdk/internal/util/Preconditions") \
template(java_lang_AssertionStatusDirectives, "java/lang/AssertionStatusDirectives") \
- template(getBootClassPathEntryForClass_name, "getBootClassPathEntryForClass") \
template(jdk_internal_vm_PostVMInitHook, "jdk/internal/vm/PostVMInitHook") \
- template(sun_net_www_ParseUtil, "sun/net/www/ParseUtil") \
template(java_util_Iterator, "java/util/Iterator") \
template(java_lang_Record, "java/lang/Record") \
template(sun_instrument_InstrumentationImpl, "sun/instrument/InstrumentationImpl") \
@@ -227,7 +213,6 @@ class SerializeClosure;
template(java_lang_BootstrapMethodError, "java/lang/BootstrapMethodError") \
template(java_lang_LinkageError, "java/lang/LinkageError") \
template(java_lang_NegativeArraySizeException, "java/lang/NegativeArraySizeException") \
- template(java_lang_NoSuchFieldException, "java/lang/NoSuchFieldException") \
template(java_lang_NoSuchMethodException, "java/lang/NoSuchMethodException") \
template(java_lang_NullPointerException, "java/lang/NullPointerException") \
template(java_lang_StringIndexOutOfBoundsException, "java/lang/StringIndexOutOfBoundsException")\
@@ -237,7 +222,6 @@ class SerializeClosure;
template(java_lang_Exception, "java/lang/Exception") \
template(java_lang_RuntimeException, "java/lang/RuntimeException") \
template(java_io_IOException, "java/io/IOException") \
- template(java_security_PrivilegedActionException, "java/security/PrivilegedActionException") \
\
/* error klasses: at least all errors thrown by the VM have entries here */ \
template(java_lang_AbstractMethodError, "java/lang/AbstractMethodError") \
@@ -283,7 +267,6 @@ class SerializeClosure;
template(reflect_CallerSensitive_signature, "Ljdk/internal/reflect/CallerSensitive;") \
template(reflect_DirectConstructorHandleAccessor_NativeAccessor, "jdk/internal/reflect/DirectConstructorHandleAccessor$NativeAccessor") \
template(reflect_SerializationConstructorAccessorImpl, "jdk/internal/reflect/SerializationConstructorAccessorImpl") \
- template(checkedExceptions_name, "checkedExceptions") \
template(clazz_name, "clazz") \
template(exceptionTypes_name, "exceptionTypes") \
template(modifiers_name, "modifiers") \
@@ -363,10 +346,6 @@ class SerializeClosure;
template(linkDynamicConstant_signature, "(Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;") \
template(linkCallSite_name, "linkCallSite") \
template(linkCallSite_signature, "(Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/invoke/MemberName;") \
- template(setTargetNormal_name, "setTargetNormal") \
- template(setTargetVolatile_name, "setTargetVolatile") \
- template(setTarget_signature, "(Ljava/lang/invoke/MethodHandle;)V") \
- template(DEFAULT_CONTEXT_name, "DEFAULT_CONTEXT") \
NOT_LP64( do_alias(intptr_signature, int_signature) ) \
LP64_ONLY( do_alias(intptr_signature, long_signature) ) \
/* Foreign API Support */ \
@@ -411,7 +390,6 @@ class SerializeClosure;
template(maxPriority_name, "maxPriority") \
template(shutdown_name, "shutdown") \
template(finalize_method_name, "finalize") \
- template(reference_lock_name, "lock") \
template(reference_discovered_name, "discovered") \
template(run_finalization_name, "runFinalization") \
template(dispatchUncaughtException_name, "dispatchUncaughtException") \
@@ -426,8 +404,6 @@ class SerializeClosure;
template(enter_name, "enter") \
template(enterSpecial_name, "enterSpecial") \
template(onContinue_name, "onContinue0") \
- template(getStacks_name, "getStacks") \
- template(onPinned_name, "onPinned0") \
template(scope_name, "scope") \
template(yieldInfo_name, "yieldInfo") \
template(tail_name, "tail") \
@@ -435,26 +411,21 @@ class SerializeClosure;
template(bottom_name, "bottom") \
template(mode_name, "mode") \
template(numFrames_name, "numFrames") \
- template(numOops_name, "numOops") \
template(stack_name, "stack") \
template(maxSize_name, "maxSize") \
template(reset_name, "reset") \
template(done_name, "done") \
template(mounted_name, "mounted") \
- template(numInterpretedFrames_name, "numInterpretedFrames") \
template(jfrTraceId_name, "jfrTraceId") \
template(fp_name, "fp") \
template(sp_name, "sp") \
template(pc_name, "pc") \
template(cs_name, "cs") \
- template(refStack_name, "refStack") \
- template(refSP_name, "refSP") \
template(get_name, "get") \
template(refersTo0_name, "refersTo0") \
template(put_name, "put") \
template(type_name, "type") \
template(findNative_name, "findNative") \
- template(deadChild_name, "deadChild") \
template(getFromClass_name, "getFromClass") \
template(dispatch_name, "dispatch") \
template(bootLoader_name, "bootLoader") \
@@ -470,7 +441,6 @@ class SerializeClosure;
template(getClassContext_name, "getClassContext") \
template(wait_name, "wait0") \
template(checkPackageAccess_name, "checkPackageAccess") \
- template(newInstance0_name, "newInstance0") \
template(forName_name, "forName") \
template(forName0_name, "forName0") \
template(isJavaIdentifierStart_name, "isJavaIdentifierStart") \
@@ -492,7 +462,6 @@ class SerializeClosure;
template(vmholder_name, "vmholder") \
template(method_name, "method") \
template(vmindex_name, "vmindex") \
- template(vmcount_name, "vmcount") \
template(flags_name, "flags") \
template(basicType_name, "basicType") \
template(append_name, "append") \
@@ -560,7 +529,6 @@ class SerializeClosure;
template(void_float_signature, "()F") \
template(void_double_signature, "()D") \
template(bool_void_signature, "(Z)V") \
- template(bool_bool_void_signature, "(ZZ)V") \
template(int_void_signature, "(I)V") \
template(int_int_signature, "(I)I") \
template(char_char_signature, "(C)C") \
@@ -589,21 +557,16 @@ class SerializeClosure;
template(bool_array_signature, "[Z") \
template(byte_array_signature, "[B") \
template(char_array_signature, "[C") \
- template(int_array_signature, "[I") \
- template(long_array_signature, "[J") \
template(runnable_signature, "Ljava/lang/Runnable;") \
template(continuation_signature, "Ljdk/internal/vm/Continuation;") \
template(continuationscope_signature, "Ljdk/internal/vm/ContinuationScope;") \
template(stackchunk_signature, "Ljdk/internal/vm/StackChunk;") \
- template(vthread_signature, "Ljava/lang/VirtualThread;") \
template(object_void_signature, "(Ljava/lang/Object;)V") \
template(object_int_signature, "(Ljava/lang/Object;)I") \
template(long_object_long_signature, "(JLjava/lang/Object;)J") \
template(object_boolean_signature, "(Ljava/lang/Object;)Z") \
template(object_object_signature, "(Ljava/lang/Object;)Ljava/lang/Object;") \
template(string_void_signature, "(Ljava/lang/String;)V") \
- template(string_int_signature, "(Ljava/lang/String;)I") \
- template(string_byte_array_signature, "(Ljava/lang/String;)[B") \
template(string_bool_byte_array_signature, "(Ljava/lang/String;Z)[B") \
template(throwable_signature, "Ljava/lang/Throwable;") \
template(throwable_void_signature, "(Ljava/lang/Throwable;)V") \
@@ -613,10 +576,6 @@ class SerializeClosure;
template(class_long_signature, "(Ljava/lang/Class;)J") \
template(class_boolean_signature, "(Ljava/lang/Class;)Z") \
template(throwable_throwable_signature, "(Ljava/lang/Throwable;)Ljava/lang/Throwable;") \
- template(throwable_string_void_signature, "(Ljava/lang/Throwable;Ljava/lang/String;)V") \
- template(string_array_void_signature, "([Ljava/lang/String;)V") \
- template(string_array_string_array_void_signature, "([Ljava/lang/String;[Ljava/lang/String;)V") \
- template(thread_throwable_void_signature, "(Ljava/lang/Thread;Ljava/lang/Throwable;)V") \
template(thread_void_signature, "(Ljava/lang/Thread;)V") \
template(runnable_void_signature, "(Ljava/lang/Runnable;)V") \
template(threadgroup_runnable_void_signature, "(Ljava/lang/ThreadGroup;Ljava/lang/Runnable;)V") \
@@ -625,12 +584,9 @@ class SerializeClosure;
template(string_class_signature, "(Ljava/lang/String;)Ljava/lang/Class;") \
template(string_boolean_class_signature, "(Ljava/lang/String;Z)Ljava/lang/Class;") \
template(object_object_object_signature, "(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;") \
- template(string_string_string_signature, "(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/String;") \
template(string_string_signature, "(Ljava/lang/String;)Ljava/lang/String;") \
template(classloader_string_long_signature, "(Ljava/lang/ClassLoader;Ljava/lang/String;)J") \
template(byte_array_void_signature, "([B)V") \
- template(char_array_void_signature, "([C)V") \
- template(int_int_void_signature, "(II)V") \
template(long_long_void_signature, "(JJ)V") \
template(void_byte_array_signature, "()[B") \
template(void_classloader_signature, "()Ljava/lang/ClassLoader;") \
@@ -639,16 +595,13 @@ class SerializeClosure;
template(void_class_signature, "()Ljava/lang/Class;") \
template(void_class_array_signature, "()[Ljava/lang/Class;") \
template(void_string_signature, "()Ljava/lang/String;") \
- template(void_module_signature, "()Ljava/lang/Module;") \
template(object_array_object_signature, "([Ljava/lang/Object;)Ljava/lang/Object;") \
template(object_object_array_object_signature, "(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;")\
- template(exception_void_signature, "(Ljava/lang/Exception;)V") \
template(protectiondomain_signature, "[Ljava/security/ProtectionDomain;") \
template(accesscontrolcontext_signature, "Ljava/security/AccessControlContext;") \
template(class_protectiondomain_signature, "(Ljava/lang/Class;Ljava/security/ProtectionDomain;)V") \
template(thread_signature, "Ljava/lang/Thread;") \
template(thread_fieldholder_signature, "Ljava/lang/Thread$FieldHolder;") \
- template(thread_array_signature, "[Ljava/lang/Thread;") \
template(threadgroup_signature, "Ljava/lang/ThreadGroup;") \
template(threadgroup_array_signature, "[Ljava/lang/ThreadGroup;") \
template(class_array_signature, "[Ljava/lang/Class;") \
@@ -660,7 +613,6 @@ class SerializeClosure;
template(string_array_signature, "[Ljava/lang/String;") \
template(reference_signature, "Ljava/lang/ref/Reference;") \
template(referencequeue_signature, "Ljava/lang/ref/ReferenceQueue;") \
- template(weakreference_array_signature, "[Ljava/lang/ref/WeakReference;") \
template(executable_signature, "Ljava/lang/reflect/Executable;") \
template(module_signature, "Ljava/lang/Module;") \
template(concurrenthashmap_signature, "Ljava/util/concurrent/ConcurrentHashMap;") \
@@ -690,7 +642,6 @@ class SerializeClosure;
\
/* JVM monitoring and management support */ \
template(java_lang_StackTraceElement_array, "[Ljava/lang/StackTraceElement;") \
- template(java_lang_management_ThreadState, "java/lang/management/ThreadState") \
template(java_lang_management_MemoryUsage, "java/lang/management/MemoryUsage") \
template(java_lang_management_ThreadInfo, "java/lang/management/ThreadInfo") \
template(jdk_internal_agent_Agent, "jdk/internal/agent/Agent") \
@@ -727,17 +678,12 @@ class SerializeClosure;
template(java_lang_management_MemoryPoolMXBean, "java/lang/management/MemoryPoolMXBean") \
template(java_lang_management_MemoryManagerMXBean, "java/lang/management/MemoryManagerMXBean") \
template(java_lang_management_GarbageCollectorMXBean,"java/lang/management/GarbageCollectorMXBean") \
- template(gcInfoBuilder_name, "gcInfoBuilder") \
template(createMemoryPool_name, "createMemoryPool") \
template(createMemoryManager_name, "createMemoryManager") \
template(createGarbageCollector_name, "createGarbageCollector") \
template(createMemoryPool_signature, "(Ljava/lang/String;ZJJ)Ljava/lang/management/MemoryPoolMXBean;") \
template(createMemoryManager_signature, "(Ljava/lang/String;)Ljava/lang/management/MemoryManagerMXBean;") \
template(createGarbageCollector_signature, "(Ljava/lang/String;Ljava/lang/String;)Ljava/lang/management/GarbageCollectorMXBean;") \
- template(addThreadDumpForMonitors_name, "addThreadDumpForMonitors") \
- template(addThreadDumpForSynchronizers_name, "addThreadDumpForSynchronizers") \
- template(addThreadDumpForMonitors_signature, "(Ljava/lang/management/ThreadInfo;[Ljava/lang/Object;[I)V") \
- template(addThreadDumpForSynchronizers_signature, "(Ljava/lang/management/ThreadInfo;[Ljava/lang/Object;)V") \
\
/* JVMTI/java.lang.instrument support and VM Attach mechanism */ \
template(jdk_internal_module_Modules, "jdk/internal/module/Modules") \
@@ -791,7 +737,6 @@ class SerializeClosure;
template(java_util_ArrayList, "java/util/ArrayList") \
template(toFileURL_name, "toFileURL") \
template(toFileURL_signature, "(Ljava/lang/String;)Ljava/net/URL;") \
- template(url_void_signature, "(Ljava/net/URL;)V") \
template(url_array_classloader_void_signature, "([Ljava/net/URL;Ljava/lang/ClassLoader;)V") \
\
/* Thread.dump_to_file jcmd */ \
diff --git a/src/hotspot/share/code/codeCache.cpp b/src/hotspot/share/code/codeCache.cpp
index 73eca19f0869f..20583ce492ddd 100644
--- a/src/hotspot/share/code/codeCache.cpp
+++ b/src/hotspot/share/code/codeCache.cpp
@@ -34,7 +34,6 @@
#include "compiler/compilationPolicy.hpp"
#include "compiler/compileBroker.hpp"
#include "compiler/compilerDefinitions.inline.hpp"
-#include "compiler/compilerDirectives.hpp"
#include "compiler/oopMap.hpp"
#include "gc/shared/barrierSetNMethod.hpp"
#include "gc/shared/classUnloadingContext.hpp"
@@ -1330,67 +1329,6 @@ void CodeCache::mark_all_nmethods_for_evol_deoptimization(DeoptimizationScope* d
#endif // INCLUDE_JVMTI
-void CodeCache::mark_directives_matches(bool top_only) {
- MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
- Thread *thread = Thread::current();
- HandleMark hm(thread);
-
- NMethodIterator iter(NMethodIterator::not_unloading);
- while(iter.next()) {
- nmethod* nm = iter.method();
- methodHandle mh(thread, nm->method());
- if (DirectivesStack::hasMatchingDirectives(mh, top_only)) {
- ResourceMark rm;
- log_trace(codecache)("Mark because of matching directives %s", mh->external_name());
- mh->set_has_matching_directives();
- }
- }
-}
-
-void CodeCache::recompile_marked_directives_matches() {
- Thread *thread = Thread::current();
- HandleMark hm(thread);
-
- // Try the max level and let the directives be applied during the compilation.
- int comp_level = CompilationPolicy::highest_compile_level();
- RelaxedNMethodIterator iter(RelaxedNMethodIterator::not_unloading);
- while(iter.next()) {
- nmethod* nm = iter.method();
- methodHandle mh(thread, nm->method());
- if (mh->has_matching_directives()) {
- ResourceMark rm;
- mh->clear_directive_flags();
- bool deopt = false;
-
- if (!nm->is_osr_method()) {
- log_trace(codecache)("Recompile to level %d because of matching directives %s",
- comp_level, mh->external_name());
- nmethod * comp_nm = CompileBroker::compile_method(mh, InvocationEntryBci, comp_level,
- methodHandle(), 0,
- CompileTask::Reason_DirectivesChanged,
- (JavaThread*)thread);
- if (comp_nm == nullptr) {
- log_trace(codecache)("Recompilation to level %d failed, deoptimize %s",
- comp_level, mh->external_name());
- deopt = true;
- }
- } else {
- log_trace(codecache)("Deoptimize OSR %s", mh->external_name());
- deopt = true;
- }
- // For some reason the method cannot be compiled by C2, e.g. the new directives forbid it.
- // Deoptimize the method and let the usual hotspot logic do the rest.
- if (deopt) {
- if (!nm->has_been_deoptimized() && nm->can_be_deoptimized()) {
- nm->make_not_entrant();
- nm->make_deoptimized();
- }
- }
- gc_on_allocation(); // Flush unused methods from CodeCache if required.
- }
- }
-}
-
// Mark methods for deopt (if safe or possible).
void CodeCache::mark_all_nmethods_for_deoptimization(DeoptimizationScope* deopt_scope) {
MutexLocker mu(CodeCache_lock, Mutex::_no_safepoint_check_flag);
diff --git a/src/hotspot/share/code/codeCache.hpp b/src/hotspot/share/code/codeCache.hpp
index 7d8835abd11eb..8fcbf32b64484 100644
--- a/src/hotspot/share/code/codeCache.hpp
+++ b/src/hotspot/share/code/codeCache.hpp
@@ -295,9 +295,6 @@ class CodeCache : AllStatic {
static void mark_for_deoptimization(DeoptimizationScope* deopt_scope, Method* dependee);
static void make_marked_nmethods_deoptimized();
- static void mark_directives_matches(bool top_only = false);
- static void recompile_marked_directives_matches();
-
// Marks dependents during classloading
static void mark_dependents_on(DeoptimizationScope* deopt_scope, InstanceKlass* dependee);
diff --git a/src/hotspot/share/code/nmethod.cpp b/src/hotspot/share/code/nmethod.cpp
index 26d393d06db45..14c8795425c21 100644
--- a/src/hotspot/share/code/nmethod.cpp
+++ b/src/hotspot/share/code/nmethod.cpp
@@ -131,6 +131,7 @@ struct java_nmethod_stats_struct {
uint relocation_size;
uint consts_size;
uint insts_size;
+ uint inline_insts_size;
uint stub_size;
uint oops_size;
uint metadata_size;
@@ -151,6 +152,7 @@ struct java_nmethod_stats_struct {
relocation_size += nm->relocation_size();
consts_size += nm->consts_size();
insts_size += nm->insts_size();
+ inline_insts_size += nm->inline_insts_size();
stub_size += nm->stub_size();
oops_size += nm->oops_size();
metadata_size += nm->metadata_size();
@@ -185,6 +187,9 @@ struct java_nmethod_stats_struct {
if (insts_size != 0) {
tty->print_cr(" main code = %u (%f%%)", insts_size, (insts_size * 100.0f)/total_nm_size);
}
+ if (inline_insts_size != 0) {
+ tty->print_cr(" inline code = %u (%f%%)", inline_insts_size, (inline_insts_size * 100.0f)/total_nm_size);
+ }
if (stub_size != 0) {
tty->print_cr(" stub code = %u (%f%%)", stub_size, (stub_size * 100.0f)/total_nm_size);
}
@@ -1253,7 +1258,15 @@ void nmethod::init_defaults(CodeBuffer *code_buffer, CodeOffsets* offsets) {
CHECKED_CAST(_entry_offset, uint16_t, (offsets->value(CodeOffsets::Entry)));
CHECKED_CAST(_verified_entry_offset, uint16_t, (offsets->value(CodeOffsets::Verified_Entry)));
- CHECKED_CAST(_skipped_instructions_size, uint16_t, (code_buffer->total_skipped_instructions_size()));
+
+ int size = code_buffer->main_code_size();
+ assert(size >= 0, "should be initialized");
+ // Use the instructions section size if the main code size is 0 (e.g. a native wrapper)
+ if (size == 0) size = code_size(); // requires _stub_offset to be set
+ assert(size <= code_size(), "incorrect size: %d > %d", size, code_size());
+ _inline_insts_size = size - _verified_entry_offset
+ - code_buffer->total_skipped_instructions_size();
+ assert(_inline_insts_size >= 0, "sanity");
}
// Post initialization
@@ -3984,9 +3997,7 @@ void nmethod::print_statistics() {
#endif
unknown_java_nmethod_stats.print_nmethod_stats("Unknown");
DebugInformationRecorder::print_statistics();
-#ifndef PRODUCT
pc_nmethod_stats.print_pc_stats();
-#endif
Dependencies::print_statistics();
if (xtty != nullptr) xtty->tail("statistics");
}
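
Aside, not part of the patch: the new per-nmethod statistic is plain arithmetic over sizes the code buffer already tracks. A minimal standalone C++ sketch of the bookkeeping above; all byte counts are made-up stand-ins for the CodeBuffer/CodeOffsets queries.

#include <cassert>
#include <cstdio>

int main() {
  // Stand-ins for code_buffer->main_code_size(), code_size(),
  // CodeOffsets::Verified_Entry and total_skipped_instructions_size().
  int main_code_size        = 4096;
  int total_code_size       = 4608;
  int verified_entry_offset = 32;
  int skipped_instructions  = 128;

  int size = main_code_size;
  if (size == 0) size = total_code_size; // e.g. native wrappers have no main section
  assert(size <= total_code_size);

  int inline_insts_size = size - verified_entry_offset - skipped_instructions;
  assert(inline_insts_size >= 0);
  printf("inline code = %d bytes\n", inline_insts_size); // the "inline code" stats line
  return 0;
}
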
diff --git a/src/hotspot/share/code/nmethod.hpp b/src/hotspot/share/code/nmethod.hpp
index 22e38afeebd67..0677aeec14e1b 100644
--- a/src/hotspot/share/code/nmethod.hpp
+++ b/src/hotspot/share/code/nmethod.hpp
@@ -218,6 +218,8 @@ class nmethod : public CodeBlob {
// _consts_offset == _content_offset because SECT_CONSTS is first in code buffer
+ int _inline_insts_size;
+
int _stub_offset;
// Offsets for different stubs section parts
@@ -232,7 +234,6 @@ class nmethod : public CodeBlob {
int16_t _unwind_handler_offset;
// Number of arguments passed on the stack
uint16_t _num_stack_arg_slots;
- uint16_t _skipped_instructions_size;
// Offsets in mutable data section
// _oops_offset == _data_offset, offset where embedded oop table begins (inside data)
@@ -589,7 +590,7 @@ class nmethod : public CodeBlob {
int oops_count() const { assert(oops_size() % oopSize == 0, ""); return (oops_size() / oopSize) + 1; }
int metadata_count() const { assert(metadata_size() % wordSize == 0, ""); return (metadata_size() / wordSize) + 1; }
- int skipped_instructions_size () const { return _skipped_instructions_size; }
+ int inline_insts_size() const { return _inline_insts_size; }
int total_size() const;
// Containment
diff --git a/src/hotspot/share/code/relocInfo.cpp b/src/hotspot/share/code/relocInfo.cpp
index 69ff4bc78d639..d0f732edac486 100644
--- a/src/hotspot/share/code/relocInfo.cpp
+++ b/src/hotspot/share/code/relocInfo.cpp
@@ -277,30 +277,6 @@ DEFINE_COPY_INTO_AUX(Relocation)
#undef DEFINE_COPY_INTO_AUX
#undef DEFINE_COPY_INTO
-//////// Methods for RelocationHolder
-
-RelocationHolder RelocationHolder::plus(int offset) const {
- if (offset != 0) {
- switch (type()) {
- case relocInfo::none:
- break;
- case relocInfo::oop_type:
- {
- oop_Relocation* r = (oop_Relocation*)reloc();
- return oop_Relocation::spec(r->oop_index(), r->offset() + offset);
- }
- case relocInfo::metadata_type:
- {
- metadata_Relocation* r = (metadata_Relocation*)reloc();
- return metadata_Relocation::spec(r->metadata_index(), r->offset() + offset);
- }
- default:
- ShouldNotReachHere();
- }
- }
- return (*this);
-}
-
//////// Methods for flyweight Relocation types
// some relocations can compute their own values
@@ -402,24 +378,24 @@ void CallRelocation::fix_relocation_after_move(const CodeBuffer* src, CodeBuffer
void oop_Relocation::pack_data_to(CodeSection* dest) {
short* p = (short*) dest->locs_end();
- p = pack_2_ints_to(p, _oop_index, _offset);
+ p = pack_1_int_to(p, _oop_index);
dest->set_locs_end((relocInfo*) p);
}
void oop_Relocation::unpack_data() {
- unpack_2_ints(_oop_index, _offset);
+ _oop_index = unpack_1_int();
}
void metadata_Relocation::pack_data_to(CodeSection* dest) {
short* p = (short*) dest->locs_end();
- p = pack_2_ints_to(p, _metadata_index, _offset);
+ p = pack_1_int_to(p, _metadata_index);
dest->set_locs_end((relocInfo*) p);
}
void metadata_Relocation::unpack_data() {
- unpack_2_ints(_metadata_index, _offset);
+ _metadata_index = unpack_1_int();
}
@@ -855,8 +831,8 @@ void RelocIterator::print_current() {
raw_oop = *oop_addr;
oop_value = r->oop_value();
}
- tty->print(" | [oop_addr=" INTPTR_FORMAT " *=" INTPTR_FORMAT " offset=%d]",
- p2i(oop_addr), p2i(raw_oop), r->offset());
+ tty->print(" | [oop_addr=" INTPTR_FORMAT " *=" INTPTR_FORMAT "]",
+ p2i(oop_addr), p2i(raw_oop));
// Do not print the oop by default--we want this routine to
// work even during GC or other inconvenient times.
if (WizardMode && oop_value != nullptr) {
@@ -878,8 +854,8 @@ void RelocIterator::print_current() {
raw_metadata = *metadata_addr;
metadata_value = r->metadata_value();
}
- tty->print(" | [metadata_addr=" INTPTR_FORMAT " *=" INTPTR_FORMAT " offset=%d]",
- p2i(metadata_addr), p2i(raw_metadata), r->offset());
+ tty->print(" | [metadata_addr=" INTPTR_FORMAT " *=" INTPTR_FORMAT "]",
+ p2i(metadata_addr), p2i(raw_metadata));
if (metadata_value != nullptr) {
tty->print("metadata_value=" INTPTR_FORMAT ": ", p2i(metadata_value));
metadata_value->print_value_on(tty);
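
Since the oop/metadata offset was unused in practice, the relocation payload shrinks from two packed ints to one. A rough standalone illustration of the saving, not part of the patch and not the real relocInfo encoding (which is variable-length); this toy fixes each int at two 16-bit halves.

#include <cstdint>
#include <cstdio>
#include <vector>

static void pack_int(std::vector<int16_t>& out, int32_t v) {
  out.push_back((int16_t)(v >> 16));    // high half
  out.push_back((int16_t)(v & 0xffff)); // low half
}

int main() {
  std::vector<int16_t> old_fmt, new_fmt;
  const int oop_index = 7;
  const int offset = 0;         // was always zero, hence its removal
  pack_int(old_fmt, oop_index);
  pack_int(old_fmt, offset);    // old "2_ints" format: [i o]
  pack_int(new_fmt, oop_index); // new "1_int" format: [n]
  printf("old: %zu shorts, new: %zu shorts\n", old_fmt.size(), new_fmt.size());
  return 0;
}
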
diff --git a/src/hotspot/share/code/relocInfo.hpp b/src/hotspot/share/code/relocInfo.hpp
index 9f1db9f4684e3..6d0907d97dedc 100644
--- a/src/hotspot/share/code/relocInfo.hpp
+++ b/src/hotspot/share/code/relocInfo.hpp
@@ -129,12 +129,7 @@ class nmethod;
// Value: an oop, or else the address (handle) of an oop
// Instruction types: memory (load), set (load address)
// Data: [] an oop stored in 4 bytes of instruction
-// [n] n is the index of an oop in the CodeBlob's oop pool
-// [[N]n l] and l is a byte offset to be applied to the oop
-// [Nn Ll] both index and offset may be 32 bits if necessary
-// Here is a special hack, used only by the old compiler:
-// [[N]n 00] the value is the __address__ of the nth oop in the pool
-// (Note that the offset allows optimal references to class variables.)
+// [[N]n] the index of an oop in the CodeBlob's oop pool
//
// relocInfo::internal_word_type -- an address within the same CodeBlob
// relocInfo::section_word_type -- same, but can refer to another section
@@ -515,9 +510,6 @@ class RelocationHolder {
Relocation* reloc() const { return (Relocation*)_relocbuf; }
inline relocInfo::relocType type() const;
- // Add a constant offset to a relocation. Helper for class Address.
- RelocationHolder plus(int offset) const;
-
// Return a holder containing a relocation of type Reloc, constructed using args.
template <typename Reloc, typename... Args>
static RelocationHolder construct(const Args&... args) {
@@ -788,8 +780,8 @@ class Relocation {
void const_set_data_value (address x);
void const_verify_data_value (address x);
// platform-dependent utilities for decoding and patching instructions
- void pd_set_data_value (address x, intptr_t off, bool verify_only = false); // a set or mem-ref
- void pd_verify_data_value (address x, intptr_t off) { pd_set_data_value(x, off, true); }
+ void pd_set_data_value (address x, bool verify_only = false); // a set or mem-ref
+ void pd_verify_data_value (address x) { pd_set_data_value(x, true); }
address pd_call_destination (address orig_addr = nullptr);
void pd_set_call_destination (address x);
@@ -895,41 +887,28 @@ relocInfo::relocType RelocationHolder::type() const {
// A DataRelocation always points at a memory or load-constant instruction..
// It is absolute on most machines, and the constant is split on RISCs.
// The specific subtypes are oop, external_word, and internal_word.
-// By convention, the "value" does not include a separately reckoned "offset".
class DataRelocation : public Relocation {
public:
DataRelocation(relocInfo::relocType type) : Relocation(type) {}
- bool is_data() override { return true; }
+ bool is_data() override { return true; }
- // both target and offset must be computed somehow from relocation data
- virtual int offset() { return 0; }
- address value() override = 0;
- void set_value(address x) override { set_value(x, offset()); }
- void set_value(address x, intptr_t o) {
- if (addr_in_const())
+ // target must be computed somehow from relocation data
+ address value() override = 0;
+ void set_value(address x) override {
+ if (addr_in_const()) {
const_set_data_value(x);
- else
- pd_set_data_value(x, o);
+ } else {
+ pd_set_data_value(x);
+ }
}
- void verify_value(address x) {
- if (addr_in_const())
+ void verify_value(address x) {
+ if (addr_in_const()) {
const_verify_data_value(x);
- else
- pd_verify_data_value(x, offset());
+ } else {
+ pd_verify_data_value(x);
+ }
}
-
- // The "o" (displacement) argument is relevant only to split relocations
- // on RISC machines. In some CPUs (SPARC), the set-hi and set-lo ins'ns
- // can encode more than 32 bits between them. This allows compilers to
- // share set-hi instructions between addresses that differ by a small
- // offset (e.g., different static variables in the same class).
- // On such machines, the "x" argument to set_value on all set-lo
- // instructions must be the same as the "x" argument for the
- // corresponding set-hi instructions. The "o" arguments for the
- // set-hi instructions are ignored, and must not affect the high-half
- // immediate constant. The "o" arguments for the set-lo instructions are
- // added into the low-half immediate constant, and must not overflow it.
};
class post_call_nop_Relocation : public Relocation {
@@ -976,40 +955,36 @@ class CallRelocation : public Relocation {
class oop_Relocation : public DataRelocation {
public:
- // encode in one of these formats: [] [n] [n l] [Nn l] [Nn Ll]
- // an oop in the CodeBlob's oop pool
- static RelocationHolder spec(int oop_index, int offset = 0) {
+ // an oop in the CodeBlob's oop pool; encoded as [n] or [Nn]
+ static RelocationHolder spec(int oop_index) {
assert(oop_index > 0, "must be a pool-resident oop");
- return RelocationHolder::construct<oop_Relocation>(oop_index, offset);
+ return RelocationHolder::construct<oop_Relocation>(oop_index);
}
- // an oop in the instruction stream
+ // an oop in the instruction stream; encoded as []
static RelocationHolder spec_for_immediate() {
// If no immediate oops are generated, we can skip some walks over nmethods.
// Assert that they don't get generated accidentally!
assert(relocInfo::mustIterateImmediateOopsInCode(),
"Must return true so we will search for oops as roots etc. in the code.");
const int oop_index = 0;
- const int offset = 0; // if you want an offset, use the oop pool
- return RelocationHolder::construct<oop_Relocation>(oop_index, offset);
+ return RelocationHolder::construct<oop_Relocation>(oop_index);
}
void copy_into(RelocationHolder& holder) const override;
private:
jint _oop_index; // if > 0, index into CodeBlob::oop_at
- jint _offset; // byte offset to apply to the oop itself
- oop_Relocation(int oop_index, int offset)
- : DataRelocation(relocInfo::oop_type), _oop_index(oop_index), _offset(offset) { }
+ oop_Relocation(int oop_index)
+ : DataRelocation(relocInfo::oop_type), _oop_index(oop_index) { }
friend class RelocationHolder;
oop_Relocation() : DataRelocation(relocInfo::oop_type) {}
public:
int oop_index() { return _oop_index; }
- int offset() override { return _offset; }
- // data is packed in "2_ints" format: [i o] or [Ii Oo]
+ // oop_index is packed in "1_int" format: [n] or [Nn]
void pack_data_to(CodeSection* dest) override;
void unpack_data() override;
@@ -1031,27 +1006,24 @@ class oop_Relocation : public DataRelocation {
class metadata_Relocation : public DataRelocation {
public:
- // encode in one of these formats: [] [n] [n l] [Nn l] [Nn Ll]
- // an metadata in the CodeBlob's metadata pool
- static RelocationHolder spec(int metadata_index, int offset = 0) {
+ // a metadata in the CodeBlob's metadata pool; encoded as [n] or [Nn]
+ static RelocationHolder spec(int metadata_index) {
assert(metadata_index > 0, "must be a pool-resident metadata");
- return RelocationHolder::construct<metadata_Relocation>(metadata_index, offset);
+ return RelocationHolder::construct<metadata_Relocation>(metadata_index);
}
- // an metadata in the instruction stream
+ // a metadata in the instruction stream; encoded as []
static RelocationHolder spec_for_immediate() {
const int metadata_index = 0;
- const int offset = 0; // if you want an offset, use the metadata pool
- return RelocationHolder::construct<metadata_Relocation>(metadata_index, offset);
+ return RelocationHolder::construct<metadata_Relocation>(metadata_index);
}
void copy_into(RelocationHolder& holder) const override;
private:
jint _metadata_index; // if > 0, index into nmethod::metadata_at
- jint _offset; // byte offset to apply to the metadata itself
- metadata_Relocation(int metadata_index, int offset)
- : DataRelocation(relocInfo::metadata_type), _metadata_index(metadata_index), _offset(offset) { }
+ metadata_Relocation(int metadata_index)
+ : DataRelocation(relocInfo::metadata_type), _metadata_index(metadata_index) { }
friend class RelocationHolder;
metadata_Relocation() : DataRelocation(relocInfo::metadata_type) { }
@@ -1063,9 +1035,8 @@ class metadata_Relocation : public DataRelocation {
public:
int metadata_index() { return _metadata_index; }
- int offset() override { return _offset; }
- // data is packed in "2_ints" format: [i o] or [Ii Oo]
+ // metadata_index is packed in "1_int" format: [n] or [Nn]
void pack_data_to(CodeSection* dest) override;
void unpack_data() override;
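
Call-site impact of the narrowed API is mechanical. A compilable mock of the one-argument spec() shape, outside the patch; toy types only, since the real classes carry format and packing machinery.

#include <cassert>

struct RelocationHolder { int oop_index; };

// Mirrors the new single-argument oop_Relocation::spec().
static RelocationHolder spec(int oop_index) {
  assert(oop_index > 0 && "must be a pool-resident oop");
  return RelocationHolder{oop_index};
}

int main() {
  RelocationHolder r = spec(7); // no trailing byte-offset argument anymore
  return r.oop_index == 7 ? 0 : 1;
}
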
diff --git a/src/hotspot/share/code/vmreg.cpp b/src/hotspot/share/code/vmreg.cpp
index 37c3137a10270..ba30c114c6c4b 100644
--- a/src/hotspot/share/code/vmreg.cpp
+++ b/src/hotspot/share/code/vmreg.cpp
@@ -30,7 +30,7 @@
// used by SA and jvmti, but it's a leaky abstraction: SA and jvmti
// "know" that stack0 is an integer masquerading as a pointer. For the
// sake of those clients, we preserve this interface.
-VMReg VMRegImpl::stack0 = (VMReg)(intptr_t)VMRegImpl::stack_0()->value();
+VMReg VMRegImpl::stack0 = (VMReg)(intptr_t)FIRST_STACK;
// VMRegs are 4 bytes wide on all platforms
const int VMRegImpl::stack_slot_size = 4;
diff --git a/src/hotspot/share/code/vmreg.hpp b/src/hotspot/share/code/vmreg.hpp
index 78f52b8d57af5..41e40e9a0178e 100644
--- a/src/hotspot/share/code/vmreg.hpp
+++ b/src/hotspot/share/code/vmreg.hpp
@@ -54,7 +54,8 @@ friend class OptoReg;
// friend class Location;
private:
enum {
- BAD_REG = -1
+ BAD_REG = -1,
+ FIRST_STACK = (ConcreteRegisterImpl::number_of_registers + 7) & ~7
};
// Despite being private, this field is exported to the
@@ -71,7 +72,7 @@ friend class OptoReg;
public:
static constexpr VMReg stack_0() {
- return first() + ((ConcreteRegisterImpl::number_of_registers + 7) & ~7);
+ return first() + FIRST_STACK;
}
static VMReg as_VMReg(int val, bool bad_ok = false) {
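
FIRST_STACK simply names the expression stack_0() already used: round the concrete-register count up to the next multiple of 8. A quick standalone check of the rounding, with hypothetical register counts; not part of the patch.

#include <cstdio>
#include <initializer_list>

int main() {
  for (int n : {1, 8, 9, 61}) {
    printf("%2d registers -> first stack slot %d\n", n, (n + 7) & ~7);
  }
  return 0; // prints 8, 8, 16, 64
}
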
diff --git a/src/hotspot/share/compiler/compilationMemoryStatistic.cpp b/src/hotspot/share/compiler/compilationMemoryStatistic.cpp
index ae3c2c75fdbb8..196a11fd06537 100644
--- a/src/hotspot/share/compiler/compilationMemoryStatistic.cpp
+++ b/src/hotspot/share/compiler/compilationMemoryStatistic.cpp
@@ -54,7 +54,7 @@
ArenaStatCounter::ArenaStatCounter() :
_current(0), _start(0), _peak(0),
_na(0), _ra(0),
- _limit(0), _hit_limit(false),
+ _limit(0), _hit_limit(false), _limit_in_process(false),
_na_at_peak(0), _ra_at_peak(0), _live_nodes_at_peak(0)
{}
@@ -181,10 +186,16 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
int _num_recomp;
// Compiling thread. Only for diagnostic purposes. Thread may not be alive anymore.
const Thread* _thread;
+ // active limit for this compilation, if any
+ size_t _limit;
+ // peak usage, bytes, over all arenas
size_t _total;
+ // usage in node arena when total peaked
size_t _na_at_peak;
+ // usage in resource area when total peaked
size_t _ra_at_peak;
+ // number of nodes (c2 only) when total peaked
unsigned _live_nodes_at_peak;
const char* _result;
@@ -192,7 +198,7 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
MemStatEntry(FullMethodName method)
: _method(method), _comptype(compiler_c1),
- _time(0), _num_recomp(0), _thread(nullptr),
+ _time(0), _num_recomp(0), _thread(nullptr), _limit(0),
_total(0), _na_at_peak(0), _ra_at_peak(0), _live_nodes_at_peak(0),
_result(nullptr) {
}
@@ -200,6 +206,7 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
void set_comptype(CompilerType comptype) { _comptype = comptype; }
void set_current_time() { _time = os::elapsedTime(); }
void set_current_thread() { _thread = Thread::current(); }
+ void set_limit(size_t limit) { _limit = limit; }
void inc_recompilation() { _num_recomp++; }
void set_total(size_t n) { _total = n; }
@@ -218,6 +225,7 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
st->print_cr(" RA : ...how much in resource areas");
st->print_cr(" result : Result: 'ok' finished successfully, 'oom' hit memory limit, 'err' compilation failed");
st->print_cr(" #nodes : ...how many nodes (c2 only)");
+ st->print_cr(" limit : memory limit, if set");
st->print_cr(" time : time of last compilation (sec)");
st->print_cr(" type : compiler type");
st->print_cr(" #rc : how often recompiled");
@@ -225,7 +233,7 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
}
static void print_header(outputStream* st) {
- st->print_cr("total NA RA result #nodes time type #rc thread method");
+ st->print_cr("total NA RA result #nodes limit time type #rc thread method");
}
void print_on(outputStream* st, bool human_readable) const {
@@ -260,7 +268,19 @@ class MemStatEntry : public CHeapObj<mtCompiler> {
col += 8; st->fill_to(col);
// Number of Nodes when memory peaked
- st->print("%u ", _live_nodes_at_peak);
+ if (_live_nodes_at_peak > 0) {
+ st->print("%u ", _live_nodes_at_peak);
+ } else {
+ st->print("-");
+ }
+ col += 8; st->fill_to(col);
+
+ // Limit
+ if (_limit > 0) {
+ st->print(PROPERFMT " ", PROPERFMTARGS(_limit));
+ } else {
+ st->print("-");
+ }
col += 8; st->fill_to(col);
// TimeStamp
@@ -322,7 +342,7 @@ class MemStatTable :
void add(const FullMethodName& fmn, CompilerType comptype,
size_t total, size_t na_at_peak, size_t ra_at_peak,
- unsigned live_nodes_at_peak, const char* result) {
+ unsigned live_nodes_at_peak, size_t limit, const char* result) {
assert_lock_strong(NMTCompilationCostHistory_lock);
MemStatTableKey key(fmn, comptype);
MemStatEntry** pe = get(key);
@@ -343,6 +363,7 @@ class MemStatTable :
e->set_na_at_peak(na_at_peak);
e->set_ra_at_peak(ra_at_peak);
e->set_live_nodes_at_peak(live_nodes_at_peak);
+ e->set_limit(limit);
e->set_result(result);
}
@@ -430,6 +451,7 @@ void CompilationMemoryStatistic::on_end_compilation() {
arena_stat->na_at_peak(),
arena_stat->ra_at_peak(),
arena_stat->live_nodes_at_peak(),
+ arena_stat->limit(),
result);
}
if (print) {
@@ -483,18 +505,23 @@ void CompilationMemoryStatistic::on_arena_change(ssize_t diff, const Arena* aren
CompilerThread* const th = Thread::current()->as_Compiler_thread();
ArenaStatCounter* const arena_stat = th->arena_stat();
+ if (arena_stat->limit_in_process()) {
+ return; // avoid recursion on limit hit
+ }
+
bool hit_limit_before = arena_stat->hit_limit();
if (arena_stat->account(diff, (int)arena->get_tag())) { // new peak?
// Limit handling
if (arena_stat->hit_limit()) {
-
char name[1024] = "";
bool print = false;
bool crash = false;
CompilerType ct = compiler_none;
+ arena_stat->set_limit_in_process(true); // prevent recursive limit hits
+
// get some more info
const CompileTask* const task = th->task();
if (task != nullptr) {
@@ -516,8 +543,8 @@ void CompilationMemoryStatistic::on_arena_change(ssize_t diff, const Arena* aren
if (ct != compiler_none && name[0] != '\0') {
ss.print("%s %s: ", compilertype2name(ct), name);
}
- ss.print("Hit MemLimit %s (limit: %zu now: %zu)",
- (hit_limit_before ? "again" : ""),
+ ss.print("Hit MemLimit %s(limit: %zu now: %zu)",
+ (hit_limit_before ? "again " : ""),
arena_stat->limit(), arena_stat->peak_since_start());
}
@@ -533,6 +560,8 @@ void CompilationMemoryStatistic::on_arena_change(ssize_t diff, const Arena* aren
} else {
inform_compilation_about_oom(ct);
}
+
+ arena_stat->set_limit_in_process(false);
}
}
}
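
The new flag guards against reentrancy: reporting a limit hit can itself allocate arena memory, which would call on_arena_change() again. A standalone sketch of the guard shape, outside the patch; a single-threaded toy, whereas the real counter is per compiler thread.

#include <cstdio>

struct Counter { bool limit_in_process = false; };
static Counter counter;

static void on_arena_change(int depth) {
  if (counter.limit_in_process) {
    printf("nested call (depth %d) returns early\n", depth);
    return;
  }
  counter.limit_in_process = true;
  if (depth == 0) {
    on_arena_change(depth + 1); // reporting allocates, re-entering here
  }
  counter.limit_in_process = false;
}

int main() { on_arena_change(0); return 0; }
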
diff --git a/src/hotspot/share/compiler/compilationMemoryStatistic.hpp b/src/hotspot/share/compiler/compilationMemoryStatistic.hpp
index 625875f05f4d6..d7cccad17fda8 100644
--- a/src/hotspot/share/compiler/compilationMemoryStatistic.hpp
+++ b/src/hotspot/share/compiler/compilationMemoryStatistic.hpp
@@ -50,6 +50,7 @@ class ArenaStatCounter : public CHeapObj<mtCompiler> {
// MemLimit handling
size_t _limit;
bool _hit_limit;
+ bool _limit_in_process;
// Peak composition:
// Size of node arena when total peaked (c2 only)
@@ -86,6 +87,9 @@ class ArenaStatCounter : public CHeapObj<mtCompiler> {
size_t limit() const { return _limit; }
bool hit_limit() const { return _hit_limit; }
+ bool limit_in_process() const { return _limit_in_process; }
+ void set_limit_in_process(bool v) { _limit_in_process = v; }
+
};
class CompilationMemoryStatistic : public AllStatic {
diff --git a/src/hotspot/share/compiler/compileBroker.cpp b/src/hotspot/share/compiler/compileBroker.cpp
index 0935ea6a8e132..45b0b331e2513 100644
--- a/src/hotspot/share/compiler/compileBroker.cpp
+++ b/src/hotspot/share/compiler/compileBroker.cpp
@@ -38,6 +38,7 @@
#include "compiler/compilerEvent.hpp"
#include "compiler/compilerOracle.hpp"
#include "compiler/directivesParser.hpp"
+#include "gc/shared/memAllocator.hpp"
#include "interpreter/linkResolver.hpp"
#include "jvm.h"
#include "jfr/jfrEvents.hpp"
@@ -1171,13 +1172,11 @@ void CompileBroker::compile_method_base(const methodHandle& method,
tty->cr();
}
- if (compile_reason != CompileTask::Reason_DirectivesChanged) {
- // A request has been made for compilation. Before we do any
- // real work, check to see if the method has been compiled
- // in the meantime with a definitive result.
- if (compilation_is_complete(method, osr_bci, comp_level)) {
- return;
- }
+ // A request has been made for compilation. Before we do any
+ // real work, check to see if the method has been compiled
+ // in the meantime with a definitive result.
+ if (compilation_is_complete(method, osr_bci, comp_level)) {
+ return;
}
#ifndef PRODUCT
@@ -1222,13 +1221,11 @@ void CompileBroker::compile_method_base(const methodHandle& method,
return;
}
- if (compile_reason != CompileTask::Reason_DirectivesChanged) {
- // We need to check again to see if the compilation has
- // completed. A previous compilation may have registered
- // some result.
- if (compilation_is_complete(method, osr_bci, comp_level)) {
- return;
- }
+ // We need to check again to see if the compilation has
+ // completed. A previous compilation may have registered
+ // some result.
+ if (compilation_is_complete(method, osr_bci, comp_level)) {
+ return;
}
// We now know that this compilation is not pending, complete,
@@ -1377,7 +1374,7 @@ nmethod* CompileBroker::compile_method(const methodHandle& method, int osr_bci,
if (osr_bci == InvocationEntryBci) {
// standard compilation
nmethod* method_code = method->code();
- if (method_code != nullptr && (compile_reason != CompileTask::Reason_DirectivesChanged)) {
+ if (method_code != nullptr) {
if (compilation_is_complete(method, osr_bci, comp_level)) {
return method_code;
}
@@ -1396,6 +1393,7 @@ nmethod* CompileBroker::compile_method(const methodHandle& method, int osr_bci,
assert(!HAS_PENDING_EXCEPTION, "No exception should be present");
// some prerequisites that are compiler specific
if (comp->is_c2() || comp->is_jvmci()) {
+ InternalOOMEMark iom(THREAD);
method->constants()->resolve_string_constants(CHECK_AND_CLEAR_NONASYNC_NULL);
// Resolve all classes seen in the signature of the method
// we are compiling.
diff --git a/src/hotspot/share/compiler/compileTask.hpp b/src/hotspot/share/compiler/compileTask.hpp
index 137e9843c351e..37459bd0ff52f 100644
--- a/src/hotspot/share/compiler/compileTask.hpp
+++ b/src/hotspot/share/compiler/compileTask.hpp
@@ -62,7 +62,6 @@ class CompileTask : public CHeapObj<mtCompiler> {
Reason_Whitebox, // Whitebox API
Reason_MustBeCompiled, // Used for -Xcomp or AlwaysCompileLoopMethods (see CompilationPolicy::must_be_compiled())
Reason_Bootstrap, // JVMCI bootstrap
- Reason_DirectivesChanged, // Changed CompilerDirectivesStack
Reason_Count
};
@@ -75,8 +74,7 @@ class CompileTask : public CHeapObj<mtCompiler> {
"replay",
"whitebox",
"must_be_compiled",
- "bootstrap",
- "directives_changed"
+ "bootstrap"
};
return reason_names[compile_reason];
}
diff --git a/src/hotspot/share/compiler/compilerDirectives.cpp b/src/hotspot/share/compiler/compilerDirectives.cpp
index f5ccd9e69e929..b48eb4fb817d8 100644
--- a/src/hotspot/share/compiler/compilerDirectives.cpp
+++ b/src/hotspot/share/compiler/compilerDirectives.cpp
@@ -747,25 +747,6 @@ void DirectivesStack::release(CompilerDirectives* dir) {
}
}
-bool DirectivesStack::hasMatchingDirectives(const methodHandle& method, bool top_only) {
- assert(_depth > 0, "Must never be empty");
- MutexLocker locker(DirectivesStack_lock, Mutex::_no_safepoint_check_flag);
-
- CompilerDirectives* dir = _top;
- assert(dir != nullptr, "Must be initialized");
-
- while (dir != nullptr) {
- if (!dir->is_default_directive() && dir->match(method)) {
- return true;
- }
- if (top_only) {
- break;
- }
- dir = dir->next();
- }
- return false;
-}
-
DirectiveSet* DirectivesStack::getMatchingDirective(const methodHandle& method, AbstractCompiler *comp) {
assert(_depth > 0, "Must never be empty");
diff --git a/src/hotspot/share/compiler/compilerDirectives.hpp b/src/hotspot/share/compiler/compilerDirectives.hpp
index 0b1090406b904..bf15fe9c71397 100644
--- a/src/hotspot/share/compiler/compilerDirectives.hpp
+++ b/src/hotspot/share/compiler/compilerDirectives.hpp
@@ -114,7 +114,6 @@ class DirectivesStack : AllStatic {
public:
static void init();
static DirectiveSet* getMatchingDirective(const methodHandle& mh, AbstractCompiler* comp);
- static bool hasMatchingDirectives(const methodHandle& method, bool top_only = false);
static DirectiveSet* getDefaultDirective(AbstractCompiler* comp);
static void push(CompilerDirectives* directive);
static void pop(int count);
diff --git a/src/hotspot/share/compiler/compilerOracle.cpp b/src/hotspot/share/compiler/compilerOracle.cpp
index ea09e3ebb7f75..a81d60c9fc422 100644
--- a/src/hotspot/share/compiler/compilerOracle.cpp
+++ b/src/hotspot/share/compiler/compilerOracle.cpp
@@ -40,8 +40,26 @@
#include "runtime/handles.inline.hpp"
#include "runtime/jniHandles.hpp"
#include "runtime/os.hpp"
+#include "utilities/istream.hpp"
#include "utilities/parseInteger.hpp"
+// Default compile commands, if defined, are parsed before any of the
+// explicitly defined compile commands. Thus, explicitly defined compile
+// commands take precedence over default compile commands. The effect is
+// as if the default compile commands had been specified at the start of
+// the command line.
+static const char* const default_compile_commands[] = {
+#ifdef ASSERT
+ // In debug builds, impose a (generous) per-compilation memory limit
+ // to catch pathological compilations during testing. The suboption
+ // "crash" will cause the JVM to assert.
+ //
+ // Note: to disable the default limit at the command line,
+ // set a limit of 0 (e.g. -XX:CompileCommand=MemLimit,*.*,0).
+ "MemLimit,*.*,1G~crash",
+#endif
+ nullptr };
+
static const char* optiontype_names[] = {
#define enum_of_types(type, name) name,
OPTION_TYPES(enum_of_types)
@@ -905,6 +923,14 @@ class LineCopy : StackObj {
}
};
+bool CompilerOracle::parse_from_line_quietly(char* line) {
+ const bool quiet0 = _quiet;
+ _quiet = true;
+ const bool result = parse_from_line(line);
+ _quiet = quiet0;
+ return result;
+}
+
bool CompilerOracle::parse_from_line(char* line) {
if ((line[0] == '\0') || (line[0] == '#')) {
return true;
@@ -1056,57 +1082,39 @@ bool CompilerOracle::parse_from_file() {
return true;
}
- char token[1024];
- int pos = 0;
- int c = getc(stream);
- bool success = true;
- while(c != EOF && pos < (int)(sizeof(token)-1)) {
- if (c == '\n') {
- token[pos++] = '\0';
- if (!parse_from_line(token)) {
- success = false;
- }
- pos = 0;
- } else {
- token[pos++] = c;
- }
- c = getc(stream);
- }
- token[pos++] = '\0';
- if (!parse_from_line(token)) {
- success = false;
- }
- fclose(stream);
- return success;
+ FileInput input(stream, /*need_close=*/ true);
+ return parse_from_input(&input, parse_from_line);
}
-bool CompilerOracle::parse_from_string(const char* str, bool (*parse_line)(char*)) {
- char token[1024];
- int pos = 0;
- const char* sp = str;
- int c = *sp++;
+bool CompilerOracle::parse_from_input(inputStream::Input* input,
+ CompilerOracle::
+ parse_from_line_fn_t* parse_from_line) {
bool success = true;
- while (c != '\0' && pos < (int)(sizeof(token)-1)) {
- if (c == '\n') {
- token[pos++] = '\0';
- if (!parse_line(token)) {
- success = false;
- }
- pos = 0;
- } else {
- token[pos++] = c;
+ for (inputStream in(input); !in.done(); in.next()) {
+ if (!parse_from_line(in.current_line())) {
+ success = false;
}
- c = *sp++;
- }
- token[pos++] = '\0';
- if (!parse_line(token)) {
- success = false;
}
return success;
}
+bool CompilerOracle::parse_from_string(const char* str,
+ CompilerOracle::
+ parse_from_line_fn_t* parse_from_line) {
+ MemoryInput input(str, strlen(str));
+ return parse_from_input(&input, parse_from_line);
+}
+
bool compilerOracle_init() {
bool success = true;
+ // Register default compile commands first - any commands specified via CompileCommand will
+ // supersede these default commands.
+ for (int i = 0; default_compile_commands[i] != nullptr; i++) {
+ char* s = os::strdup(default_compile_commands[i]);
+ success = CompilerOracle::parse_from_line_quietly(s);
+ os::free(s);
+ assert(success, "default compile command \"%s\" failed to parse", default_compile_commands[i]);
+ }
if (!CompilerOracle::parse_from_string(CompileCommand, CompilerOracle::parse_from_line)) {
success = false;
}
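
The rewrite replaces two hand-rolled tokenizers with one line-oriented core parser, so lines are no longer silently cut at the old 1024-byte token buffer, and default commands are parsed quietly first so anything given via -XX:CompileCommand wins. A sketch of the new parsing shape, outside the patch, with std::istringstream standing in for HotSpot's inputStream/MemoryInput.

#include <iostream>
#include <sstream>
#include <string>

static bool parse_from_line(const std::string& line) {
  if (line.empty() || line[0] == '#') return true; // blank or comment
  std::cout << "command: " << line << '\n';
  return true;
}

int main() {
  std::istringstream in("MemLimit,*.*,1G~crash\n# comment\nquiet\n");
  bool success = true;
  for (std::string line; std::getline(in, line); ) {
    if (!parse_from_line(line)) success = false; // keep going, remember failure
  }
  return success ? 0 : 1;
}
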
diff --git a/src/hotspot/share/compiler/compilerOracle.hpp b/src/hotspot/share/compiler/compilerOracle.hpp
index 43a30b30facec..1a85e0629f93e 100644
--- a/src/hotspot/share/compiler/compilerOracle.hpp
+++ b/src/hotspot/share/compiler/compilerOracle.hpp
@@ -27,6 +27,7 @@
#include "memory/allStatic.hpp"
#include "oops/oopsHierarchy.hpp"
+#include "utilities/istream.hpp"
class methodHandle;
@@ -120,11 +121,18 @@ enum class MemStatAction {
};
class CompilerOracle : AllStatic {
+ public:
+ typedef bool parse_from_line_fn_t(char*);
+
private:
static bool _quiet;
static void print_parse_error(char* error_msg, char* original_line);
static void print_command(CompileCommandEnum option, const char* name, enum OptionType type);
+ // The core parser.
+ static bool parse_from_input(inputStream::Input* input,
+ parse_from_line_fn_t* parse_from_line);
+
public:
// True if the command file has been specified or is implicit
static bool has_command_file();
@@ -177,8 +185,10 @@ class CompilerOracle : AllStatic {
static bool option_matches_type(CompileCommandEnum option, T& value);
// Reads from string instead of file
- static bool parse_from_string(const char* option_string, bool (*parser)(char*));
+ static bool parse_from_string(const char* option_string,
+ parse_from_line_fn_t* parser);
static bool parse_from_line(char* line);
+ static bool parse_from_line_quietly(char* line);
static bool parse_compile_only(char* line);
// Fast check if there is any option set that compile control needs to know about
diff --git a/src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp b/src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp
index 53d4aa221f214..f82b5cfcc556a 100644
--- a/src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp
+++ b/src/hotspot/share/gc/g1/c1/g1BarrierSetC1.cpp
@@ -158,13 +158,13 @@ void G1BarrierSetC1::post_barrier(LIRAccess& access, LIR_Opr addr, LIR_Opr new_v
__ logical_xor(xor_res, new_val, xor_res);
__ move(xor_res, xor_shift_res);
__ unsigned_shift_right(xor_shift_res,
- LIR_OprFact::intConst(checked_cast<jint>(HeapRegion::LogOfHRGrainBytes)),
+ LIR_OprFact::intConst(checked_cast<jint>(G1HeapRegion::LogOfHRGrainBytes)),
xor_shift_res,
LIR_Opr::illegalOpr());
} else {
__ logical_xor(addr, new_val, xor_res);
__ unsigned_shift_right(xor_res,
- LIR_OprFact::intConst(checked_cast<jint>(HeapRegion::LogOfHRGrainBytes)),
+ LIR_OprFact::intConst(checked_cast<jint>(G1HeapRegion::LogOfHRGrainBytes)),
xor_shift_res,
LIR_Opr::illegalOpr());
}
diff --git a/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp b/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp
index 807e9aa9c5938..13b993546cde4 100644
--- a/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp
+++ b/src/hotspot/share/gc/g1/c2/g1BarrierSetC2.cpp
@@ -453,8 +453,8 @@ void G1BarrierSetC2::post_barrier(GraphKit* kit,
// Should be able to do an unsigned compare of region_size instead of
// and extra shift. Do we have an unsigned compare??
- // Node* region_size = __ ConI(1 << HeapRegion::LogOfHRGrainBytes);
- Node* xor_res = __ URShiftX ( __ XorX( cast, __ CastPX(__ ctrl(), val)), __ ConI(checked_cast<jint>(HeapRegion::LogOfHRGrainBytes)));
+ // Node* region_size = __ ConI(1 << G1HeapRegion::LogOfHRGrainBytes);
+ Node* xor_res = __ URShiftX ( __ XorX( cast, __ CastPX(__ ctrl(), val)), __ ConI(checked_cast<jint>(G1HeapRegion::LogOfHRGrainBytes)));
// if (xor_res == 0) same region so skip
__ if_then(xor_res, BoolTest::ne, zeroX, likely); {
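
Both the C1 and C2 barriers rely on the same trick: two pointers lie in the same G1 region iff their XOR has no bits at or above the region-size log. A standalone check, outside the patch; 1 MiB regions (log 20) and the addresses are assumed for illustration.

#include <cstdint>
#include <cstdio>

int main() {
  const unsigned log_region = 20;        // hypothetical: 1 MiB regions
  const uintptr_t field    = 0x10012345;
  const uintptr_t near_val = 0x100ffff0; // same 1 MiB region as 'field'
  const uintptr_t far_val  = 0x20000000; // different region
  printf("near: same region = %d\n", ((field ^ near_val) >> log_region) == 0);
  printf("far:  same region = %d\n", ((field ^ far_val)  >> log_region) == 0);
  return 0;
}
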
diff --git a/src/hotspot/share/gc/g1/g1AllocRegion.cpp b/src/hotspot/share/gc/g1/g1AllocRegion.cpp
index f20f24bcde42b..a205cf71ee673 100644
--- a/src/hotspot/share/gc/g1/g1AllocRegion.cpp
+++ b/src/hotspot/share/gc/g1/g1AllocRegion.cpp
@@ -34,9 +34,9 @@
#include "utilities/align.hpp"
G1CollectedHeap* G1AllocRegion::_g1h = nullptr;
-HeapRegion* G1AllocRegion::_dummy_region = nullptr;
+G1HeapRegion* G1AllocRegion::_dummy_region = nullptr;
-void G1AllocRegion::setup(G1CollectedHeap* g1h, HeapRegion* dummy_region) {
+void G1AllocRegion::setup(G1CollectedHeap* g1h, G1HeapRegion* dummy_region) {
assert(_dummy_region == nullptr, "should be set once");
assert(dummy_region != nullptr, "pre-condition");
assert(dummy_region->free() == 0, "pre-condition");
@@ -50,7 +50,7 @@ void G1AllocRegion::setup(G1CollectedHeap* g1h, HeapRegion* dummy_region) {
_dummy_region = dummy_region;
}
-size_t G1AllocRegion::fill_up_remaining_space(HeapRegion* alloc_region) {
+size_t G1AllocRegion::fill_up_remaining_space(G1HeapRegion* alloc_region) {
assert(alloc_region != nullptr && alloc_region != _dummy_region,
"pre-condition");
size_t result = 0;
@@ -94,7 +94,7 @@ size_t G1AllocRegion::fill_up_remaining_space(HeapRegion* alloc_region) {
return result;
}
-size_t G1AllocRegion::retire_internal(HeapRegion* alloc_region, bool fill_up) {
+size_t G1AllocRegion::retire_internal(G1HeapRegion* alloc_region, bool fill_up) {
// We never have to check whether the active region is empty or not,
// and potentially free it if it is, given that it's guaranteed that
// it will never be empty.
@@ -120,7 +120,7 @@ size_t G1AllocRegion::retire(bool fill_up) {
size_t waste = 0;
trace("retiring");
- HeapRegion* alloc_region = _alloc_region;
+ G1HeapRegion* alloc_region = _alloc_region;
if (alloc_region != _dummy_region) {
waste = retire_internal(alloc_region, fill_up);
reset_alloc_region();
@@ -130,13 +130,12 @@ size_t G1AllocRegion::retire(bool fill_up) {
return waste;
}
-HeapWord* G1AllocRegion::new_alloc_region_and_allocate(size_t word_size,
- bool force) {
+HeapWord* G1AllocRegion::new_alloc_region_and_allocate(size_t word_size) {
assert_alloc_region(_alloc_region == _dummy_region, "pre-condition");
assert_alloc_region(_used_bytes_before == 0, "pre-condition");
trace("attempting region allocation");
- HeapRegion* new_alloc_region = allocate_new_region(word_size, force);
+ G1HeapRegion* new_alloc_region = allocate_new_region(word_size);
if (new_alloc_region != nullptr) {
new_alloc_region->reset_pre_dummy_top();
// Need to do this before the allocation
@@ -167,7 +166,7 @@ void G1AllocRegion::init() {
trace("initialized");
}
-void G1AllocRegion::set(HeapRegion* alloc_region) {
+void G1AllocRegion::set(G1HeapRegion* alloc_region) {
trace("setting");
// We explicitly check that the region is not empty to make sure we
// maintain the "the alloc region cannot be empty" invariant.
@@ -182,7 +181,7 @@ void G1AllocRegion::set(HeapRegion* alloc_region) {
trace("set");
}
-void G1AllocRegion::update_alloc_region(HeapRegion* alloc_region) {
+void G1AllocRegion::update_alloc_region(G1HeapRegion* alloc_region) {
trace("update");
// We explicitly check that the region is not empty to make sure we
// maintain the "the alloc region cannot be empty" invariant.
@@ -193,9 +192,9 @@ void G1AllocRegion::update_alloc_region(HeapRegion* alloc_region) {
trace("updated");
}
-HeapRegion* G1AllocRegion::release() {
+G1HeapRegion* G1AllocRegion::release() {
trace("releasing");
- HeapRegion* alloc_region = _alloc_region;
+ G1HeapRegion* alloc_region = _alloc_region;
retire(false /* fill_up */);
assert_alloc_region(_alloc_region == _dummy_region, "post-condition of retire()");
_alloc_region = nullptr;
@@ -258,12 +257,11 @@ G1AllocRegion::G1AllocRegion(const char* name,
_node_index(node_index)
{ }
-HeapRegion* MutatorAllocRegion::allocate_new_region(size_t word_size,
- bool force) {
- return _g1h->new_mutator_alloc_region(word_size, force, _node_index);
+G1HeapRegion* MutatorAllocRegion::allocate_new_region(size_t word_size) {
+ return _g1h->new_mutator_alloc_region(word_size, _node_index);
}
-void MutatorAllocRegion::retire_region(HeapRegion* alloc_region,
+void MutatorAllocRegion::retire_region(G1HeapRegion* alloc_region,
size_t allocated_bytes) {
_g1h->retire_mutator_alloc_region(alloc_region, allocated_bytes);
}
@@ -274,7 +272,7 @@ void MutatorAllocRegion::init() {
_wasted_bytes = 0;
}
-bool MutatorAllocRegion::should_retain(HeapRegion* region) {
+bool MutatorAllocRegion::should_retain(G1HeapRegion* region) {
size_t free_bytes = region->free();
if (free_bytes < MinTLABSize) {
return false;
@@ -291,7 +289,7 @@ bool MutatorAllocRegion::should_retain(HeapRegion* region) {
size_t MutatorAllocRegion::retire(bool fill_up) {
size_t waste = 0;
trace("retiring");
- HeapRegion* current_region = get();
+ G1HeapRegion* current_region = get();
if (current_region != nullptr) {
// Retain the current region if it fits a TLAB and has more
// free than the currently retained region.
@@ -314,7 +312,7 @@ size_t MutatorAllocRegion::retire(bool fill_up) {
size_t MutatorAllocRegion::used_in_alloc_regions() {
size_t used = 0;
- HeapRegion* hr = get();
+ G1HeapRegion* hr = get();
if (hr != nullptr) {
used += hr->used();
}
@@ -326,8 +324,8 @@ size_t MutatorAllocRegion::used_in_alloc_regions() {
return used;
}
-HeapRegion* MutatorAllocRegion::release() {
- HeapRegion* ret = G1AllocRegion::release();
+G1HeapRegion* MutatorAllocRegion::release() {
+ G1HeapRegion* ret = G1AllocRegion::release();
// The retained alloc region must be retired and this must be
// done after the above call to release the mutator alloc region,
@@ -340,23 +338,21 @@ HeapRegion* MutatorAllocRegion::release() {
count(),
byte_size_in_proper_unit(_wasted_bytes),
proper_unit_for_byte_size(_wasted_bytes),
- percent_of(_wasted_bytes, count() * HeapRegion::GrainBytes));
+ percent_of(_wasted_bytes, count() * G1HeapRegion::GrainBytes));
return ret;
}
-HeapRegion* G1GCAllocRegion::allocate_new_region(size_t word_size,
- bool force) {
- assert(!force, "not supported for GC alloc regions");
+G1HeapRegion* G1GCAllocRegion::allocate_new_region(size_t word_size) {
return _g1h->new_gc_alloc_region(word_size, _purpose, _node_index);
}
-void G1GCAllocRegion::retire_region(HeapRegion* alloc_region,
+void G1GCAllocRegion::retire_region(G1HeapRegion* alloc_region,
size_t allocated_bytes) {
_g1h->retire_gc_alloc_region(alloc_region, allocated_bytes, _purpose);
}
size_t G1GCAllocRegion::retire(bool fill_up) {
- HeapRegion* retired = get();
+ G1HeapRegion* retired = get();
size_t end_waste = G1AllocRegion::retire(fill_up);
// Do not count retirement of the dummy allocation region.
if (retired != nullptr) {
diff --git a/src/hotspot/share/gc/g1/g1AllocRegion.hpp b/src/hotspot/share/gc/g1/g1AllocRegion.hpp
index b39a717d3b708..a8903fd54f148 100644
--- a/src/hotspot/share/gc/g1/g1AllocRegion.hpp
+++ b/src/hotspot/share/gc/g1/g1AllocRegion.hpp
@@ -52,7 +52,7 @@ class G1AllocRegion : public CHeapObj<mtGC> {
// then _alloc_region is null and this object should not be used to
// satisfy allocation requests (it was done this way to force the
// correct use of init() and release()).
- HeapRegion* volatile _alloc_region;
+ G1HeapRegion* volatile _alloc_region;
// It keeps track of the distinct number of regions that are used
// for allocation in the active interval of this object, i.e.,
@@ -76,17 +76,15 @@ class G1AllocRegion : public CHeapObj<mtGC> {
// == end()). When we don't have a valid active region we make
// _alloc_region point to this. This allows us to skip checking
// whether the _alloc_region is null or not.
- static HeapRegion* _dummy_region;
+ static G1HeapRegion* _dummy_region;
// After a region is allocated by alloc_new_region, this
// method is used to set it as the active alloc_region
- void update_alloc_region(HeapRegion* alloc_region);
+ void update_alloc_region(G1HeapRegion* alloc_region);
// Allocate a new active region and use it to perform a word_size
- // allocation. The force parameter will be passed on to
- // G1CollectedHeap::allocate_new_alloc_region() and tells it to try
- // to allocate a new region even if the max has been reached.
- HeapWord* new_alloc_region_and_allocate(size_t word_size, bool force);
+ // allocation.
+ HeapWord* new_alloc_region_and_allocate(size_t word_size);
// Perform an allocation out of a new allocation region, retiring the current one.
inline HeapWord* attempt_allocation_using_new_region(size_t min_word_size,
@@ -100,17 +98,17 @@ class G1AllocRegion : public CHeapObj<mtGC> {
void reset_alloc_region();
// Perform a non-MT-safe allocation out of the given region.
- inline HeapWord* allocate(HeapRegion* alloc_region,
+ inline HeapWord* allocate(G1HeapRegion* alloc_region,
size_t word_size);
// Perform a MT-safe allocation out of the given region.
- inline HeapWord* par_allocate(HeapRegion* alloc_region,
+ inline HeapWord* par_allocate(G1HeapRegion* alloc_region,
size_t word_size);
// Perform a MT-safe allocation out of the given region, with the given
// minimum and desired size. Returns the actual size allocated (between
// minimum and desired size) in actual_word_size if the allocation has been
// successful.
- inline HeapWord* par_allocate(HeapRegion* alloc_region,
+ inline HeapWord* par_allocate(G1HeapRegion* alloc_region,
size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size);
@@ -119,7 +117,7 @@ class G1AllocRegion : public CHeapObj<mtGC> {
// so that no one else can allocate out of it any more.
// Returns the number of bytes that have been wasted by filled up
// the space.
- size_t fill_up_remaining_space(HeapRegion* alloc_region);
+ size_t fill_up_remaining_space(G1HeapRegion* alloc_region);
// Retire the active allocating region. If fill_up is true then make
// sure that the region is full before we retire it so that no one
@@ -127,22 +125,22 @@ class G1AllocRegion : public CHeapObj<mtGC> {
// Returns the number of bytes that have been filled up during retire.
virtual size_t retire(bool fill_up);
- size_t retire_internal(HeapRegion* alloc_region, bool fill_up);
+ size_t retire_internal(G1HeapRegion* alloc_region, bool fill_up);
// For convenience as subclasses use it.
static G1CollectedHeap* _g1h;
- virtual HeapRegion* allocate_new_region(size_t word_size, bool force) = 0;
- virtual void retire_region(HeapRegion* alloc_region,
+ virtual G1HeapRegion* allocate_new_region(size_t word_size) = 0;
+ virtual void retire_region(G1HeapRegion* alloc_region,
size_t allocated_bytes) = 0;
G1AllocRegion(const char* name, bool bot_updates, uint node_index);
public:
- static void setup(G1CollectedHeap* g1h, HeapRegion* dummy_region);
+ static void setup(G1CollectedHeap* g1h, G1HeapRegion* dummy_region);
- HeapRegion* get() const {
- HeapRegion * hr = _alloc_region;
+ G1HeapRegion* get() const {
+ G1HeapRegion * hr = _alloc_region;
// Make sure that the dummy region does not escape this class.
return (hr == _dummy_region) ? nullptr : hr;
}
@@ -172,12 +170,6 @@ class G1AllocRegion : public CHeapObj<mtGC> {
size_t desired_word_size,
size_t* actual_word_size);
- // Should be called to allocate a new region even if the max of this
- // type of regions has been reached. Should only be called if other
- // allocation attempts have failed and we are not holding a valid
- // active region.
- inline HeapWord* attempt_allocation_force(size_t word_size);
-
// Should be called before we start using this object.
virtual void init();
@@ -185,11 +177,11 @@ class G1AllocRegion : public CHeapObj<mtGC> {
// region. (Use Example: we try to retain the last old GC alloc
// region that we've used during a GC and we can use set() to
// re-instate it at the beginning of the next GC.)
- void set(HeapRegion* alloc_region);
+ void set(G1HeapRegion* alloc_region);
// Should be called when we want to release the active region which
// is returned after it's been retired.
- virtual HeapRegion* release();
+ virtual G1HeapRegion* release();
void trace(const char* str,
size_t min_word_size = 0,
@@ -207,15 +199,16 @@ class MutatorAllocRegion : public G1AllocRegion {
// Retained allocation region. Used to lower the waste generated
// during mutation by having two active regions if the free space
// in a region about to be retired still could fit a TLAB.
- HeapRegion* volatile _retained_alloc_region;
+ G1HeapRegion* volatile _retained_alloc_region;
// Decide if the region should be retained, based on the free size
// in it and the free size in the currently retained region, if any.
- bool should_retain(HeapRegion* region);
+ bool should_retain(G1HeapRegion* region);
protected:
- virtual HeapRegion* allocate_new_region(size_t word_size, bool force);
- virtual void retire_region(HeapRegion* alloc_region, size_t allocated_bytes);
+ virtual G1HeapRegion* allocate_new_region(size_t word_size);
+ virtual void retire_region(G1HeapRegion* alloc_region, size_t allocated_bytes);
virtual size_t retire(bool fill_up);
+
public:
MutatorAllocRegion(uint node_index)
: G1AllocRegion("Mutator Alloc Region", false /* bot_updates */, node_index),
@@ -238,7 +231,7 @@ class MutatorAllocRegion : public G1AllocRegion {
// This specialization of release() makes sure that the retained alloc
// region is retired and set to null.
- virtual HeapRegion* release();
+ virtual G1HeapRegion* release();
virtual void init();
};
@@ -249,8 +242,8 @@ class G1GCAllocRegion : public G1AllocRegion {
G1EvacStats* _stats;
G1HeapRegionAttr::region_type_t _purpose;
- virtual HeapRegion* allocate_new_region(size_t word_size, bool force);
- virtual void retire_region(HeapRegion* alloc_region, size_t allocated_bytes);
+ virtual G1HeapRegion* allocate_new_region(size_t word_size);
+ virtual void retire_region(G1HeapRegion* alloc_region, size_t allocated_bytes);
virtual size_t retire(bool fill_up);
diff --git a/src/hotspot/share/gc/g1/g1AllocRegion.inline.hpp b/src/hotspot/share/gc/g1/g1AllocRegion.inline.hpp
index 00ef254e84c37..ba4f1a12628d0 100644
--- a/src/hotspot/share/gc/g1/g1AllocRegion.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1AllocRegion.inline.hpp
@@ -41,19 +41,19 @@ inline void G1AllocRegion::reset_alloc_region() {
_alloc_region = _dummy_region;
}
-inline HeapWord* G1AllocRegion::allocate(HeapRegion* alloc_region,
+inline HeapWord* G1AllocRegion::allocate(G1HeapRegion* alloc_region,
size_t word_size) {
assert(alloc_region != nullptr, "pre-condition");
return alloc_region->allocate(word_size);
}
-inline HeapWord* G1AllocRegion::par_allocate(HeapRegion* alloc_region, size_t word_size) {
+inline HeapWord* G1AllocRegion::par_allocate(G1HeapRegion* alloc_region, size_t word_size) {
size_t temp;
return par_allocate(alloc_region, word_size, word_size, &temp);
}
-inline HeapWord* G1AllocRegion::par_allocate(HeapRegion* alloc_region,
+inline HeapWord* G1AllocRegion::par_allocate(G1HeapRegion* alloc_region,
size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size) {
@@ -66,7 +66,7 @@ inline HeapWord* G1AllocRegion::par_allocate(HeapRegion* alloc_region,
inline HeapWord* G1AllocRegion::attempt_allocation(size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size) {
- HeapRegion* alloc_region = _alloc_region;
+ G1HeapRegion* alloc_region = _alloc_region;
assert_alloc_region(alloc_region != nullptr, "not initialized properly");
HeapWord* result = par_allocate(alloc_region, min_word_size, desired_word_size, actual_word_size);
@@ -98,7 +98,7 @@ inline HeapWord* G1AllocRegion::attempt_allocation_using_new_region(size_t min_w
size_t desired_word_size,
size_t* actual_word_size) {
retire(true /* fill_up */);
- HeapWord* result = new_alloc_region_and_allocate(desired_word_size, false /* force */);
+ HeapWord* result = new_alloc_region_and_allocate(desired_word_size);
if (result != nullptr) {
*actual_word_size = desired_word_size;
trace("alloc locked (second attempt)", min_word_size, desired_word_size, *actual_word_size, result);
@@ -108,19 +108,6 @@ inline HeapWord* G1AllocRegion::attempt_allocation_using_new_region(size_t min_w
return nullptr;
}
-inline HeapWord* G1AllocRegion::attempt_allocation_force(size_t word_size) {
- assert_alloc_region(_alloc_region != nullptr, "not initialized properly");
-
- trace("forcing alloc", word_size, word_size);
- HeapWord* result = new_alloc_region_and_allocate(word_size, true /* force */);
- if (result != nullptr) {
- trace("alloc forced", word_size, word_size, word_size, result);
- return result;
- }
- trace("alloc forced failed", word_size, word_size);
- return nullptr;
-}
-
inline HeapWord* MutatorAllocRegion::attempt_retained_allocation(size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size) {
diff --git a/src/hotspot/share/gc/g1/g1Allocator.cpp b/src/hotspot/share/gc/g1/g1Allocator.cpp
index 3964ad8c8af3b..05c64287ec0eb 100644
--- a/src/hotspot/share/gc/g1/g1Allocator.cpp
+++ b/src/hotspot/share/gc/g1/g1Allocator.cpp
@@ -29,6 +29,7 @@
#include "gc/g1/g1EvacStats.inline.hpp"
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
+#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1HeapRegionSet.inline.hpp"
#include "gc/g1/g1HeapRegionType.hpp"
#include "gc/g1/g1NUMA.hpp"
@@ -88,14 +89,14 @@ void G1Allocator::release_mutator_alloc_regions() {
}
}
-bool G1Allocator::is_retained_old_region(HeapRegion* hr) {
+bool G1Allocator::is_retained_old_region(G1HeapRegion* hr) {
return _retained_old_gc_alloc_region == hr;
}
void G1Allocator::reuse_retained_old_region(G1EvacInfo* evacuation_info,
OldGCAllocRegion* old,
- HeapRegion** retained_old) {
- HeapRegion* retained_region = *retained_old;
+ G1HeapRegion** retained_old) {
+ G1HeapRegion* retained_region = *retained_old;
*retained_old = nullptr;
// We will discard the current GC alloc region if:
@@ -118,7 +119,7 @@ void G1Allocator::reuse_retained_old_region(G1EvacInfo* evacuation_info,
// it's retired again.
_g1h->old_set_remove(retained_region);
old->set(retained_region);
- _g1h->hr_printer()->reuse(retained_region);
+ G1HeapRegionPrinter::reuse(retained_region);
evacuation_info->set_alloc_regions_used_before(retained_region->used());
}
}
@@ -189,7 +190,7 @@ size_t G1Allocator::unsafe_max_tlab_alloc() {
// humongous objects.
uint node_index = current_node_index();
- HeapRegion* hr = mutator_alloc_region(node_index)->get();
+ G1HeapRegion* hr = mutator_alloc_region(node_index)->get();
size_t max_tlab = _g1h->max_tlab_size() * wordSize;
if (hr == nullptr || hr->free() < MinTLABSize) {
diff --git a/src/hotspot/share/gc/g1/g1Allocator.hpp b/src/hotspot/share/gc/g1/g1Allocator.hpp
index 32f37778a18ad..f9038071bd9d7 100644
--- a/src/hotspot/share/gc/g1/g1Allocator.hpp
+++ b/src/hotspot/share/gc/g1/g1Allocator.hpp
@@ -60,7 +60,7 @@ class G1Allocator : public CHeapObj<mtGC> {
// old objects.
OldGCAllocRegion _old_gc_alloc_region;
- HeapRegion* _retained_old_gc_alloc_region;
+ G1HeapRegion* _retained_old_gc_alloc_region;
bool survivor_is_full() const;
bool old_is_full() const;
@@ -70,7 +70,7 @@ class G1Allocator : public CHeapObj<mtGC> {
void reuse_retained_old_region(G1EvacInfo* evacuation_info,
OldGCAllocRegion* old,
- HeapRegion** retained);
+ G1HeapRegion** retained);
// Accessors to the allocation regions.
inline MutatorAllocRegion* mutator_alloc_region(uint node_index);
@@ -108,7 +108,7 @@ class G1Allocator : public CHeapObj<mtGC> {
void init_gc_alloc_regions(G1EvacInfo* evacuation_info);
void release_gc_alloc_regions(G1EvacInfo* evacuation_info);
void abandon_gc_alloc_regions();
- bool is_retained_old_region(HeapRegion* hr);
+ bool is_retained_old_region(G1HeapRegion* hr);
// Allocate blocks of memory during mutator time.
@@ -121,8 +121,6 @@ class G1Allocator : public CHeapObj<mtGC> {
// current allocation region, and then attempts an allocation using a new region.
inline HeapWord* attempt_allocation_locked(size_t word_size);
- inline HeapWord* attempt_allocation_force(size_t word_size);
-
size_t unsafe_max_tlab_alloc();
size_t used_in_alloc_regions();
diff --git a/src/hotspot/share/gc/g1/g1Allocator.inline.hpp b/src/hotspot/share/gc/g1/g1Allocator.inline.hpp
index 13ae9b9bbbdea..ff84306e74f95 100644
--- a/src/hotspot/share/gc/g1/g1Allocator.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1Allocator.inline.hpp
@@ -71,11 +71,6 @@ inline HeapWord* G1Allocator::attempt_allocation_locked(size_t word_size) {
return result;
}
-inline HeapWord* G1Allocator::attempt_allocation_force(size_t word_size) {
- uint node_index = current_node_index();
- return mutator_alloc_region(node_index)->attempt_allocation_force(word_size);
-}
-
inline PLAB* G1PLABAllocator::alloc_buffer(G1HeapRegionAttr dest, uint node_index) const {
assert(dest.is_valid(),
"Allocation buffer index out of bounds: %s", dest.get_type_str());
diff --git a/src/hotspot/share/gc/g1/g1Arguments.cpp b/src/hotspot/share/gc/g1/g1Arguments.cpp
index d0d9cd3129709..a62eb994486f5 100644
--- a/src/hotspot/share/gc/g1/g1Arguments.cpp
+++ b/src/hotspot/share/gc/g1/g1Arguments.cpp
@@ -55,9 +55,9 @@ void G1Arguments::initialize_alignments() {
// There is a circular dependency here. We base the region size on the heap
// size, but the heap size should be aligned with the region size. To get
// around this we use the unaligned values for the heap.
- HeapRegion::setup_heap_region_size(MaxHeapSize);
+ G1HeapRegion::setup_heap_region_size(MaxHeapSize);
- SpaceAlignment = HeapRegion::GrainBytes;
+ SpaceAlignment = G1HeapRegion::GrainBytes;
HeapAlignment = calculate_heap_alignment(SpaceAlignment);
// We need to initialize card set configuration as soon as heap region size is
@@ -71,7 +71,7 @@ void G1Arguments::initialize_alignments() {
}
size_t G1Arguments::conservative_max_heap_alignment() {
- return HeapRegion::max_region_size();
+ return G1HeapRegion::max_region_size();
}
void G1Arguments::initialize_verification_types() {
@@ -130,22 +130,22 @@ void G1Arguments::initialize_mark_stack_size() {
void G1Arguments::initialize_card_set_configuration() {
- assert(HeapRegion::LogOfHRGrainBytes != 0, "not initialized");
+ assert(G1HeapRegion::LogOfHRGrainBytes != 0, "not initialized");
// Array of Cards card set container globals.
const uint LOG_M = 20;
assert(log2i_exact(HeapRegionBounds::min_size()) == LOG_M, "inv");
- assert(HeapRegion::LogOfHRGrainBytes >= LOG_M, "from the above");
- uint region_size_log_mb = HeapRegion::LogOfHRGrainBytes - LOG_M;
+ assert(G1HeapRegion::LogOfHRGrainBytes >= LOG_M, "from the above");
+ uint region_size_log_mb = G1HeapRegion::LogOfHRGrainBytes - LOG_M;
if (FLAG_IS_DEFAULT(G1RemSetArrayOfCardsEntries)) {
- uint max_cards_in_inline_ptr = G1CardSetConfiguration::max_cards_in_inline_ptr(HeapRegion::LogCardsPerRegion);
+ uint max_cards_in_inline_ptr = G1CardSetConfiguration::max_cards_in_inline_ptr(G1HeapRegion::LogCardsPerRegion);
FLAG_SET_ERGO(G1RemSetArrayOfCardsEntries, MAX2(max_cards_in_inline_ptr * 2,
G1RemSetArrayOfCardsEntriesBase << region_size_log_mb));
}
// Howl card set container globals.
if (FLAG_IS_DEFAULT(G1RemSetHowlNumBuckets)) {
- FLAG_SET_ERGO(G1RemSetHowlNumBuckets, G1CardSetHowl::num_buckets(HeapRegion::CardsPerRegion,
+ FLAG_SET_ERGO(G1RemSetHowlNumBuckets, G1CardSetHowl::num_buckets(G1HeapRegion::CardsPerRegion,
G1RemSetArrayOfCardsEntries,
G1RemSetHowlMaxNumBuckets));
}
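
Side note: the ergonomic defaults above scale with the base-2 log of the region size in megabytes; the base flag value is simply shifted left by region_size_log_mb. A minimal standalone sketch of that scaling, assuming a made-up base value of 8 and region sizes of 1 MB to 32 MB (illustrative only, not JDK code):

// Illustrative sketch of the shift-based scaling in initialize_card_set_configuration().
#include <cstdio>

int main() {
  const unsigned LOG_M = 20;  // log2(1M), as in the patch
  const unsigned base  = 8;   // stand-in for G1RemSetArrayOfCardsEntriesBase (assumed)
  for (unsigned log_region = 20; log_region <= 25; log_region++) {
    unsigned region_size_log_mb = log_region - LOG_M;
    printf("region %2u MB -> default entries %u\n",
           1u << region_size_log_mb, base << region_size_log_mb);
  }
  return 0;
}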
diff --git a/src/hotspot/share/gc/g1/g1BarrierSet.cpp b/src/hotspot/share/gc/g1/g1BarrierSet.cpp
index 8b87fc3d78a3b..019cfb8cb987b 100644
--- a/src/hotspot/share/gc/g1/g1BarrierSet.cpp
+++ b/src/hotspot/share/gc/g1/g1BarrierSet.cpp
@@ -112,7 +112,7 @@ void G1BarrierSet::write_region(JavaThread* thread, MemRegion mr) {
// skip young gen cards
if (*byte == G1CardTable::g1_young_card_val()) {
// MemRegion should not span multiple regions for the young gen.
- DEBUG_ONLY(HeapRegion* containing_hr = G1CollectedHeap::heap()->heap_region_containing(mr.start());)
+ DEBUG_ONLY(G1HeapRegion* containing_hr = G1CollectedHeap::heap()->heap_region_containing(mr.start());)
assert(containing_hr->is_young(), "it should be young");
assert(containing_hr->is_in(mr.start()), "it should contain start");
assert(containing_hr->is_in(mr.last()), "it should also contain last");
diff --git a/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp b/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
index b875f2f5c6d95..7e6b0e56d9346 100644
--- a/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1BlockOffsetTable.inline.hpp
@@ -62,7 +62,8 @@ inline uint8_t* G1BlockOffsetTable::entry_for_addr(const void* const p) const {
}
inline HeapWord* G1BlockOffsetTable::addr_for_entry(const uint8_t* const p) const {
- size_t delta = pointer_delta(p, _offset_base, sizeof(uint8_t));
+ // _offset_base can be "negative", so can't use pointer_delta().
+ size_t delta = p - _offset_base;
HeapWord* result = (HeapWord*) (delta << CardTable::card_shift());
assert(_reserved.contains(result),
"out of bounds accessor from block offset table");
diff --git a/src/hotspot/share/gc/g1/g1CardSet.cpp b/src/hotspot/share/gc/g1/g1CardSet.cpp
index 101822dbc4428..7ab739e0196d0 100644
--- a/src/hotspot/share/gc/g1/g1CardSet.cpp
+++ b/src/hotspot/share/gc/g1/g1CardSet.cpp
@@ -57,24 +57,24 @@ static uint default_log2_card_regions_per_region() {
uint log2_card_regions_per_heap_region = 0;
const uint card_container_limit = G1CardSetContainer::LogCardsPerRegionLimit;
- if (card_container_limit < (uint)HeapRegion::LogCardsPerRegion) {
- log2_card_regions_per_heap_region = (uint)HeapRegion::LogCardsPerRegion - card_container_limit;
+ if (card_container_limit < (uint)G1HeapRegion::LogCardsPerRegion) {
+ log2_card_regions_per_heap_region = (uint)G1HeapRegion::LogCardsPerRegion - card_container_limit;
}
return log2_card_regions_per_heap_region;
}
G1CardSetConfiguration::G1CardSetConfiguration() :
- G1CardSetConfiguration(HeapRegion::LogCardsPerRegion - default_log2_card_regions_per_region(), /* inline_ptr_bits_per_card */
+ G1CardSetConfiguration(G1HeapRegion::LogCardsPerRegion - default_log2_card_regions_per_region(), /* inline_ptr_bits_per_card */
G1RemSetArrayOfCardsEntries, /* max_cards_in_array */
(double)G1RemSetCoarsenHowlBitmapToHowlFullPercent / 100, /* cards_in_bitmap_threshold_percent */
G1RemSetHowlNumBuckets, /* num_buckets_in_howl */
(double)G1RemSetCoarsenHowlToFullPercent / 100, /* cards_in_howl_threshold_percent */
- (uint)HeapRegion::CardsPerRegion >> default_log2_card_regions_per_region(),
+ (uint)G1HeapRegion::CardsPerRegion >> default_log2_card_regions_per_region(),
/* max_cards_in_card_set */
default_log2_card_regions_per_region()) /* log2_card_regions_per_region */
{
- assert((_log2_card_regions_per_heap_region + _log2_cards_per_card_region) == (uint)HeapRegion::LogCardsPerRegion,
+ assert((_log2_card_regions_per_heap_region + _log2_cards_per_card_region) == (uint)G1HeapRegion::LogCardsPerRegion,
"inconsistent heap region virtualization setup");
}
@@ -395,7 +395,7 @@ G1CardSet::~G1CardSet() {
void G1CardSet::initialize(MemRegion reserved) {
const uint BitsInUint = sizeof(uint) * BitsPerByte;
- const uint CardBitsWithinCardRegion = MIN2((uint)HeapRegion::LogCardsPerRegion, G1CardSetContainer::LogCardsPerRegionLimit);
+ const uint CardBitsWithinCardRegion = MIN2((uint)G1HeapRegion::LogCardsPerRegion, G1CardSetContainer::LogCardsPerRegionLimit);
// Check if the number of cards within a region fits an uint.
if (CardBitsWithinCardRegion > BitsInUint) {
@@ -533,7 +533,7 @@ G1AddCardResult G1CardSet::add_to_howl(ContainerPtr parent_container,
ContainerPtr container;
uint bucket = _config->howl_bucket_index(card_in_region);
- ContainerPtr volatile* bucket_entry = howl->get_container_addr(bucket);
+ ContainerPtr volatile* bucket_entry = howl->container_addr(bucket);
while (true) {
if (Atomic::load(&howl->_num_entries) >= _config->cards_in_howl_threshold()) {
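
Side note: default_log2_card_regions_per_region() splits each heap region into 2^k "card regions" whenever the region holds more cards than a single container can index. A standalone sketch of the arithmetic, with assumed values (a region of 2^16 cards and a container limit of 2^15; the real constants depend on the configured region size):

#include <cstdio>

int main() {
  const unsigned log_cards_per_region = 16;  // assumed: 32 MB region / 512 B cards
  const unsigned container_limit      = 15;  // stand-in for LogCardsPerRegionLimit (assumed)
  unsigned log2_card_regions = 0;
  if (container_limit < log_cards_per_region) {
    log2_card_regions = log_cards_per_region - container_limit;
  }
  printf("card regions per heap region: %u\n", 1u << log2_card_regions);
  printf("cards per card region:        %u\n",
         1u << (log_cards_per_region - log2_card_regions));
  return 0;
}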
diff --git a/src/hotspot/share/gc/g1/g1CardSetContainers.hpp b/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
index 16ec6b59a6d0d..43e6c8a3bf7eb 100644
--- a/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetContainers.hpp
@@ -163,7 +163,8 @@ class G1CardSetArray : public G1CardSetContainer {
private:
EntryCountType _size;
EntryCountType volatile _num_entries;
- EntryDataType _data[2];
+ // VLA implementation.
+ EntryDataType _data[1];
static const EntryCountType LockBitMask = (EntryCountType)1 << (sizeof(EntryCountType) * BitsPerByte - 1);
static const EntryCountType EntryMask = LockBitMask - 1;
@@ -184,6 +185,14 @@ class G1CardSetArray : public G1CardSetContainer {
Atomic::release_store(_num_entries_addr, _local_num_entries);
}
};
+
+ EntryDataType const* base_addr() const;
+
+ EntryDataType const* entry_addr(EntryCountType index) const;
+
+ EntryDataType* entry_addr(EntryCountType index);
+
+ EntryDataType at(EntryCountType index) const;
public:
G1CardSetArray(uint const card_in_region, EntryCountType num_cards);
@@ -238,23 +247,27 @@ class G1CardSetHowl : public G1CardSetContainer {
using ContainerPtr = G1CardSet::ContainerPtr;
EntryCountType volatile _num_entries;
private:
- ContainerPtr _buckets[2];
- // Do not add class member variables beyond this point
+ // VLA implementation.
+ ContainerPtr _buckets[1];
+ // Do not add class member variables beyond this point.
// Iterates over the ContainerPtr at the given index in this Howl card set,
// applying a CardOrRangeVisitor on it.
template <class CardOrRangeVisitor>
void iterate_cardset(ContainerPtr const container, uint index, CardOrRangeVisitor& found, G1CardSetConfiguration* config);
+ ContainerPtr at(EntryCountType index) const;
+
+ ContainerPtr const* buckets() const;
+
public:
G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config);
- ContainerPtr* get_container_addr(EntryCountType index) {
- return &_buckets[index];
- }
+ ContainerPtr const* container_addr(EntryCountType index) const;
- bool contains(uint card_idx, G1CardSetConfiguration* config);
+ ContainerPtr* container_addr(EntryCountType index);
+ bool contains(uint card_idx, G1CardSetConfiguration* config);
// Iterates over all ContainerPtrs in this Howl card set, applying a CardOrRangeVisitor
// on it.
template <class CardOrRangeVisitor>
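
Side note: shrinking _data[2] and _buckets[2] to one-element arrays makes the trailing-storage layout (the classic "struct hack" the comments call a VLA) explicit: the object is over-allocated at creation and entries are addressed relative to header_size_in_bytes(), which is what the new base_addr()/entry_addr() accessors do. A self-contained sketch of the pattern with made-up names and types (not JDK code):

#include <cstddef>
#include <cstdlib>
#include <new>

struct Array {
  unsigned _size;
  unsigned _num_entries;
  unsigned short _data[1];  // declared length is a placeholder only

  static size_t header_size_in_bytes() { return offsetof(Array, _data); }

  unsigned short* entry_addr(unsigned i) {
    // Entries live in the over-allocated tail, past the header.
    char* base = reinterpret_cast<char*>(this) + header_size_in_bytes();
    return reinterpret_cast<unsigned short*>(base) + i;
  }

  static Array* create(unsigned n) {
    // Over-allocate: header plus n real entries.
    void* mem = ::malloc(header_size_in_bytes() + n * sizeof(unsigned short));
    return new (mem) Array{n, 0, {0}};
  }
};

int main() {
  Array* a = Array::create(8);
  *a->entry_addr(5) = 42;  // backed by the over-allocation, not by _data's bound
  ::free(a);
  return 0;
}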
diff --git a/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp b/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
index 35abc09230622..94ca13b0a3303 100644
--- a/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CardSetContainers.inline.hpp
@@ -145,7 +145,7 @@ inline G1CardSetArray::G1CardSetArray(uint card_in_region, EntryCountType num_cards) :
_num_entries(1) {
assert(_size > 0, "CardSetArray of size 0 not supported.");
assert(_size < LockBitMask, "Only support CardSetArray of size %u or smaller.", LockBitMask - 1);
- _data[0] = checked_cast<EntryDataType>(card_in_region);
+ *entry_addr(0) = checked_cast<EntryDataType>(card_in_region);
}
inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(EntryCountType volatile* num_entries_addr) :
@@ -167,13 +167,31 @@ inline G1CardSetArray::G1CardSetArrayLocker::G1CardSetArrayLocker(EntryCountType volatile* num_entries_addr) :
}
}
+inline G1CardSetArray::EntryDataType const* G1CardSetArray::base_addr() const {
+ const void* ptr = reinterpret_cast<const char*>(this) + header_size_in_bytes();
+ return reinterpret_cast<EntryDataType const*>(ptr);
+}
+
+inline G1CardSetArray::EntryDataType const* G1CardSetArray::entry_addr(EntryCountType index) const {
+ assert(index < _num_entries, "precondition");
+ return base_addr() + index;
+}
+
+inline G1CardSetArray::EntryDataType* G1CardSetArray::entry_addr(EntryCountType index) {
+ return const_cast<EntryDataType*>(const_cast<const G1CardSetArray*>(this)->entry_addr(index));
+}
+
+inline G1CardSetArray::EntryDataType G1CardSetArray::at(EntryCountType index) const {
+ return *entry_addr(index);
+}
+
inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
- assert(card_idx < (1u << (sizeof(_data[0]) * BitsPerByte)),
+ assert(card_idx < (1u << (sizeof(EntryDataType) * BitsPerByte)),
"Card index %u does not fit allowed card value range.", card_idx);
EntryCountType num_entries = Atomic::load_acquire(&_num_entries) & EntryMask;
EntryCountType idx = 0;
for (; idx < num_entries; idx++) {
- if (_data[idx] == card_idx) {
+ if (at(idx) == card_idx) {
return Found;
}
}
@@ -186,7 +204,7 @@ inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
num_entries = x.num_entries();
// Look if the cards added while waiting for the lock are the same as our card.
for (; idx < num_entries; idx++) {
- if (_data[idx] == card_idx) {
+ if (at(idx) == card_idx) {
return Found;
}
}
@@ -196,7 +214,7 @@ inline G1AddCardResult G1CardSetArray::add(uint card_idx) {
return Overflow;
}
- _data[num_entries] = checked_cast<EntryDataType>(card_idx);
+ *entry_addr(num_entries) = checked_cast<EntryDataType>(card_idx);
x.inc_num_entries();
@@ -207,7 +225,7 @@ inline bool G1CardSetArray::contains(uint card_idx) {
EntryCountType num_entries = Atomic::load_acquire(&_num_entries) & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
- if (_data[idx] == card_idx) {
+ if (at(idx) == card_idx) {
return true;
}
}
@@ -218,7 +236,7 @@ template <class CardVisitor>
void G1CardSetArray::iterate(CardVisitor& found) {
EntryCountType num_entries = Atomic::load_acquire(&_num_entries) & EntryMask;
for (EntryCountType idx = 0; idx < num_entries; idx++) {
- found(_data[idx]);
+ found(at(idx));
}
}
@@ -257,15 +275,33 @@ inline size_t G1CardSetBitMap::header_size_in_bytes() {
return offset_of(G1CardSetBitMap, _bits);
}
+inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::container_addr(EntryCountType index) const {
+ assert(index < _num_entries, "precondition");
+ return buckets() + index;
+}
+
+inline G1CardSetHowl::ContainerPtr* G1CardSetHowl::container_addr(EntryCountType index) {
+ return const_cast<ContainerPtr*>(const_cast<const G1CardSetHowl*>(this)->container_addr(index));
+}
+
+inline G1CardSetHowl::ContainerPtr G1CardSetHowl::at(EntryCountType index) const {
+ return *container_addr(index);
+}
+
+inline G1CardSetHowl::ContainerPtr const* G1CardSetHowl::buckets() const {
+ const void* ptr = reinterpret_cast<const char*>(this) + header_size_in_bytes();
+ return reinterpret_cast<ContainerPtr const*>(ptr);
+}
+
inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config) :
G1CardSetContainer(),
_num_entries((config->max_cards_in_array() + 1)) /* Card Transfer will not increment _num_entries */ {
EntryCountType num_buckets = config->num_buckets_in_howl();
EntryCountType bucket = config->howl_bucket_index(card_in_region);
for (uint i = 0; i < num_buckets; ++i) {
- _buckets[i] = G1CardSetInlinePtr();
+ *container_addr(i) = G1CardSetInlinePtr();
if (i == bucket) {
- G1CardSetInlinePtr value(&_buckets[i], _buckets[i]);
+ G1CardSetInlinePtr value(container_addr(i), at(i));
value.add(card_in_region, config->inline_ptr_bits_per_card(), config->max_cards_in_inline_ptr());
}
}
@@ -273,7 +309,7 @@ inline G1CardSetHowl::G1CardSetHowl(EntryCountType card_in_region, G1CardSetConfiguration* config) :
inline bool G1CardSetHowl::contains(uint card_idx, G1CardSetConfiguration* config) {
EntryCountType bucket = config->howl_bucket_index(card_idx);
- ContainerPtr* array_entry = get_container_addr(bucket);
+ ContainerPtr* array_entry = container_addr(bucket);
ContainerPtr container = Atomic::load_acquire(array_entry);
switch (G1CardSet::container_type(container)) {
@@ -299,14 +335,14 @@ inline bool G1CardSetHowl::contains(uint card_idx, G1CardSetConfiguration* config) {
template <class CardOrRangeVisitor>
inline void G1CardSetHowl::iterate(CardOrRangeVisitor& found, G1CardSetConfiguration* config) {
for (uint i = 0; i < config->num_buckets_in_howl(); ++i) {
- iterate_cardset(_buckets[i], i, found, config);
+ iterate_cardset(at(i), i, found, config);
}
}
template <class ContainerPtrVisitor>
inline void G1CardSetHowl::iterate(ContainerPtrVisitor& found, uint num_card_sets) {
for (uint i = 0; i < num_card_sets; ++i) {
- found(&_buckets[i]);
+ found(container_addr(i));
}
}
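
Side note: the non-const entry_addr()/container_addr() overloads added in this file use the usual const-delegation idiom: implement the lookup once in the const overload, then have the non-const overload forward to it and cast the result back, which is safe because *this is known non-const there. A minimal sketch with a toy type:

struct Table {
  int _slots[4];

  int const* slot_addr(int i) const {
    return &_slots[i];  // the logic lives only in the const overload
  }
  int* slot_addr(int i) {
    // Safe: *this is non-const in this overload.
    return const_cast<int*>(const_cast<const Table*>(this)->slot_addr(i));
  }
};

int main() {
  Table t{{1, 2, 3, 4}};
  *t.slot_addr(2) = 9;  // resolves to the non-const overload
  return *t.slot_addr(2) == 9 ? 0 : 1;
}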
diff --git a/src/hotspot/share/gc/g1/g1CardTable.cpp b/src/hotspot/share/gc/g1/g1CardTable.cpp
index 467ba6c56a906..56352865e378d 100644
--- a/src/hotspot/share/gc/g1/g1CardTable.cpp
+++ b/src/hotspot/share/gc/g1/g1CardTable.cpp
@@ -43,7 +43,7 @@ void G1CardTable::verify_g1_young_region(MemRegion mr) {
void G1CardTableChangedListener::on_commit(uint start_idx, size_t num_regions, bool zero_filled) {
// Default value for a clean card on the card table is -1. So we cannot take advantage of the zero_filled parameter.
- MemRegion mr(G1CollectedHeap::heap()->bottom_addr_for_region(start_idx), num_regions * HeapRegion::GrainWords);
+ MemRegion mr(G1CollectedHeap::heap()->bottom_addr_for_region(start_idx), num_regions * G1HeapRegion::GrainWords);
_card_table->clear_MemRegion(mr);
}
diff --git a/src/hotspot/share/gc/g1/g1CardTable.inline.hpp b/src/hotspot/share/gc/g1/g1CardTable.inline.hpp
index 0585b68e33644..295eea242f703 100644
--- a/src/hotspot/share/gc/g1/g1CardTable.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CardTable.inline.hpp
@@ -31,7 +31,7 @@
inline uint G1CardTable::region_idx_for(CardValue* p) {
size_t const card_idx = pointer_delta(p, _byte_map, sizeof(CardValue));
- return (uint)(card_idx >> HeapRegion::LogCardsPerRegion);
+ return (uint)(card_idx >> G1HeapRegion::LogCardsPerRegion);
}
inline bool G1CardTable::mark_clean_as_dirty(CardValue* card) {
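
Side note: region_idx_for() maps a card-table entry to its region with a single shift, since cards-per-region is a power of two. A tiny worked sketch with an assumed LogCardsPerRegion of 13:

#include <cstddef>
#include <cstdio>

int main() {
  const unsigned LogCardsPerRegion = 13;  // assumed: 4 MB region / 512 B cards
  size_t card_idx = (size_t(3) << LogCardsPerRegion) + 42;  // some card in region 3
  printf("region %u\n", (unsigned)(card_idx >> LogCardsPerRegion));  // prints 3
  return 0;
}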
diff --git a/src/hotspot/share/gc/g1/g1CodeRootSet.cpp b/src/hotspot/share/gc/g1/g1CodeRootSet.cpp
index 4b933d9a85ffb..e72e5e8d377ed 100644
--- a/src/hotspot/share/gc/g1/g1CodeRootSet.cpp
+++ b/src/hotspot/share/gc/g1/g1CodeRootSet.cpp
@@ -298,7 +298,7 @@ class CleanCallback : public StackObj {
NONCOPYABLE(CleanCallback); // can not copy, _blobs will point to old copy
class PointsIntoHRDetectionClosure : public OopClosure {
- HeapRegion* _hr;
+ G1HeapRegion* _hr;
template <typename T>
void do_oop_work(T* p) {
@@ -309,7 +309,7 @@ class CleanCallback : public StackObj {
public:
bool _points_into;
- PointsIntoHRDetectionClosure(HeapRegion* hr) : _hr(hr), _points_into(false) {}
+ PointsIntoHRDetectionClosure(G1HeapRegion* hr) : _hr(hr), _points_into(false) {}
void do_oop(narrowOop* o) { do_oop_work(o); }
@@ -320,7 +320,7 @@ class CleanCallback : public StackObj {
NMethodToOopClosure _nmethod_cl;
public:
- CleanCallback(HeapRegion* hr) : _detector(hr), _nmethod_cl(&_detector, !NMethodToOopClosure::FixRelocations) {}
+ CleanCallback(G1HeapRegion* hr) : _detector(hr), _nmethod_cl(&_detector, !NMethodToOopClosure::FixRelocations) {}
bool operator()(nmethod** value) {
_detector._points_into = false;
@@ -329,7 +329,7 @@ class CleanCallback : public StackObj {
}
};
-void G1CodeRootSet::clean(HeapRegion* owner) {
+void G1CodeRootSet::clean(G1HeapRegion* owner) {
assert(!_is_iterating, "should not mutate while iterating the table");
CleanCallback eval(owner);
diff --git a/src/hotspot/share/gc/g1/g1CodeRootSet.hpp b/src/hotspot/share/gc/g1/g1CodeRootSet.hpp
index a4c425109b22e..01de2ec48ff38 100644
--- a/src/hotspot/share/gc/g1/g1CodeRootSet.hpp
+++ b/src/hotspot/share/gc/g1/g1CodeRootSet.hpp
@@ -29,7 +29,7 @@
#include "utilities/globalDefinitions.hpp"
class G1CodeRootSetHashTable;
-class HeapRegion;
+class G1HeapRegion;
class nmethod;
// Implements storage for a set of code roots.
@@ -53,7 +53,7 @@ class G1CodeRootSet {
void nmethods_do(NMethodClosure* blk) const;
// Remove all nmethods which no longer contain pointers into our "owner" region.
- void clean(HeapRegion* owner);
+ void clean(G1HeapRegion* owner);
bool is_empty() { return length() == 0;}
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.cpp b/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
index 2b890530c4bdf..258f3f667c5cd 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.cpp
@@ -47,6 +47,7 @@
#include "gc/g1/g1GCPhaseTimes.hpp"
#include "gc/g1/g1GCPauseType.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
+#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1HeapRegionRemSet.inline.hpp"
#include "gc/g1/g1HeapRegionSet.inline.hpp"
#include "gc/g1/g1HeapSizingPolicy.hpp"
@@ -144,7 +145,7 @@ void G1CollectedHeap::run_batch_task(G1BatchedTask* cl) {
}
uint G1CollectedHeap::get_chunks_per_region() {
- uint log_region_size = HeapRegion::LogOfHRGrainBytes;
+ uint log_region_size = G1HeapRegion::LogOfHRGrainBytes;
// Limit the expected input values to current known possible values of the
// (log) region size. Adjust as necessary after testing if changing the permissible
// values for region size.
@@ -153,22 +154,22 @@ uint G1CollectedHeap::get_chunks_per_region() {
return 1u << (log_region_size / 2 - 4);
}
-HeapRegion* G1CollectedHeap::new_heap_region(uint hrs_index,
- MemRegion mr) {
- return new HeapRegion(hrs_index, bot(), mr, &_card_set_config);
+G1HeapRegion* G1CollectedHeap::new_heap_region(uint hrs_index,
+ MemRegion mr) {
+ return new G1HeapRegion(hrs_index, bot(), mr, &_card_set_config);
}
// Private methods.
-HeapRegion* G1CollectedHeap::new_region(size_t word_size,
- HeapRegionType type,
- bool do_expand,
- uint node_index) {
- assert(!is_humongous(word_size) || word_size <= HeapRegion::GrainWords,
+G1HeapRegion* G1CollectedHeap::new_region(size_t word_size,
+ HeapRegionType type,
+ bool do_expand,
+ uint node_index) {
+ assert(!is_humongous(word_size) || word_size <= G1HeapRegion::GrainWords,
"the only time we use this to allocate a humongous region is "
"when we are allocating a single humongous region");
- HeapRegion* res = _hrm.allocate_free_region(type, node_index);
+ G1HeapRegion* res = _hrm.allocate_free_region(type, node_index);
if (res == nullptr && do_expand) {
// Currently, only attempts to allocate GC alloc regions set
@@ -179,7 +180,7 @@ HeapRegion* G1CollectedHeap::new_region(size_t word_size,
log_debug(gc, ergo, heap)("Attempt heap expansion (region allocation request failed). Allocation request: " SIZE_FORMAT "B",
word_size * HeapWordSize);
- assert(word_size * HeapWordSize < HeapRegion::GrainBytes,
+ assert(word_size * HeapWordSize < G1HeapRegion::GrainBytes,
"This kind of expansion should never be more than one region. Size: " SIZE_FORMAT,
word_size * HeapWordSize);
if (expand_single_region(node_index)) {
@@ -193,14 +194,14 @@ HeapRegion* G1CollectedHeap::new_region(size_t word_size,
return res;
}
-void G1CollectedHeap::set_humongous_metadata(HeapRegion* first_hr,
+void G1CollectedHeap::set_humongous_metadata(G1HeapRegion* first_hr,
uint num_regions,
size_t word_size,
bool update_remsets) {
// Calculate the new top of the humongous object.
HeapWord* obj_top = first_hr->bottom() + word_size;
// The word size sum of all the regions used
- size_t word_size_sum = num_regions * HeapRegion::GrainWords;
+ size_t word_size_sum = num_regions * G1HeapRegion::GrainWords;
assert(word_size <= word_size_sum, "sanity");
// How many words memory we "waste" which cannot hold a filler object.
@@ -235,7 +236,7 @@ void G1CollectedHeap::set_humongous_metadata(HeapRegion* first_hr,
uint first = first_hr->hrm_index();
uint last = first + num_regions - 1;
- HeapRegion* hr = nullptr;
+ G1HeapRegion* hr = nullptr;
for (uint i = first + 1; i <= last; ++i) {
hr = region_at(i);
hr->hr_clear(false /* clear_space */);
@@ -276,12 +277,12 @@ void G1CollectedHeap::set_humongous_metadata(HeapRegion* first_hr,
}
HeapWord*
-G1CollectedHeap::humongous_obj_allocate_initialize_regions(HeapRegion* first_hr,
+G1CollectedHeap::humongous_obj_allocate_initialize_regions(G1HeapRegion* first_hr,
uint num_regions,
size_t word_size) {
assert(first_hr != nullptr, "pre-condition");
assert(is_humongous(word_size), "word_size should be humongous");
- assert(num_regions * HeapRegion::GrainWords >= word_size, "pre-condition");
+ assert(num_regions * G1HeapRegion::GrainWords >= word_size, "pre-condition");
// Index of last region in the series.
uint first = first_hr->hrm_index();
@@ -317,15 +318,15 @@ G1CollectedHeap::humongous_obj_allocate_initialize_regions(HeapRegion* first_hr,
// Next, update the metadata for the regions.
set_humongous_metadata(first_hr, num_regions, word_size, true);
- HeapRegion* last_hr = region_at(last);
+ G1HeapRegion* last_hr = region_at(last);
size_t used = byte_size(first_hr->bottom(), last_hr->top());
increase_used(used);
for (uint i = first; i <= last; ++i) {
- HeapRegion *hr = region_at(i);
+ G1HeapRegion *hr = region_at(i);
_humongous_set.add(hr);
- _hr_printer.alloc(hr);
+ G1HeapRegionPrinter::alloc(hr);
}
return new_obj;
@@ -333,7 +334,7 @@ G1CollectedHeap::humongous_obj_allocate_initialize_regions(HeapRegion* first_hr,
size_t G1CollectedHeap::humongous_obj_size_in_regions(size_t word_size) {
assert(is_humongous(word_size), "Object of size " SIZE_FORMAT " must be humongous here", word_size);
- return align_up(word_size, HeapRegion::GrainWords) / HeapRegion::GrainWords;
+ return align_up(word_size, G1HeapRegion::GrainWords) / G1HeapRegion::GrainWords;
}
// If could fit into free regions w/o expansion, try.
@@ -347,7 +348,7 @@ HeapWord* G1CollectedHeap::humongous_obj_allocate(size_t word_size) {
uint obj_regions = (uint) humongous_obj_size_in_regions(word_size);
// Policy: First try to allocate a humongous object in the free list.
- HeapRegion* humongous_start = _hrm.allocate_humongous(obj_regions);
+ G1HeapRegion* humongous_start = _hrm.allocate_humongous(obj_regions);
if (humongous_start == nullptr) {
// Policy: We could not find enough regions for the humongous object in the
// free list. Look through the heap to find a mix of free and uncommitted regions.
@@ -472,12 +473,12 @@ template <typename Func>
void G1CollectedHeap::iterate_regions_in_range(MemRegion range, const Func& func) {
// Mark each G1 region touched by the range as old, add it to
// the old set, and set top.
- HeapRegion* curr_region = _hrm.addr_to_region(range.start());
- HeapRegion* end_region = _hrm.addr_to_region(range.last());
+ G1HeapRegion* curr_region = _hrm.addr_to_region(range.start());
+ G1HeapRegion* end_region = _hrm.addr_to_region(range.last());
while (curr_region != nullptr) {
bool is_last = curr_region == end_region;
- HeapRegion* next_region = is_last ? nullptr : _hrm.next_region_in_heap(curr_region);
+ G1HeapRegion* next_region = is_last ? nullptr : _hrm.next_region_in_heap(curr_region);
func(curr_region, is_last);
@@ -503,7 +504,7 @@ HeapWord* G1CollectedHeap::alloc_archive_region(size_t word_size, HeapWord* preferred_addr) {
size_t commits = 0;
// Attempt to allocate towards the end of the heap.
- HeapWord* start_addr = reserved.end() - align_up(word_size, HeapRegion::GrainWords);
+ HeapWord* start_addr = reserved.end() - align_up(word_size, G1HeapRegion::GrainWords);
MemRegion range = MemRegion(start_addr, word_size);
HeapWord* last_address = range.last();
if (!_hrm.allocate_containing_regions(range, &commits, workers())) {
@@ -512,19 +513,19 @@ HeapWord* G1CollectedHeap::alloc_archive_region(size_t word_size, HeapWord* preferred_addr) {
increase_used(word_size * HeapWordSize);
if (commits != 0) {
log_debug(gc, ergo, heap)("Attempt heap expansion (allocate archive regions). Total size: " SIZE_FORMAT "B",
- HeapRegion::GrainWords * HeapWordSize * commits);
+ G1HeapRegion::GrainWords * HeapWordSize * commits);
}
// Mark each G1 region touched by the range as old, add it to
// the old set, and set top.
- auto set_region_to_old = [&] (HeapRegion* r, bool is_last) {
+ auto set_region_to_old = [&] (G1HeapRegion* r, bool is_last) {
assert(r->is_empty(), "Region already in use (%u)", r->hrm_index());
HeapWord* top = is_last ? last_address + 1 : r->end();
r->set_top(top);
r->set_old();
- _hr_printer.alloc(r);
+ G1HeapRegionPrinter::alloc(r);
_old_set.add(r);
};
@@ -536,7 +537,7 @@ void G1CollectedHeap::populate_archive_regions_bot(MemRegion range) {
assert(!is_init_completed(), "Expect to be called at JVM init time");
iterate_regions_in_range(range,
- [&] (HeapRegion* r, bool is_last) {
+ [&] (G1HeapRegion* r, bool is_last) {
r->update_bot();
});
}
@@ -558,7 +559,7 @@ void G1CollectedHeap::dealloc_archive_regions(MemRegion range) {
size_used += range.byte_size();
// Free, empty and uncommit regions with CDS archive content.
- auto dealloc_archive_region = [&] (HeapRegion* r, bool is_last) {
+ auto dealloc_archive_region = [&] (G1HeapRegion* r, bool is_last) {
guarantee(r->is_old(), "Expected old region at index %u", r->hrm_index());
_old_set.remove(r);
r->set_free();
@@ -571,7 +572,7 @@ void G1CollectedHeap::dealloc_archive_regions(MemRegion range) {
if (shrink_count != 0) {
log_debug(gc, ergo, heap)("Attempt heap shrinking (CDS archive regions). Total size: " SIZE_FORMAT "B",
- HeapRegion::GrainWords * HeapWordSize * shrink_count);
+ G1HeapRegion::GrainWords * HeapWordSize * shrink_count);
// Explicit uncommit.
uncommit_regions(shrink_count);
}
@@ -649,7 +650,7 @@ HeapWord* G1CollectedHeap::attempt_allocation_humongous(size_t word_size) {
result = humongous_obj_allocate(word_size);
if (result != nullptr) {
policy()->old_gen_alloc_tracker()->
- add_allocated_humongous_bytes_since_last_gc(size_in_regions * HeapRegion::GrainBytes);
+ add_allocated_humongous_bytes_since_last_gc(size_in_regions * G1HeapRegion::GrainBytes);
return result;
}
@@ -665,7 +666,7 @@ HeapWord* G1CollectedHeap::attempt_allocation_humongous(size_t word_size) {
if (result != nullptr) {
size_t size_in_regions = humongous_obj_size_in_regions(word_size);
policy()->old_gen_alloc_tracker()->
- record_collection_pause_humongous_allocation(size_in_regions * HeapRegion::GrainBytes);
+ record_collection_pause_humongous_allocation(size_in_regions * G1HeapRegion::GrainBytes);
}
return result;
}
@@ -710,17 +711,12 @@ HeapWord* G1CollectedHeap::attempt_allocation_at_safepoint(size_t word_size,
}
class PostCompactionPrinterClosure: public HeapRegionClosure {
-private:
- G1HRPrinter* _hr_printer;
public:
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
assert(!hr->is_young(), "not expecting to find young regions");
- _hr_printer->post_compaction(hr);
+ G1HeapRegionPrinter::post_compaction(hr);
return false;
}
-
- PostCompactionPrinterClosure(G1HRPrinter* hr_printer)
- : _hr_printer(hr_printer) { }
};
void G1CollectedHeap::print_heap_after_full_collection() {
@@ -728,8 +724,8 @@ void G1CollectedHeap::print_heap_after_full_collection() {
// We should do this after we potentially resize the heap so
// that all the COMMIT / UNCOMMIT events are generated before
// the compaction events.
- if (_hr_printer.is_active()) {
- PostCompactionPrinterClosure cl(hr_printer());
+ if (G1HeapRegionPrinter::is_active()) {
+ PostCompactionPrinterClosure cl;
heap_region_iterate(&cl);
}
}
@@ -995,8 +991,7 @@ HeapWord* G1CollectedHeap::expand_and_allocate(size_t word_size) {
bool G1CollectedHeap::expand(size_t expand_bytes, WorkerThreads* pretouch_workers, double* expand_time_ms) {
size_t aligned_expand_bytes = ReservedSpace::page_align_size_up(expand_bytes);
- aligned_expand_bytes = align_up(aligned_expand_bytes,
- HeapRegion::GrainBytes);
+ aligned_expand_bytes = align_up(aligned_expand_bytes, G1HeapRegion::GrainBytes);
log_debug(gc, ergo, heap)("Expand the heap. requested expansion amount: " SIZE_FORMAT "B expansion amount: " SIZE_FORMAT "B",
expand_bytes, aligned_expand_bytes);
@@ -1007,7 +1002,7 @@ bool G1CollectedHeap::expand(size_t expand_bytes, WorkerThreads* pretouch_workers, double* expand_time_ms) {
}
double expand_heap_start_time_sec = os::elapsedTime();
- uint regions_to_expand = (uint)(aligned_expand_bytes / HeapRegion::GrainBytes);
+ uint regions_to_expand = (uint)(aligned_expand_bytes / G1HeapRegion::GrainBytes);
assert(regions_to_expand > 0, "Must expand by at least one region");
uint expanded_by = _hrm.expand_by(regions_to_expand, pretouch_workers);
@@ -1017,7 +1012,7 @@ bool G1CollectedHeap::expand(size_t expand_bytes, WorkerThreads* pretouch_workers, double* expand_time_ms) {
assert(expanded_by > 0, "must have failed during commit.");
- size_t actual_expand_bytes = expanded_by * HeapRegion::GrainBytes;
+ size_t actual_expand_bytes = expanded_by * G1HeapRegion::GrainBytes;
assert(actual_expand_bytes <= aligned_expand_bytes, "post-condition");
policy()->record_new_heap_size(num_regions());
@@ -1040,12 +1035,11 @@ bool G1CollectedHeap::expand_single_region(uint node_index) {
void G1CollectedHeap::shrink_helper(size_t shrink_bytes) {
size_t aligned_shrink_bytes =
ReservedSpace::page_align_size_down(shrink_bytes);
- aligned_shrink_bytes = align_down(aligned_shrink_bytes,
- HeapRegion::GrainBytes);
- uint num_regions_to_remove = (uint)(shrink_bytes / HeapRegion::GrainBytes);
+ aligned_shrink_bytes = align_down(aligned_shrink_bytes, G1HeapRegion::GrainBytes);
+ uint num_regions_to_remove = (uint)(shrink_bytes / G1HeapRegion::GrainBytes);
uint num_regions_removed = _hrm.shrink_by(num_regions_to_remove);
- size_t shrunk_bytes = num_regions_removed * HeapRegion::GrainBytes;
+ size_t shrunk_bytes = num_regions_removed * G1HeapRegion::GrainBytes;
log_debug(gc, ergo, heap)("Shrink the heap. requested shrinking amount: " SIZE_FORMAT "B aligned shrinking amount: " SIZE_FORMAT "B actual amount shrunk: " SIZE_FORMAT "B",
shrink_bytes, aligned_shrink_bytes, shrunk_bytes);
@@ -1100,7 +1094,7 @@ class OldRegionSetChecker : public HeapRegionSetChecker {
guarantee(Heap_lock->owned_by_self(), "master old set MT safety protocol outside a safepoint");
}
}
- bool is_correct_type(HeapRegion* hr) { return hr->is_old(); }
+ bool is_correct_type(G1HeapRegion* hr) { return hr->is_old(); }
const char* get_description() { return "Old Regions"; }
};
@@ -1124,7 +1118,7 @@ class HumongousRegionSetChecker : public HeapRegionSetChecker {
"master humongous set MT safety protocol outside a safepoint");
}
}
- bool is_correct_type(HeapRegion* hr) { return hr->is_humongous(); }
+ bool is_correct_type(G1HeapRegion* hr) { return hr->is_humongous(); }
const char* get_description() { return "Humongous Regions"; }
};
@@ -1152,7 +1146,6 @@ G1CollectedHeap::G1CollectedHeap() :
_monitoring_support(nullptr),
_num_humongous_objects(0),
_num_humongous_reclaim_candidates(0),
- _hr_printer(),
_collector_state(),
_old_marking_cycles_started(0),
_old_marking_cycles_completed(0),
@@ -1184,13 +1177,13 @@ G1CollectedHeap::G1CollectedHeap() :
_heap_sizing_policy = G1HeapSizingPolicy::create(this, _policy->analytics());
- _humongous_object_threshold_in_words = humongous_threshold_for(HeapRegion::GrainWords);
+ _humongous_object_threshold_in_words = humongous_threshold_for(G1HeapRegion::GrainWords);
// Since filler arrays are never referenced, we can make them region sized.
// This simplifies filling up the region in case we have some potentially
// unreferenced (by Java code, but still in use by native code) pinned objects
// in there.
- _filler_array_max_size = HeapRegion::GrainWords;
+ _filler_array_max_size = G1HeapRegion::GrainWords;
// Override the default _stack_chunk_max_size so that no humongous stack chunks are created
_stack_chunk_max_size = _humongous_object_threshold_in_words;
@@ -1213,13 +1206,13 @@ G1RegionToSpaceMapper* G1CollectedHeap::create_aux_memory_mapper(const char* description,
size_t translation_factor) {
size_t preferred_page_size = os::page_size_for_region_unaligned(size, 1);
// Allocate a new reserved space, preferring to use large pages.
- ReservedSpace rs(size, preferred_page_size, mtGC);
+ ReservedSpace rs(size, preferred_page_size);
size_t page_size = rs.page_size();
G1RegionToSpaceMapper* result =
G1RegionToSpaceMapper::create_mapper(rs,
size,
page_size,
- HeapRegion::GrainBytes,
+ G1HeapRegion::GrainBytes,
translation_factor,
mtGC);
@@ -1265,8 +1258,8 @@ jint G1CollectedHeap::initialize() {
size_t reserved_byte_size = G1Arguments::heap_reserved_size_bytes();
// Ensure that the sizes are properly aligned.
- Universe::check_alignment(init_byte_size, HeapRegion::GrainBytes, "g1 heap");
- Universe::check_alignment(reserved_byte_size, HeapRegion::GrainBytes, "g1 heap");
+ Universe::check_alignment(init_byte_size, G1HeapRegion::GrainBytes, "g1 heap");
+ Universe::check_alignment(reserved_byte_size, G1HeapRegion::GrainBytes, "g1 heap");
Universe::check_alignment(reserved_byte_size, HeapAlignment, "g1 heap");
// Reserve the maximum.
@@ -1275,7 +1268,7 @@ jint G1CollectedHeap::initialize() {
// is calculated by subtracting the requested size from the
// 32Gb boundary and using the result as the base address for
// heap reservation. If the requested size is not aligned to
- // HeapRegion::GrainBytes (i.e. the alignment that is passed
+ // G1HeapRegion::GrainBytes (i.e. the alignment that is passed
// into the ReservedHeapSpace constructor) then the actual
// base of the reserved heap may end up differing from the
// address that was requested (i.e. the preferred heap base).
@@ -1307,7 +1300,7 @@ jint G1CollectedHeap::initialize() {
G1RegionToSpaceMapper::create_mapper(heap_rs,
heap_rs.size(),
page_size,
- HeapRegion::GrainBytes,
+ G1HeapRegion::GrainBytes,
1,
mtJavaHeap);
if(heap_storage == nullptr) {
@@ -1355,8 +1348,8 @@ jint G1CollectedHeap::initialize() {
_rem_set->initialize(max_reserved_regions());
size_t max_cards_per_region = ((size_t)1 << (sizeof(CardIdx_t)*BitsPerByte-1)) - 1;
- guarantee(HeapRegion::CardsPerRegion > 0, "make sure it's initialized");
- guarantee(HeapRegion::CardsPerRegion < max_cards_per_region,
+ guarantee(G1HeapRegion::CardsPerRegion > 0, "make sure it's initialized");
+ guarantee(G1HeapRegion::CardsPerRegion < max_cards_per_region,
"too many cards per region");
HeapRegionRemSet::initialize(_reserved);
@@ -1366,7 +1359,7 @@ jint G1CollectedHeap::initialize() {
_bot = new G1BlockOffsetTable(reserved(), bot_storage);
{
- size_t granularity = HeapRegion::GrainBytes;
+ size_t granularity = G1HeapRegion::GrainBytes;
_region_attr.initialize(reserved(), granularity);
}
@@ -1377,7 +1370,7 @@ jint G1CollectedHeap::initialize() {
}
_workers->initialize_workers();
- _numa->set_region_info(HeapRegion::GrainBytes, page_size);
+ _numa->set_region_info(G1HeapRegion::GrainBytes, page_size);
// Create the G1ConcurrentMark data structure and thread.
// (Must do this late, so that "max_[reserved_]regions" is defined.)
@@ -1410,9 +1403,9 @@ jint G1CollectedHeap::initialize() {
_free_arena_memory_task = new G1MonotonicArenaFreeMemoryTask("Card Set Free Memory Task");
_service_thread->register_task(_free_arena_memory_task);
- // Here we allocate the dummy HeapRegion that is required by the
+ // Here we allocate the dummy G1HeapRegion that is required by the
// G1AllocRegion class.
- HeapRegion* dummy_region = _hrm.get_dummy_region();
+ G1HeapRegion* dummy_region = _hrm.get_dummy_region();
// We'll re-use the same region whether the alloc region will
// require BOT updates or not and, if it doesn't, then a non-young
@@ -1526,7 +1519,7 @@ void G1CollectedHeap::ref_processing_init() {
}
size_t G1CollectedHeap::capacity() const {
- return _hrm.length() * HeapRegion::GrainBytes;
+ return _hrm.length() * G1HeapRegion::GrainBytes;
}
size_t G1CollectedHeap::unused_committed_regions_in_bytes() const {
@@ -1547,7 +1540,7 @@ class SumUsedClosure: public HeapRegionClosure {
size_t _used;
public:
SumUsedClosure() : _used(0) {}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
_used += r->used();
return false;
}
@@ -1892,13 +1885,13 @@ bool G1CollectedHeap::is_in(const void* p) const {
// Iteration functions.
-// Iterates an ObjectClosure over all objects within a HeapRegion.
+// Iterates an ObjectClosure over all objects within a G1HeapRegion.
class IterateObjectClosureRegionClosure: public HeapRegionClosure {
ObjectClosure* _cl;
public:
IterateObjectClosureRegionClosure(ObjectClosure* cl) : _cl(cl) {}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
if (!r->is_continues_humongous()) {
r->object_iterate(_cl);
}
@@ -1991,7 +1984,7 @@ void G1CollectedHeap::par_iterate_regions_array(HeapRegionClosure* cl,
do {
uint region_idx = regions[cur_pos];
if (hr_claimer == nullptr || hr_claimer->claim_region(region_idx)) {
- HeapRegion* r = region_at(region_idx);
+ G1HeapRegion* r = region_at(region_idx);
bool result = cl->do_heap_region(r);
guarantee(!result, "Must not cancel iteration");
}
@@ -2004,9 +1997,9 @@ void G1CollectedHeap::par_iterate_regions_array(HeapRegionClosure* cl,
}
HeapWord* G1CollectedHeap::block_start(const void* addr) const {
- HeapRegion* hr = heap_region_containing(addr);
+ G1HeapRegion* hr = heap_region_containing(addr);
// The CollectedHeap API requires us to not fail for any given address within
- // the heap. HeapRegion::block_start() has been optimized to not accept addresses
+ // the heap. G1HeapRegion::block_start() has been optimized to not accept addresses
// outside of the allocated area.
if (addr >= hr->top()) {
return nullptr;
@@ -2015,16 +2008,16 @@ HeapWord* G1CollectedHeap::block_start(const void* addr) const {
}
bool G1CollectedHeap::block_is_obj(const HeapWord* addr) const {
- HeapRegion* hr = heap_region_containing(addr);
+ G1HeapRegion* hr = heap_region_containing(addr);
return hr->block_is_obj(addr, hr->parsable_bottom_acquire());
}
size_t G1CollectedHeap::tlab_capacity(Thread* ignored) const {
- return (_policy->young_list_target_length() - _survivor.length()) * HeapRegion::GrainBytes;
+ return (_policy->young_list_target_length() - _survivor.length()) * G1HeapRegion::GrainBytes;
}
size_t G1CollectedHeap::tlab_used(Thread* ignored) const {
- return _eden.length() * HeapRegion::GrainBytes;
+ return _eden.length() * G1HeapRegion::GrainBytes;
}
// For G1 TLABs should not contain humongous objects, so the maximum TLAB size
@@ -2038,7 +2031,7 @@ size_t G1CollectedHeap::unsafe_max_tlab_alloc(Thread* ignored) const {
}
size_t G1CollectedHeap::max_capacity() const {
- return max_regions() * HeapRegion::GrainBytes;
+ return max_regions() * G1HeapRegion::GrainBytes;
}
void G1CollectedHeap::prepare_for_verify() {
@@ -2057,14 +2050,14 @@ class PrintRegionClosure: public HeapRegionClosure {
outputStream* _st;
public:
PrintRegionClosure(outputStream* st) : _st(st) {}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
r->print_on(_st);
return false;
}
};
bool G1CollectedHeap::is_obj_dead_cond(const oop obj,
- const HeapRegion* hr,
+ const G1HeapRegion* hr,
const VerifyOption vo) const {
switch (vo) {
case VerifyOption::G1UseConcMarking: return is_obj_dead(obj, hr);
@@ -2101,13 +2094,13 @@ void G1CollectedHeap::print_on(outputStream* st) const {
p2i(_hrm.reserved().start()),
p2i(_hrm.reserved().end()));
st->cr();
- st->print(" region size " SIZE_FORMAT "K, ", HeapRegion::GrainBytes / K);
+ st->print(" region size " SIZE_FORMAT "K, ", G1HeapRegion::GrainBytes / K);
uint young_regions = young_regions_count();
st->print("%u young (" SIZE_FORMAT "K), ", young_regions,
- (size_t) young_regions * HeapRegion::GrainBytes / K);
+ (size_t) young_regions * G1HeapRegion::GrainBytes / K);
uint survivor_regions = survivor_regions_count();
st->print("%u survivors (" SIZE_FORMAT "K)", survivor_regions,
- (size_t) survivor_regions * HeapRegion::GrainBytes / K);
+ (size_t) survivor_regions * G1HeapRegion::GrainBytes / K);
st->cr();
if (_numa->is_enabled()) {
uint num_nodes = _numa->num_active_nodes();
@@ -2174,7 +2167,7 @@ G1HeapSummary G1CollectedHeap::create_g1_heap_summary() {
size_t heap_used = Heap_lock->owned_by_self() ? used() : used_unlocked();
size_t eden_capacity_bytes =
- (policy()->young_list_target_length() * HeapRegion::GrainBytes) - survivor_used_bytes;
+ (policy()->young_list_target_length() * G1HeapRegion::GrainBytes) - survivor_used_bytes;
VirtualSpaceSummary heap_summary = create_heap_space_summary();
return G1HeapSummary(heap_summary, heap_used, eden_used_bytes, eden_capacity_bytes,
@@ -2285,7 +2278,7 @@ void G1CollectedHeap::start_concurrent_cycle(bool concurrent_operation_is_full_mark) {
CGC_lock->notify();
}
-bool G1CollectedHeap::is_potential_eager_reclaim_candidate(HeapRegion* r) const {
+bool G1CollectedHeap::is_potential_eager_reclaim_candidate(G1HeapRegion* r) const {
// We don't nominate objects with many remembered set entries, on
// the assumption that such objects are likely still live.
HeapRegionRemSet* rem_set = r->rem_set();
@@ -2297,7 +2290,7 @@ bool G1CollectedHeap::is_potential_eager_reclaim_candidate(HeapRegion* r) const
void G1CollectedHeap::verify_region_attr_remset_is_tracked() {
class VerifyRegionAttrRemSet : public HeapRegionClosure {
public:
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
G1CollectedHeap* g1h = G1CollectedHeap::heap();
bool const remset_is_tracked = g1h->region_attr(r->bottom()).remset_is_tracked();
assert(r->rem_set()->is_tracked() == remset_is_tracked,
@@ -2404,7 +2397,7 @@ void G1CollectedHeap::expand_heap_after_young_collection(){
bool G1CollectedHeap::do_collection_pause_at_safepoint() {
assert_at_safepoint_on_vm_thread();
- guarantee(!is_gc_active(), "collection is not reentrant");
+ guarantee(!is_stw_gc_active(), "collection is not reentrant");
do_collection_pause_at_safepoint_helper();
return true;
@@ -2474,7 +2467,7 @@ void G1CollectedHeap::flush_region_pin_cache() {
void G1CollectedHeap::do_collection_pause_at_safepoint_helper() {
ResourceMark rm;
- IsGCActiveMark active_gc_mark;
+ IsSTWGCActiveMark active_gc_mark;
GCIdMark gc_id_mark;
SvcGCMarker sgcm(SvcGCMarker::MINOR);
@@ -2550,7 +2543,7 @@ class G1BulkUnregisterNMethodTask : public WorkerTask {
class UnregisterNMethodsHeapRegionClosure : public HeapRegionClosure {
public:
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
hr->rem_set()->bulk_remove_code_roots();
return false;
}
@@ -2617,11 +2610,11 @@ void G1CollectedHeap::record_obj_copy_mem_stats() {
create_g1_evac_summary(&_old_evac_stats));
}
-void G1CollectedHeap::clear_bitmap_for_region(HeapRegion* hr) {
+void G1CollectedHeap::clear_bitmap_for_region(G1HeapRegion* hr) {
concurrent_mark()->clear_bitmap_for_region(hr);
}
-void G1CollectedHeap::free_region(HeapRegion* hr, FreeRegionList* free_list) {
+void G1CollectedHeap::free_region(G1HeapRegion* hr, FreeRegionList* free_list) {
assert(!hr->is_free(), "the region should not be free");
assert(!hr->is_empty(), "the region should not be empty");
assert(_hrm.is_available(hr->hrm_index()), "region should be committed");
@@ -2637,12 +2630,12 @@ void G1CollectedHeap::free_region(HeapRegion* hr, FreeRegionList* free_list) {
}
}
-void G1CollectedHeap::retain_region(HeapRegion* hr) {
+void G1CollectedHeap::retain_region(G1HeapRegion* hr) {
MutexLocker x(G1RareEvent_lock, Mutex::_no_safepoint_check_flag);
collection_set()->candidates()->add_retained_region_unsorted(hr);
}
-void G1CollectedHeap::free_humongous_region(HeapRegion* hr,
+void G1CollectedHeap::free_humongous_region(G1HeapRegion* hr,
FreeRegionList* free_list) {
assert(hr->is_humongous(), "this is only for humongous regions");
hr->clear_humongous();
@@ -2687,7 +2680,7 @@ void G1CollectedHeap::rebuild_free_region_list() {
class G1AbandonCollectionSetClosure : public HeapRegionClosure {
public:
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
assert(r->in_collection_set(), "Region %u must have been in collection set", r->hrm_index());
G1CollectedHeap::heap()->clear_region_attr(r);
r->clear_young_index_in_cset();
@@ -2703,11 +2696,11 @@ void G1CollectedHeap::abandon_collection_set(G1CollectionSet* collection_set) {
collection_set->stop_incremental_building();
}
-bool G1CollectedHeap::is_old_gc_alloc_region(HeapRegion* hr) {
+bool G1CollectedHeap::is_old_gc_alloc_region(G1HeapRegion* hr) {
return _allocator->is_retained_old_region(hr);
}
-void G1CollectedHeap::set_region_short_lived_locked(HeapRegion* hr) {
+void G1CollectedHeap::set_region_short_lived_locked(G1HeapRegion* hr) {
_eden.add(hr);
_policy->set_region_eden(hr);
}
@@ -2719,7 +2712,7 @@ class NoYoungRegionsClosure: public HeapRegionClosure {
bool _success;
public:
NoYoungRegionsClosure() : _success(true) { }
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
if (r->is_young()) {
log_error(gc, verify)("Region [" PTR_FORMAT ", " PTR_FORMAT ") tagged as young",
p2i(r->bottom()), p2i(r->end()));
@@ -2742,8 +2735,8 @@ bool G1CollectedHeap::check_young_list_empty() {
#endif // ASSERT
-// Remove the given HeapRegion from the appropriate region set.
-void G1CollectedHeap::prepare_region_for_full_compaction(HeapRegion* hr) {
+// Remove the given G1HeapRegion from the appropriate region set.
+void G1CollectedHeap::prepare_region_for_full_compaction(G1HeapRegion* hr) {
if (hr->is_humongous()) {
_humongous_set.remove(hr);
} else if (hr->is_old()) {
@@ -2800,7 +2793,7 @@ class RebuildRegionSetsClosure : public HeapRegionClosure {
}
}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
if (r->is_empty()) {
assert(r->rem_set()->is_empty(), "Empty regions should have empty remembered sets.");
// Add free regions to the free list
@@ -2850,19 +2843,18 @@ void G1CollectedHeap::rebuild_region_sets(bool free_list_only) {
// Methods for the mutator alloc region
-HeapRegion* G1CollectedHeap::new_mutator_alloc_region(size_t word_size,
- bool force,
+G1HeapRegion* G1CollectedHeap::new_mutator_alloc_region(size_t word_size,
uint node_index) {
assert_heap_locked_or_at_safepoint(true /* should_be_vm_thread */);
bool should_allocate = policy()->should_allocate_mutator_region();
- if (force || should_allocate) {
- HeapRegion* new_alloc_region = new_region(word_size,
- HeapRegionType::Eden,
- false /* do_expand */,
- node_index);
+ if (should_allocate) {
+ G1HeapRegion* new_alloc_region = new_region(word_size,
+ HeapRegionType::Eden,
+ false /* do_expand */,
+ node_index);
if (new_alloc_region != nullptr) {
set_region_short_lived_locked(new_alloc_region);
- _hr_printer.alloc(new_alloc_region, !should_allocate);
+ G1HeapRegionPrinter::alloc(new_alloc_region);
_policy->remset_tracker()->update_at_allocate(new_alloc_region);
return new_alloc_region;
}
@@ -2870,7 +2862,7 @@ HeapRegion* G1CollectedHeap::new_mutator_alloc_region(size_t word_size,
return nullptr;
}
-void G1CollectedHeap::retire_mutator_alloc_region(HeapRegion* alloc_region,
+void G1CollectedHeap::retire_mutator_alloc_region(G1HeapRegion* alloc_region,
size_t allocated_bytes) {
assert_heap_locked_or_at_safepoint(true /* should_be_vm_thread */);
assert(alloc_region->is_eden(), "all mutator alloc regions should be eden");
@@ -2878,7 +2870,7 @@ void G1CollectedHeap::retire_mutator_alloc_region(HeapRegion* alloc_region,
collection_set()->add_eden_region(alloc_region);
increase_used(allocated_bytes);
_eden.add_used_bytes(allocated_bytes);
- _hr_printer.retire(alloc_region);
+ G1HeapRegionPrinter::retire(alloc_region);
// We update the eden sizes here, when the region is retired,
// instead of when it's allocated, since this is the point that its
@@ -2896,7 +2888,7 @@ bool G1CollectedHeap::has_more_regions(G1HeapRegionAttr dest) {
}
}
-HeapRegion* G1CollectedHeap::new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index) {
+G1HeapRegion* G1CollectedHeap::new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index) {
assert(FreeList_lock->owned_by_self(), "pre-condition");
if (!has_more_regions(dest)) {
@@ -2910,10 +2902,10 @@ HeapRegion* G1CollectedHeap::new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index) {
type = HeapRegionType::Old;
}
- HeapRegion* new_alloc_region = new_region(word_size,
- type,
- true /* do_expand */,
- node_index);
+ G1HeapRegion* new_alloc_region = new_region(word_size,
+ type,
+ true /* do_expand */,
+ node_index);
if (new_alloc_region != nullptr) {
if (type.is_survivor()) {
@@ -2925,13 +2917,13 @@ HeapRegion* G1CollectedHeap::new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index) {
}
_policy->remset_tracker()->update_at_allocate(new_alloc_region);
register_region_with_region_attr(new_alloc_region);
- _hr_printer.alloc(new_alloc_region);
+ G1HeapRegionPrinter::alloc(new_alloc_region);
return new_alloc_region;
}
return nullptr;
}
-void G1CollectedHeap::retire_gc_alloc_region(HeapRegion* alloc_region,
+void G1CollectedHeap::retire_gc_alloc_region(G1HeapRegion* alloc_region,
size_t allocated_bytes,
G1HeapRegionAttr dest) {
_bytes_used_during_gc += allocated_bytes;
@@ -2946,17 +2938,17 @@ void G1CollectedHeap::retire_gc_alloc_region(HeapRegion* alloc_region,
if (during_im && allocated_bytes > 0) {
_cm->add_root_region(alloc_region);
}
- _hr_printer.retire(alloc_region);
+ G1HeapRegionPrinter::retire(alloc_region);
}
-HeapRegion* G1CollectedHeap::alloc_highest_free_region() {
+G1HeapRegion* G1CollectedHeap::alloc_highest_free_region() {
bool expanded = false;
uint index = _hrm.find_highest_free(&expanded);
if (index != G1_NO_HRM_INDEX) {
if (expanded) {
log_debug(gc, ergo, heap)("Attempt heap expansion (requested address range outside heap bounds). region size: " SIZE_FORMAT "B",
- HeapRegion::GrainWords * HeapWordSize);
+ G1HeapRegion::GrainWords * HeapWordSize);
}
return _hrm.allocate_free_regions_starting_at(index, 1);
}
@@ -2982,7 +2974,7 @@ class RegisterNMethodOopClosure: public OopClosure {
oop heap_oop = RawAccess<>::oop_load(p);
if (!CompressedOops::is_null(heap_oop)) {
oop obj = CompressedOops::decode_not_null(heap_oop);
- HeapRegion* hr = _g1h->heap_region_containing(obj);
+ G1HeapRegion* hr = _g1h->heap_region_containing(obj);
assert(!hr->is_continues_humongous(),
"trying to add code root " PTR_FORMAT " in continuation of humongous region " HR_FORMAT
" starting at " HR_FORMAT,
@@ -3051,7 +3043,7 @@ GrowableArray<MemoryPool*> G1CollectedHeap::memory_pools() {
}
void G1CollectedHeap::fill_with_dummy_object(HeapWord* start, HeapWord* end, bool zap) {
- HeapRegion* region = heap_region_containing(start);
+ G1HeapRegion* region = heap_region_containing(start);
region->fill_with_dummy_object(start, pointer_delta(end, start), zap);
}
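
Side note: humongous_obj_size_in_regions() is a ceiling division by the region size in words, which is what align_up(word_size, GrainWords) / GrainWords computes. A worked sketch with an assumed 4 MB region and 8-byte words (values illustrative, not fixed by the patch):

#include <cstddef>
#include <cstdio>

int main() {
  const size_t GrainWords = (4u << 20) / 8;  // assumed 4 MB region, 8-byte words
  size_t word_size = GrainWords + 1;         // one word over a single region
  // align_up(word_size, GrainWords) / GrainWords == ceiling division:
  size_t regions = (word_size + GrainWords - 1) / GrainWords;
  printf("%zu region(s)\n", regions);        // prints 2
  return 0;
}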
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.hpp b/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
index 3ee77786621c3..1cf89879fd735 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.hpp
@@ -40,7 +40,6 @@
#include "gc/g1/g1HeapRegionSet.hpp"
#include "gc/g1/g1HeapTransition.hpp"
#include "gc/g1/g1HeapVerifier.hpp"
-#include "gc/g1/g1HRPrinter.hpp"
#include "gc/g1/g1MonitoringSupport.hpp"
#include "gc/g1/g1MonotonicArenaFreeMemoryTask.hpp"
#include "gc/g1/g1MonotonicArenaFreePool.hpp"
@@ -80,7 +79,7 @@ class G1RemSet;
class G1ServiceTask;
class G1ServiceThread;
class GCMemoryManager;
-class HeapRegion;
+class G1HeapRegion;
class MemoryPool;
class nmethod;
class ReferenceProcessor;
@@ -197,7 +196,7 @@ class G1CollectedHeap : public CollectedHeap {
// Start a new incremental collection set for the next pause.
void start_new_collection_set();
- void prepare_region_for_full_compaction(HeapRegion* hr);
+ void prepare_region_for_full_compaction(G1HeapRegion* hr);
private:
// Rebuilds the region sets / lists so that they are repopulated to
@@ -265,8 +264,6 @@ class G1CollectedHeap : public CollectedHeap {
void update_parallel_gc_threads_cpu_time();
private:
- G1HRPrinter _hr_printer;
-
// Return true if an explicit GC should start a concurrent cycle instead
// of doing a STW full GC. A concurrent cycle should be started if:
// (a) cause == _g1_humongous_allocation,
@@ -385,20 +382,20 @@ class G1CollectedHeap : public CollectedHeap {
G1CollectionSet _collection_set;
- // Try to allocate a single non-humongous HeapRegion sufficient for
+ // Try to allocate a single non-humongous G1HeapRegion sufficient for
// an allocation of the given word_size. If do_expand is true,
// attempt to expand the heap if necessary to satisfy the allocation
// request. 'type' takes the type of region to be allocated. (Use constants
// Old, Eden, Humongous, Survivor defined in HeapRegionType.)
- HeapRegion* new_region(size_t word_size,
- HeapRegionType type,
- bool do_expand,
- uint node_index = G1NUMA::AnyNodeIndex);
+ G1HeapRegion* new_region(size_t word_size,
+ HeapRegionType type,
+ bool do_expand,
+ uint node_index = G1NUMA::AnyNodeIndex);
// Initialize a contiguous set of free regions of length num_regions
// and starting at index first so that they appear as a single
// humongous region.
- HeapWord* humongous_obj_allocate_initialize_regions(HeapRegion* first_hr,
+ HeapWord* humongous_obj_allocate_initialize_regions(G1HeapRegion* first_hr,
uint num_regions,
size_t word_size);
@@ -468,14 +465,14 @@ class G1CollectedHeap : public CollectedHeap {
// These methods are the "callbacks" from the G1AllocRegion class.
// For mutator alloc regions.
- HeapRegion* new_mutator_alloc_region(size_t word_size, bool force, uint node_index);
- void retire_mutator_alloc_region(HeapRegion* alloc_region,
+ G1HeapRegion* new_mutator_alloc_region(size_t word_size, uint node_index);
+ void retire_mutator_alloc_region(G1HeapRegion* alloc_region,
size_t allocated_bytes);
// For GC alloc regions.
bool has_more_regions(G1HeapRegionAttr dest);
- HeapRegion* new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index);
- void retire_gc_alloc_region(HeapRegion* alloc_region,
+ G1HeapRegion* new_gc_alloc_region(size_t word_size, G1HeapRegionAttr dest, uint node_index);
+ void retire_gc_alloc_region(G1HeapRegion* alloc_region,
size_t allocated_bytes, G1HeapRegionAttr dest);
// - if clear_all_soft_refs is true, all soft references should be
@@ -575,7 +572,7 @@ class G1CollectedHeap : public CollectedHeap {
// Expand the garbage-first heap by at least the given size (in bytes!).
// Returns true if the heap was expanded by the requested amount;
// false otherwise.
- // (Rounds up to a HeapRegion boundary.)
+ // (Rounds up to a G1HeapRegion boundary.)
bool expand(size_t expand_bytes, WorkerThreads* pretouch_workers = nullptr, double* expand_time_ms = nullptr);
bool expand_single_region(uint node_index);
@@ -596,7 +593,7 @@ class G1CollectedHeap : public CollectedHeap {
void gc_epilogue(bool full);
// Does the given region fulfill remembered set based eager reclaim candidate requirements?
- bool is_potential_eager_reclaim_candidate(HeapRegion* r) const;
+ bool is_potential_eager_reclaim_candidate(G1HeapRegion* r) const;
inline bool is_humongous_reclaim_candidate(uint region);
@@ -607,22 +604,22 @@ class G1CollectedHeap : public CollectedHeap {
// Register the given region to be part of the collection set.
inline void register_humongous_candidate_region_with_region_attr(uint index);
- void set_humongous_metadata(HeapRegion* first_hr,
+ void set_humongous_metadata(G1HeapRegion* first_hr,
uint num_regions,
size_t word_size,
bool update_remsets);
// We register a region with the fast "in collection set" test. We
// simply set to true the array slot corresponding to this region.
- void register_young_region_with_region_attr(HeapRegion* r) {
+ void register_young_region_with_region_attr(G1HeapRegion* r) {
_region_attr.set_in_young(r->hrm_index(), r->has_pinned_objects());
}
- inline void register_new_survivor_region_with_region_attr(HeapRegion* r);
- inline void register_region_with_region_attr(HeapRegion* r);
- inline void register_old_region_with_region_attr(HeapRegion* r);
- inline void register_optional_region_with_region_attr(HeapRegion* r);
+ inline void register_new_survivor_region_with_region_attr(G1HeapRegion* r);
+ inline void register_region_with_region_attr(G1HeapRegion* r);
+ inline void register_old_region_with_region_attr(G1HeapRegion* r);
+ inline void register_optional_region_with_region_attr(G1HeapRegion* r);
- void clear_region_attr(const HeapRegion* hr) {
+ void clear_region_attr(const G1HeapRegion* hr) {
_region_attr.clear(hr);
}
@@ -634,7 +631,7 @@ class G1CollectedHeap : public CollectedHeap {
// for all regions.
void verify_region_attr_remset_is_tracked() PRODUCT_RETURN;
- void clear_bitmap_for_region(HeapRegion* hr);
+ void clear_bitmap_for_region(G1HeapRegion* hr);
bool is_user_requested_concurrent_full_gc(GCCause::Cause cause);
@@ -669,14 +666,12 @@ class G1CollectedHeap : public CollectedHeap {
return _old_marking_cycles_completed;
}
- G1HRPrinter* hr_printer() { return &_hr_printer; }
-
// Allocates a new heap region instance.
- HeapRegion* new_heap_region(uint hrs_index, MemRegion mr);
+ G1HeapRegion* new_heap_region(uint hrs_index, MemRegion mr);
// Allocate the highest free region in the reserved heap. This will commit
// regions as necessary.
- HeapRegion* alloc_highest_free_region();
+ G1HeapRegion* alloc_highest_free_region();
// Frees a region by resetting its metadata and adding it to the free list
// passed as a parameter (this is usually a local list which will be appended
@@ -684,10 +679,10 @@ class G1CollectedHeap : public CollectedHeap {
// in another way).
// Callers must ensure they are the only one calling free on the given region
// at the same time.
- void free_region(HeapRegion* hr, FreeRegionList* free_list);
+ void free_region(G1HeapRegion* hr, FreeRegionList* free_list);
// Add the given region to the retained regions collection set candidates.
- void retain_region(HeapRegion* hr);
+ void retain_region(G1HeapRegion* hr);
// It dirties the cards that cover the block so that the post
// write barrier never queues anything when updating objects on this
// block. It is assumed (and in fact we assert) that the block
@@ -701,10 +696,10 @@ class G1CollectedHeap : public CollectedHeap {
// list later).
// The method assumes that only a single thread is ever calling
// this for a particular region at once.
- void free_humongous_region(HeapRegion* hr,
+ void free_humongous_region(G1HeapRegion* hr,
FreeRegionList* free_list);
- // Execute func(HeapRegion* r, bool is_last) on every region covered by the
+ // Execute func(G1HeapRegion* r, bool is_last) on every region covered by the
// given range.
template <typename Func>
void iterate_regions_in_range(MemRegion range, const Func& func);
@@ -730,7 +725,7 @@ class G1CollectedHeap : public CollectedHeap {
private:
// Shrink the garbage-first heap by at most the given size (in bytes!).
- // (Rounds down to a HeapRegion boundary.)
+ // (Rounds down to a G1HeapRegion boundary.)
void shrink(size_t shrink_bytes);
void shrink_helper(size_t expand_bytes);
@@ -754,7 +749,7 @@ class G1CollectedHeap : public CollectedHeap {
// false if unable to do the collection due to the GC locker being
// active, true otherwise.
// precondition: at safepoint on VM thread
- // precondition: !is_gc_active()
+ // precondition: !is_stw_gc_active()
bool do_collection_pause_at_safepoint();
// Helper for do_collection_pause_at_safepoint, containing the guts
@@ -923,7 +918,7 @@ class G1CollectedHeap : public CollectedHeap {
const G1CollectionSet* collection_set() const { return &_collection_set; }
G1CollectionSet* collection_set() { return &_collection_set; }
- inline bool is_collection_set_candidate(const HeapRegion* r) const;
+ inline bool is_collection_set_candidate(const G1HeapRegion* r) const;
void initialize_serviceability() override;
MemoryUsage memory_usage() override;
@@ -998,21 +993,21 @@ class G1CollectedHeap : public CollectedHeap {
uint num_used_regions() const { return num_regions() - num_free_regions(); }
#ifdef ASSERT
- bool is_on_master_free_list(HeapRegion* hr) {
+ bool is_on_master_free_list(G1HeapRegion* hr) {
return _hrm.is_free(hr);
}
#endif // ASSERT
- inline void old_set_add(HeapRegion* hr);
- inline void old_set_remove(HeapRegion* hr);
+ inline void old_set_add(G1HeapRegion* hr);
+ inline void old_set_remove(G1HeapRegion* hr);
size_t non_young_capacity_bytes() {
- return (old_regions_count() + humongous_regions_count()) * HeapRegion::GrainBytes;
+ return (old_regions_count() + humongous_regions_count()) * G1HeapRegion::GrainBytes;
}
// Determine whether the given region is one that we are using as an
// old GC alloc region.
- bool is_old_gc_alloc_region(HeapRegion* hr);
+ bool is_old_gc_alloc_region(G1HeapRegion* hr);
// Perform a collection of the heap; intended for use in implementing
// "System.gc". This probably implies as full a collection as the
@@ -1034,7 +1029,7 @@ class G1CollectedHeap : public CollectedHeap {
// Return "TRUE" iff the given object address is within the collection
// set. Assumes that the reference points into the heap.
- inline bool is_in_cset(const HeapRegion* hr) const;
+ inline bool is_in_cset(const G1HeapRegion* hr) const;
inline bool is_in_cset(oop obj) const;
inline bool is_in_cset(HeapWord* addr) const;
@@ -1081,13 +1076,13 @@ class G1CollectedHeap : public CollectedHeap {
void heap_region_iterate(HeapRegionIndexClosure* blk) const;
// Return the region with the given index. It assumes the index is valid.
- inline HeapRegion* region_at(uint index) const;
- inline HeapRegion* region_at_or_null(uint index) const;
+ inline G1HeapRegion* region_at(uint index) const;
+ inline G1HeapRegion* region_at_or_null(uint index) const;
// Iterate over the regions that the humongous object starting at the given
- // region and apply the given method with the signature f(HeapRegion*) on them.
+ // region and apply the given method with the signature f(G1HeapRegion*) on them.
template <typename Func>
- void humongous_obj_regions_iterate(HeapRegion* start, const Func& f);
+ void humongous_obj_regions_iterate(G1HeapRegion* start, const Func& f);
// Calculate the region index of the given address. Given address must be
// within the heap.
@@ -1135,12 +1130,12 @@ class G1CollectedHeap : public CollectedHeap {
size_t length,
uint worker_id) const;
- // Returns the HeapRegion that contains addr. addr must not be null.
- inline HeapRegion* heap_region_containing(const void* addr) const;
+ // Returns the G1HeapRegion that contains addr. addr must not be null.
+ inline G1HeapRegion* heap_region_containing(const void* addr) const;
- // Returns the HeapRegion that contains addr, or null if that is an uncommitted
+ // Returns the G1HeapRegion that contains addr, or null if that is an uncommitted
// region. addr must not be null.
- inline HeapRegion* heap_region_containing_or_null(const void* addr) const;
+ inline G1HeapRegion* heap_region_containing_or_null(const void* addr) const;
// A CollectedHeap is divided into a dense sequence of "blocks"; that is,
// each address in the (reserved) heap is a member of exactly
@@ -1202,7 +1197,7 @@ class G1CollectedHeap : public CollectedHeap {
return named_heap(CollectedHeap::G1);
}
- void set_region_short_lived_locked(HeapRegion* hr);
+ void set_region_short_lived_locked(G1HeapRegion* hr);
// add appropriate methods for any other surv rate groups
G1SurvivorRegions* survivor() { return &_survivor; }
@@ -1226,7 +1221,7 @@ class G1CollectedHeap : public CollectedHeap {
inline static bool is_obj_filler(const oop obj);
// Determine if an object is dead, given the object and also
// the region to which the object belongs.
- inline bool is_obj_dead(const oop obj, const HeapRegion* hr) const;
+ inline bool is_obj_dead(const oop obj, const G1HeapRegion* hr) const;
// Determine if an object is dead, given only the object itself.
// This will find the region to which the object belongs and
@@ -1234,7 +1229,7 @@ class G1CollectedHeap : public CollectedHeap {
// If obj is null it is not dead.
inline bool is_obj_dead(const oop obj) const;
- inline bool is_obj_dead_full(const oop obj, const HeapRegion* hr) const;
+ inline bool is_obj_dead_full(const oop obj, const G1HeapRegion* hr) const;
inline bool is_obj_dead_full(const oop obj) const;
// Mark the live object that failed evacuation in the bitmap.
@@ -1291,7 +1286,7 @@ class G1CollectedHeap : public CollectedHeap {
// are the same as those above.
bool is_obj_dead_cond(const oop obj,
- const HeapRegion* hr,
+ const G1HeapRegion* hr,
const VerifyOption vo) const;
bool is_obj_dead_cond(const oop obj,
diff --git a/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp b/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
index 30ca270e1b568..ed594d72d06e7 100644
--- a/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectedHeap.inline.hpp
@@ -103,17 +103,17 @@ inline size_t G1CollectedHeap::clamp_plab_size(size_t value) const {
// Inline functions for G1CollectedHeap
// Return the region with the given index. It assumes the index is valid.
-inline HeapRegion* G1CollectedHeap::region_at(uint index) const { return _hrm.at(index); }
+inline G1HeapRegion* G1CollectedHeap::region_at(uint index) const { return _hrm.at(index); }
// Return the region with the given index, or null if unmapped. It assumes the index is valid.
-inline HeapRegion* G1CollectedHeap::region_at_or_null(uint index) const { return _hrm.at_or_null(index); }
+inline G1HeapRegion* G1CollectedHeap::region_at_or_null(uint index) const { return _hrm.at_or_null(index); }
template <typename Func>
-inline void G1CollectedHeap::humongous_obj_regions_iterate(HeapRegion* start, const Func& f) {
+inline void G1CollectedHeap::humongous_obj_regions_iterate(G1HeapRegion* start, const Func& f) {
assert(start->is_starts_humongous(), "must be");
do {
- HeapRegion* next = _hrm.next_region_in_humongous(start);
+ G1HeapRegion* next = _hrm.next_region_in_humongous(start);
f(start);
start = next;
} while (start != nullptr);
@@ -123,29 +123,29 @@ inline uint G1CollectedHeap::addr_to_region(const void* addr) const {
assert(is_in_reserved(addr),
"Cannot calculate region index for address " PTR_FORMAT " that is outside of the heap [" PTR_FORMAT ", " PTR_FORMAT ")",
p2i(addr), p2i(reserved().start()), p2i(reserved().end()));
- return (uint)(pointer_delta(addr, reserved().start(), sizeof(uint8_t)) >> HeapRegion::LogOfHRGrainBytes);
+ return (uint)(pointer_delta(addr, reserved().start(), sizeof(uint8_t)) >> G1HeapRegion::LogOfHRGrainBytes);
}
inline HeapWord* G1CollectedHeap::bottom_addr_for_region(uint index) const {
- return _hrm.reserved().start() + index * HeapRegion::GrainWords;
+ return _hrm.reserved().start() + index * G1HeapRegion::GrainWords;
}
-inline HeapRegion* G1CollectedHeap::heap_region_containing(const void* addr) const {
+inline G1HeapRegion* G1CollectedHeap::heap_region_containing(const void* addr) const {
uint const region_idx = addr_to_region(addr);
return region_at(region_idx);
}
-inline HeapRegion* G1CollectedHeap::heap_region_containing_or_null(const void* addr) const {
+inline G1HeapRegion* G1CollectedHeap::heap_region_containing_or_null(const void* addr) const {
uint const region_idx = addr_to_region(addr);
return region_at_or_null(region_idx);
}
-inline void G1CollectedHeap::old_set_add(HeapRegion* hr) {
+inline void G1CollectedHeap::old_set_add(G1HeapRegion* hr) {
_old_set.add(hr);
}
-inline void G1CollectedHeap::old_set_remove(HeapRegion* hr) {
+inline void G1CollectedHeap::old_set_remove(G1HeapRegion* hr) {
_old_set.remove(hr);
}
@@ -160,7 +160,7 @@ G1CollectedHeap::dirty_young_block(HeapWord* start, size_t word_size) {
// Assign the containing region to containing_hr so that we don't
// have to keep calling heap_region_containing() in the
// asserts below.
- DEBUG_ONLY(HeapRegion* containing_hr = heap_region_containing(start);)
+ DEBUG_ONLY(G1HeapRegion* containing_hr = heap_region_containing(start);)
assert(word_size > 0, "pre-condition");
assert(containing_hr->is_in(start), "it should contain start");
assert(containing_hr->is_young(), "it should be young");
@@ -193,7 +193,7 @@ inline bool G1CollectedHeap::is_in_cset(HeapWord* addr) const {
return _region_attr.is_in_cset(addr);
}
-bool G1CollectedHeap::is_in_cset(const HeapRegion* hr) const {
+bool G1CollectedHeap::is_in_cset(const G1HeapRegion* hr) const {
return _region_attr.is_in_cset(hr);
}
@@ -215,23 +215,23 @@ void G1CollectedHeap::register_humongous_candidate_region_with_region_attr(uint
_region_attr.set_humongous_candidate(index);
}
-void G1CollectedHeap::register_new_survivor_region_with_region_attr(HeapRegion* r) {
+void G1CollectedHeap::register_new_survivor_region_with_region_attr(G1HeapRegion* r) {
_region_attr.set_new_survivor_region(r->hrm_index());
}
-void G1CollectedHeap::register_region_with_region_attr(HeapRegion* r) {
+void G1CollectedHeap::register_region_with_region_attr(G1HeapRegion* r) {
_region_attr.set_remset_is_tracked(r->hrm_index(), r->rem_set()->is_tracked());
_region_attr.set_is_pinned(r->hrm_index(), r->has_pinned_objects());
}
-void G1CollectedHeap::register_old_region_with_region_attr(HeapRegion* r) {
+void G1CollectedHeap::register_old_region_with_region_attr(G1HeapRegion* r) {
assert(!r->has_pinned_objects(), "must be");
assert(r->rem_set()->is_complete(), "must be");
_region_attr.set_in_old(r->hrm_index(), r->rem_set()->is_tracked());
_rem_set->exclude_region_from_scan(r->hrm_index());
}
-void G1CollectedHeap::register_optional_region_with_region_attr(HeapRegion* r) {
+void G1CollectedHeap::register_optional_region_with_region_attr(G1HeapRegion* r) {
_region_attr.set_optional(r->hrm_index(), r->rem_set()->is_tracked());
}
@@ -252,7 +252,7 @@ inline bool G1CollectedHeap::is_obj_filler(const oop obj) {
return k == Universe::fillerArrayKlass() || k == vmClasses::FillerObject_klass();
}
-inline bool G1CollectedHeap::is_obj_dead(const oop obj, const HeapRegion* hr) const {
+inline bool G1CollectedHeap::is_obj_dead(const oop obj, const G1HeapRegion* hr) const {
if (hr->is_in_parsable_area(obj)) {
// This object is in the parsable part of the heap, live unless scrubbed.
return is_obj_filler(obj);
@@ -265,7 +265,7 @@ inline bool G1CollectedHeap::is_obj_dead(const oop obj, const HeapRegion* hr) co
inline void G1CollectedHeap::pin_object(JavaThread* thread, oop obj) {
assert(obj != nullptr, "obj must not be null");
- assert(!is_gc_active(), "must not pin objects during a GC");
+ assert(!is_stw_gc_active(), "must not pin objects during a GC pause");
assert(obj->is_typeArray(), "must be typeArray");
uint obj_region_idx = heap_region_containing(obj)->hrm_index();
@@ -274,7 +274,7 @@ inline void G1CollectedHeap::pin_object(JavaThread* thread, oop obj) {
inline void G1CollectedHeap::unpin_object(JavaThread* thread, oop obj) {
assert(obj != nullptr, "obj must not be null");
- assert(!is_gc_active(), "must not unpin objects during a GC");
+ assert(!is_stw_gc_active(), "must not unpin objects during a GC pause");
uint obj_region_idx = heap_region_containing(obj)->hrm_index();
G1ThreadLocalData::pin_count_cache(thread).dec_count(obj_region_idx);
@@ -286,7 +286,7 @@ inline bool G1CollectedHeap::is_obj_dead(const oop obj) const {
return is_obj_dead(obj, heap_region_containing(obj));
}
-inline bool G1CollectedHeap::is_obj_dead_full(const oop obj, const HeapRegion* hr) const {
+inline bool G1CollectedHeap::is_obj_dead_full(const oop obj, const G1HeapRegion* hr) const {
return !is_marked(obj);
}
@@ -311,7 +311,7 @@ inline void G1CollectedHeap::set_humongous_is_live(oop obj) {
}
}
-inline bool G1CollectedHeap::is_collection_set_candidate(const HeapRegion* r) const {
+inline bool G1CollectedHeap::is_collection_set_candidate(const G1HeapRegion* r) const {
const G1CollectionSetCandidates* candidates = collection_set()->candidates();
return candidates->contains(r);
}
diff --git a/src/hotspot/share/gc/g1/g1CollectionSet.cpp b/src/hotspot/share/gc/g1/g1CollectionSet.cpp
index 9174221b21ed4..a70d003ab53e5 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSet.cpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSet.cpp
@@ -95,7 +95,7 @@ void G1CollectionSet::abandon_all_candidates() {
_optional_old_regions.clear();
}
-void G1CollectionSet::add_old_region(HeapRegion* hr) {
+void G1CollectionSet::add_old_region(G1HeapRegion* hr) {
assert_at_safepoint_on_vm_thread();
assert(_inc_build_state == Active,
@@ -134,7 +134,7 @@ void G1CollectionSet::iterate(HeapRegionClosure* cl) const {
OrderAccess::loadload();
for (uint i = 0; i < len; i++) {
- HeapRegion* r = _g1h->region_at(_collection_set_regions[i]);
+ G1HeapRegion* r = _g1h->region_at(_collection_set_regions[i]);
bool result = cl->do_heap_region(r);
if (result) {
cl->set_incomplete();
@@ -152,7 +152,7 @@ void G1CollectionSet::par_iterate(HeapRegionClosure* cl,
void G1CollectionSet::iterate_optional(HeapRegionClosure* cl) const {
assert_at_safepoint();
- for (HeapRegion* r : _optional_old_regions) {
+ for (G1HeapRegion* r : _optional_old_regions) {
bool result = cl->do_heap_region(r);
guarantee(!result, "Must not cancel iteration");
}
@@ -176,7 +176,7 @@ void G1CollectionSet::iterate_part_from(HeapRegionClosure* cl,
worker_id);
}
-void G1CollectionSet::add_young_region_common(HeapRegion* hr) {
+void G1CollectionSet::add_young_region_common(G1HeapRegion* hr) {
assert(hr->is_young(), "invariant");
assert(_inc_build_state == Active, "Precondition");
@@ -196,12 +196,12 @@ void G1CollectionSet::add_young_region_common(HeapRegion* hr) {
_collection_set_cur_length++;
}
-void G1CollectionSet::add_survivor_regions(HeapRegion* hr) {
+void G1CollectionSet::add_survivor_regions(G1HeapRegion* hr) {
assert(hr->is_survivor(), "Must only add survivor regions, but is %s", hr->get_type_str());
add_young_region_common(hr);
}
-void G1CollectionSet::add_eden_region(HeapRegion* hr) {
+void G1CollectionSet::add_eden_region(G1HeapRegion* hr) {
assert(hr->is_eden(), "Must only add eden regions, but is %s", hr->get_type_str());
add_young_region_common(hr);
}
@@ -213,7 +213,7 @@ class G1VerifyYoungAgesClosure : public HeapRegionClosure {
G1VerifyYoungAgesClosure() : HeapRegionClosure(), _valid(true) { }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
guarantee(r->is_young(), "Region must be young but is %s", r->get_type_str());
if (!r->has_surv_rate_group()) {
@@ -251,7 +251,7 @@ class G1PrintCollectionSetDetailClosure : public HeapRegionClosure {
public:
G1PrintCollectionSetDetailClosure(outputStream* st) : HeapRegionClosure(), _st(st) { }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
assert(r->in_collection_set(), "Region %u should be in collection set", r->hrm_index());
G1ConcurrentMark* cm = G1CollectedHeap::heap()->concurrent_mark();
_st->print_cr(" " HR_FORMAT ", TAMS: " PTR_FORMAT " PB: " PTR_FORMAT ", age: %4d",
@@ -387,7 +387,7 @@ void G1CollectionSet::finalize_old_part(double time_remaining_ms) {
}
void G1CollectionSet::move_candidates_to_collection_set(G1CollectionCandidateRegionList* regions) {
- for (HeapRegion* r : *regions) {
+ for (G1HeapRegion* r : *regions) {
_g1h->clear_region_attr(r);
add_old_region(r);
}
@@ -396,7 +396,7 @@ void G1CollectionSet::move_candidates_to_collection_set(G1CollectionCandidateReg
void G1CollectionSet::prepare_optional_regions(G1CollectionCandidateRegionList* regions) {
uint cur_index = 0;
- for (HeapRegion* r : *regions) {
+ for (G1HeapRegion* r : *regions) {
assert(r->is_old(), "the region should be old");
assert(!r->in_collection_set(), "should not already be in the CSet");
@@ -412,7 +412,7 @@ void G1CollectionSet::move_pinned_marking_to_retained(G1CollectionCandidateRegio
}
candidates()->remove(regions);
- for (HeapRegion* r : *regions) {
+ for (G1HeapRegion* r : *regions) {
assert(r->has_pinned_objects(), "must be pinned");
assert(r->rem_set()->is_complete(), "must be complete");
candidates()->add_retained_region_unsorted(r);
@@ -427,7 +427,7 @@ void G1CollectionSet::drop_pinned_retained_regions(G1CollectionCandidateRegionLi
candidates()->remove(regions);
// We can now drop these regions' remembered sets.
- for (HeapRegion* r : *regions) {
+ for (G1HeapRegion* r : *regions) {
r->rem_set()->clear(true /* only_cardset */);
}
}
@@ -457,7 +457,7 @@ bool G1CollectionSet::finalize_optional_for_evacuation(double remaining_pause_ti
}
void G1CollectionSet::abandon_optional_collection_set(G1ParScanThreadStateSet* pss) {
- for (HeapRegion* r : _optional_old_regions) {
+ for (G1HeapRegion* r : _optional_old_regions) {
pss->record_unused_optional_region(r);
// Clear collection set marker and make sure that the remembered set information
// is correct as we still need it later.
@@ -486,7 +486,7 @@ class G1VerifyYoungCSetIndicesClosure : public HeapRegionClosure {
FREE_C_HEAP_ARRAY(int, _heap_region_indices);
}
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
const uint idx = r->young_index_in_cset();
assert(idx > 0, "Young index must be set for all regions in the incremental collection set but is not for region %u.", r->hrm_index());
diff --git a/src/hotspot/share/gc/g1/g1CollectionSet.hpp b/src/hotspot/share/gc/g1/g1CollectionSet.hpp
index fba166e9c75b0..e78a426e40302 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSet.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSet.hpp
@@ -35,7 +35,7 @@ class G1GCPhaseTimes;
class G1ParScanThreadStateSet;
class G1Policy;
class G1SurvivorRegions;
-class HeapRegion;
+class G1HeapRegion;
class HeapRegionClaimer;
class HeapRegionClosure;
@@ -168,10 +168,10 @@ class G1CollectionSet {
void verify_young_cset_indices() const NOT_DEBUG_RETURN;
// Update the incremental collection set information when adding a region.
- void add_young_region_common(HeapRegion* hr);
+ void add_young_region_common(G1HeapRegion* hr);
// Add the given old region to the head of the current collection set.
- void add_old_region(HeapRegion* hr);
+ void add_old_region(G1HeapRegion* hr);
void move_candidates_to_collection_set(G1CollectionCandidateRegionList* regions);
// Prepares old regions in the given set for optional collection later. Does not
@@ -271,10 +271,10 @@ class G1CollectionSet {
void abandon_optional_collection_set(G1ParScanThreadStateSet* pss);
// Add eden region to the collection set.
- void add_eden_region(HeapRegion* hr);
+ void add_eden_region(G1HeapRegion* hr);
// Add survivor region to the collection set.
- void add_survivor_regions(HeapRegion* hr);
+ void add_survivor_regions(G1HeapRegion* hr);
#ifndef PRODUCT
bool verify_young_ages();
diff --git a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.cpp b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.cpp
index b4649414b10e6..809f1ce9ce895 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.cpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.cpp
@@ -23,7 +23,7 @@
*/
#include "precompiled.hpp"
-#include "gc/g1/g1CollectionSetCandidates.hpp"
+#include "gc/g1/g1CollectionSetCandidates.inline.hpp"
#include "gc/g1/g1CollectionSetChooser.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "utilities/bitMap.inline.hpp"
@@ -38,13 +38,13 @@ void G1CollectionCandidateList::set(G1CollectionSetCandidateInfo* candidate_info
_candidates.appendAll(&a);
}
-void G1CollectionCandidateList::append_unsorted(HeapRegion* r) {
+void G1CollectionCandidateList::append_unsorted(G1HeapRegion* r) {
G1CollectionSetCandidateInfo c(r, r->calc_gc_efficiency());
_candidates.append(c);
}
void G1CollectionCandidateList::sort_by_efficiency() {
- _candidates.sort(compare);
+ _candidates.sort(compare_gc_efficiency);
}
void G1CollectionCandidateList::remove(G1CollectionCandidateRegionList* other) {
@@ -94,7 +94,22 @@ void G1CollectionCandidateList::verify() {
}
#endif
-int G1CollectionCandidateList::compare(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2) {
+int G1CollectionCandidateList::compare_gc_efficiency(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2) {
+ assert(ci1->_r != nullptr && ci2->_r != nullptr, "Should not be!");
+
+ double gc_eff1 = ci1->_gc_efficiency;
+ double gc_eff2 = ci2->_gc_efficiency;
+
+ if (gc_eff1 > gc_eff2) {
+ return -1;
+ } else if (gc_eff1 < gc_eff2) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
+
+int G1CollectionCandidateList::compare_reclaimable_bytes(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2) {
// Make sure that null entries are moved to the end.
if (ci1->_r == nullptr) {
if (ci2->_r == nullptr) {
@@ -106,12 +121,12 @@ int G1CollectionCandidateList::compare(G1CollectionSetCandidateInfo* ci1, G1Coll
return -1;
}
- double gc_eff1 = ci1->_gc_efficiency;
- double gc_eff2 = ci2->_gc_efficiency;
+ size_t reclaimable1 = ci1->_r->reclaimable_bytes();
+ size_t reclaimable2 = ci2->_r->reclaimable_bytes();
- if (gc_eff1 > gc_eff2) {
+ if (reclaimable1 > reclaimable2) {
return -1;
- } if (gc_eff1 < gc_eff2) {
+ } else if (reclaimable1 < reclaimable2) {
return 1;
} else {
return 0;
@@ -120,7 +135,7 @@ int G1CollectionCandidateList::compare(G1CollectionSetCandidateInfo* ci1, G1Coll
G1CollectionCandidateRegionList::G1CollectionCandidateRegionList() : _regions(2, mtGC) { }
-void G1CollectionCandidateRegionList::append(HeapRegion* r) {
+void G1CollectionCandidateRegionList::append(G1HeapRegion* r) {
assert(!_regions.contains(r), "must be");
_regions.append(r);
}
@@ -129,7 +144,7 @@ void G1CollectionCandidateRegionList::remove_prefix(G1CollectionCandidateRegionL
#ifdef ASSERT
// Check that the given list is a prefix of this list.
int i = 0;
- for (HeapRegion* r : *other) {
+ for (G1HeapRegion* r : *other) {
assert(_regions.at(i) == r, "must be in order, but element %d is not", i);
i++;
}
@@ -141,7 +156,7 @@ void G1CollectionCandidateRegionList::remove_prefix(G1CollectionCandidateRegionL
_regions.remove_till(other->length());
}
-HeapRegion* G1CollectionCandidateRegionList::at(uint index) {
+G1HeapRegion* G1CollectionCandidateRegionList::at(uint index) {
return _regions.at(index);
}
@@ -161,7 +176,7 @@ G1CollectionSetCandidates::~G1CollectionSetCandidates() {
FREE_C_HEAP_ARRAY(CandidateOrigin, _contains_map);
}
-bool G1CollectionSetCandidates::is_from_marking(HeapRegion* r) const {
+bool G1CollectionSetCandidates::is_from_marking(G1HeapRegion* r) const {
assert(contains(r), "must be");
return _contains_map[r->hrm_index()] == CandidateOrigin::Marking;
}
@@ -182,6 +197,17 @@ void G1CollectionSetCandidates::clear() {
_last_marking_candidates_length = 0;
}
+void G1CollectionSetCandidates::sort_marking_by_efficiency() {
+ G1CollectionCandidateListIterator iter = _marking_regions.begin();
+ for (; iter != _marking_regions.end(); ++iter) {
+ G1HeapRegion* hr = (*iter)->_r;
+ (*iter)->_gc_efficiency = hr->calc_gc_efficiency();
+ }
+ _marking_regions.sort_by_efficiency();
+
+ _marking_regions.verify();
+}
+
void G1CollectionSetCandidates::set_candidates_from_marking(G1CollectionSetCandidateInfo* candidate_infos,
uint num_infos) {
assert(_marking_regions.length() == 0, "must be empty before adding new ones");
@@ -190,7 +216,7 @@ void G1CollectionSetCandidates::set_candidates_from_marking(G1CollectionSetCandi
_marking_regions.set(candidate_infos, num_infos);
for (uint i = 0; i < num_infos; i++) {
- HeapRegion* r = candidate_infos[i]._r;
+ G1HeapRegion* r = candidate_infos[i]._r;
assert(!contains(r), "must not contain region %u", r->hrm_index());
_contains_map[r->hrm_index()] = CandidateOrigin::Marking;
}
@@ -207,7 +233,7 @@ void G1CollectionSetCandidates::sort_by_efficiency() {
_retained_regions.verify();
}
-void G1CollectionSetCandidates::add_retained_region_unsorted(HeapRegion* r) {
+void G1CollectionSetCandidates::add_retained_region_unsorted(G1HeapRegion* r) {
assert(!contains(r), "must not contain region %u", r->hrm_index());
_contains_map[r->hrm_index()] = CandidateOrigin::Retained;
_retained_regions.append_unsorted(r);
@@ -223,7 +249,7 @@ void G1CollectionSetCandidates::remove(G1CollectionCandidateRegionList* other) {
G1CollectionCandidateRegionList other_marking_regions;
G1CollectionCandidateRegionList other_retained_regions;
- for (HeapRegion* r : *other) {
+ for (G1HeapRegion* r : *other) {
if (is_from_marking(r)) {
other_marking_regions.append(r);
} else {
@@ -234,7 +260,7 @@ void G1CollectionSetCandidates::remove(G1CollectionCandidateRegionList* other) {
_marking_regions.remove(&other_marking_regions);
_retained_regions.remove(&other_retained_regions);
- for (HeapRegion* r : *other) {
+ for (G1HeapRegion* r : *other) {
assert(contains(r), "must contain region %u", r->hrm_index());
_contains_map[r->hrm_index()] = CandidateOrigin::Invalid;
}
@@ -263,7 +289,7 @@ void G1CollectionSetCandidates::verify_helper(G1CollectionCandidateList* list, u
list->verify();
for (uint i = 0; i < (uint)list->length(); i++) {
- HeapRegion* r = list->at(i)._r;
+ G1HeapRegion* r = list->at(i)._r;
if (is_from_marking(r)) {
from_marking++;
@@ -308,13 +334,13 @@ void G1CollectionSetCandidates::verify() {
}
#endif
-bool G1CollectionSetCandidates::contains(const HeapRegion* r) const {
+bool G1CollectionSetCandidates::contains(const G1HeapRegion* r) const {
const uint index = r->hrm_index();
assert(index < _max_regions, "must be");
return _contains_map[index] != CandidateOrigin::Invalid;
}
-const char* G1CollectionSetCandidates::get_short_type_str(const HeapRegion* r) const {
+const char* G1CollectionSetCandidates::get_short_type_str(const G1HeapRegion* r) const {
static const char* type_strings[] = {
"Ci", // Invalid
"Cm", // Marking
diff --git a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.hpp b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.hpp
index 4eb5523d0399e..7d24f02246183 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2019, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2019, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -26,7 +26,6 @@
#define SHARE_GC_G1_G1COLLECTIONSETCANDIDATES_HPP
#include "gc/g1/g1CollectionSetCandidates.hpp"
-#include "gc/g1/g1_globals.hpp"
#include "gc/shared/gc_globals.hpp"
#include "gc/shared/workerThread.hpp"
#include "memory/allocation.hpp"
@@ -36,29 +35,29 @@
class G1CollectionCandidateList;
class G1CollectionSetCandidates;
-class HeapRegion;
+class G1HeapRegion;
class HeapRegionClosure;
-using G1CollectionCandidateRegionListIterator = GrowableArrayIterator<HeapRegion*>;
+using G1CollectionCandidateRegionListIterator = GrowableArrayIterator<G1HeapRegion*>;
-// A set of HeapRegion*, a thin wrapper around GrowableArray.
+// A set of G1HeapRegion*, a thin wrapper around GrowableArray.
class G1CollectionCandidateRegionList {
-  GrowableArray<HeapRegion*> _regions;
+  GrowableArray<G1HeapRegion*> _regions;
public:
G1CollectionCandidateRegionList();
- // Append a HeapRegion to the end of this list. The region must not be in the list
+ // Append a G1HeapRegion to the end of this list. The region must not be in the list
// already.
- void append(HeapRegion* r);
- // Remove the given list of HeapRegion* from this list. The given list must be a prefix
+ void append(G1HeapRegion* r);
+ // Remove the given list of G1HeapRegion* from this list. The given list must be a prefix
// of this list.
void remove_prefix(G1CollectionCandidateRegionList* list);
// Empty contents of the list.
void clear();
- HeapRegion* at(uint index);
+ G1HeapRegion* at(uint index);
uint length() const { return (uint)_regions.length(); }
@@ -67,12 +66,12 @@ class G1CollectionCandidateRegionList {
};
struct G1CollectionSetCandidateInfo {
- HeapRegion* _r;
+ G1HeapRegion* _r;
double _gc_efficiency;
uint _num_unreclaimed; // Number of GCs this region has been found unreclaimable.
G1CollectionSetCandidateInfo() : G1CollectionSetCandidateInfo(nullptr, 0.0) { }
- G1CollectionSetCandidateInfo(HeapRegion* r, double gc_efficiency) : _r(r), _gc_efficiency(gc_efficiency), _num_unreclaimed(0) { }
+ G1CollectionSetCandidateInfo(G1HeapRegion* r, double gc_efficiency) : _r(r), _gc_efficiency(gc_efficiency), _num_unreclaimed(0) { }
bool update_num_unreclaimed() {
++_num_unreclaimed;
@@ -106,8 +105,8 @@ class G1CollectionCandidateList : public CHeapObj<mtGC> {
// Put the given set of candidates into this list, preserving the efficiency ordering.
void set(G1CollectionSetCandidateInfo* candidate_infos, uint num_infos);
- // Add the given HeapRegion to this list at the end, (potentially) making the list unsorted.
- void append_unsorted(HeapRegion* r);
+ // Add the given G1HeapRegion to this list at the end, (potentially) making the list unsorted.
+ void append_unsorted(G1HeapRegion* r);
// Restore sorting order by decreasing gc efficiency, using the existing efficiency
// values.
void sort_by_efficiency();
@@ -129,7 +128,9 @@ class G1CollectionCandidateList : public CHeapObj {
// Comparison function to order regions in decreasing GC efficiency order. This
// will cause regions with a lot of live objects and large remembered sets to end
// up at the end of the list.
- static int compare(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2);
+ static int compare_gc_efficiency(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2);
+
+ static int compare_reclaimable_bytes(G1CollectionSetCandidateInfo* ci1, G1CollectionSetCandidateInfo* ci2);
G1CollectionCandidateListIterator begin() {
return G1CollectionCandidateListIterator(this, 0);
@@ -150,7 +151,7 @@ class G1CollectionSetCandidatesIterator : public StackObj {
G1CollectionSetCandidatesIterator(G1CollectionSetCandidates* which, uint position);
G1CollectionSetCandidatesIterator& operator++();
- HeapRegion* operator*();
+ G1HeapRegion* operator*();
bool operator==(const G1CollectionSetCandidatesIterator& rhs);
bool operator!=(const G1CollectionSetCandidatesIterator& rhs);
@@ -189,7 +190,7 @@ class G1CollectionSetCandidates : public CHeapObj<mtGC> {
// The number of regions from the last merge of candidates from the marking.
uint _last_marking_candidates_length;
- bool is_from_marking(HeapRegion* r) const;
+ bool is_from_marking(G1HeapRegion* r) const;
public:
G1CollectionSetCandidates();
@@ -213,16 +214,18 @@ class G1CollectionSetCandidates : public CHeapObj<mtGC> {
void sort_by_efficiency();
+ void sort_marking_by_efficiency();
+
// Add the given region to the set of retained regions without regards to the
// gc efficiency sorting. The retained regions must be re-sorted manually later.
- void add_retained_region_unsorted(HeapRegion* r);
+ void add_retained_region_unsorted(G1HeapRegion* r);
// Remove the given regions from the candidates. All given regions must be part
// of the candidates.
void remove(G1CollectionCandidateRegionList* other);
- bool contains(const HeapRegion* r) const;
+ bool contains(const G1HeapRegion* r) const;
- const char* get_short_type_str(const HeapRegion* r) const;
+ const char* get_short_type_str(const G1HeapRegion* r) const;
bool is_empty() const;
diff --git a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.inline.hpp b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.inline.hpp
index 638a50b474535..9d602ace8c3a5 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSetCandidates.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSetCandidates.inline.hpp
@@ -61,7 +61,7 @@ inline G1CollectionSetCandidatesIterator& G1CollectionSetCandidatesIterator::ope
return *this;
}
-inline HeapRegion* G1CollectionSetCandidatesIterator::operator*() {
+inline G1HeapRegion* G1CollectionSetCandidatesIterator::operator*() {
uint length = _which->marking_regions_length();
if (_position < length) {
return _which->_marking_regions.at(_position)._r;
diff --git a/src/hotspot/share/gc/g1/g1CollectionSetChooser.cpp b/src/hotspot/share/gc/g1/g1CollectionSetChooser.cpp
index 92a342ec7579d..a45e7de67bf37 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSetChooser.cpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSetChooser.cpp
@@ -91,20 +91,20 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
}
// Set element in array.
- void set(uint idx, HeapRegion* hr) {
+ void set(uint idx, G1HeapRegion* hr) {
assert(idx < _max_size, "Index %u out of bounds %u", idx, _max_size);
assert(_data[idx]._r == nullptr, "Value must not have been set.");
- _data[idx] = CandidateInfo(hr, hr->calc_gc_efficiency());
+ _data[idx] = CandidateInfo(hr, 0.0);
}
- void sort_by_efficiency() {
+ void sort_by_reclaimable_bytes() {
if (_cur_claim_idx == 0) {
return;
}
for (uint i = _cur_claim_idx; i < _max_size; i++) {
assert(_data[i]._r == nullptr, "must be");
}
- qsort(_data, _cur_claim_idx, sizeof(_data[0]), (_sort_Fn)G1CollectionCandidateList::compare);
+ qsort(_data, _cur_claim_idx, sizeof(_data[0]), (_sort_Fn)G1CollectionCandidateList::compare_reclaimable_bytes);
for (uint i = _cur_claim_idx; i < _max_size; i++) {
assert(_data[i]._r == nullptr, "must be");
}
@@ -124,7 +124,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
uint _regions_added;
- void add_region(HeapRegion* hr) {
+ void add_region(G1HeapRegion* hr) {
if (_cur_chunk_idx == _cur_chunk_end) {
_array->claim_chunk(_cur_chunk_idx, _cur_chunk_end);
}
@@ -143,7 +143,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
_cur_chunk_end(0),
_regions_added(0) { }
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
// Candidates from marking are always old; also keep regions that are already
// collection set candidates (some retained regions) in that list.
if (!r->is_old() || r->is_collection_set_candidate()) {
@@ -152,8 +152,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
}
// Can not add a region without a remembered set to the candidates.
- assert(!r->rem_set()->is_updating(), "must be");
- if (!r->rem_set()->is_complete()) {
+ if (!r->rem_set()->is_tracked()) {
return false;
}
@@ -213,7 +212,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
uint max_to_prune = num_candidates - min_old_cset_length;
while (true) {
- HeapRegion* r = data[num_candidates - num_pruned - 1]._r;
+ G1HeapRegion* r = data[num_candidates - num_pruned - 1]._r;
size_t const reclaimable = r->reclaimable_bytes();
if (num_pruned >= max_to_prune ||
wasted_bytes + reclaimable > allowed_waste) {
@@ -249,7 +248,7 @@ class G1BuildCandidateRegionsTask : public WorkerTask {
}
void sort_and_prune_into(G1CollectionSetCandidates* candidates) {
- _result.sort_by_efficiency();
+ _result.sort_by_reclaimable_bytes();
prune(_result.array());
candidates->set_candidates_from_marking(_result.array(),
_num_regions_added);
diff --git a/src/hotspot/share/gc/g1/g1CollectionSetChooser.hpp b/src/hotspot/share/gc/g1/g1CollectionSetChooser.hpp
index 62f9b2a5cef02..f49287f823b25 100644
--- a/src/hotspot/share/gc/g1/g1CollectionSetChooser.hpp
+++ b/src/hotspot/share/gc/g1/g1CollectionSetChooser.hpp
@@ -40,7 +40,7 @@ class G1CollectionSetChooser : public AllStatic {
public:
static size_t mixed_gc_live_threshold_bytes() {
- return HeapRegion::GrainBytes * (size_t) G1MixedGCLiveThresholdPercent / 100;
+ return G1HeapRegion::GrainBytes * (size_t)G1MixedGCLiveThresholdPercent / 100;
}
static bool region_occupancy_low_enough_for_evac(size_t live_bytes) {
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp b/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
index fdb380ae99f28..827c538a2e59c 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.cpp
@@ -37,6 +37,7 @@
#include "gc/g1/g1DirtyCardQueue.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/g1/g1HeapRegionManager.hpp"
+#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1HeapRegionRemSet.inline.hpp"
#include "gc/g1/g1HeapRegionSet.inline.hpp"
#include "gc/g1/g1HeapVerifier.hpp"
@@ -579,7 +580,7 @@ void G1ConcurrentMark::reset() {
_root_regions.reset();
}
-void G1ConcurrentMark::clear_statistics(HeapRegion* r) {
+void G1ConcurrentMark::clear_statistics(G1HeapRegion* r) {
uint region_idx = r->hrm_index();
for (uint j = 0; j < _max_num_tasks; ++j) {
_tasks[j]->clear_mark_stats_cache(region_idx);
@@ -588,7 +589,7 @@ void G1ConcurrentMark::clear_statistics(HeapRegion* r) {
_region_mark_stats[region_idx].clear();
}
-void G1ConcurrentMark::humongous_object_eagerly_reclaimed(HeapRegion* r) {
+void G1ConcurrentMark::humongous_object_eagerly_reclaimed(G1HeapRegion* r) {
assert_at_safepoint();
assert(r->is_starts_humongous(), "Got humongous continues region here");
@@ -601,7 +602,7 @@ void G1ConcurrentMark::humongous_object_eagerly_reclaimed(HeapRegion* r) {
// Clear any statistics about the region gathered so far.
_g1h->humongous_obj_regions_iterate(r,
- [&] (HeapRegion* r) {
+ [&] (G1HeapRegion* r) {
clear_statistics(r);
});
}
@@ -696,7 +697,7 @@ class G1ClearBitMapTask : public WorkerTask {
return false;
}
- HeapWord* region_clear_limit(HeapRegion* r) {
+ HeapWord* region_clear_limit(G1HeapRegion* r) {
// During a Concurrent Undo Mark cycle, the per region top_at_mark_start and
// live_words data are current wrt the _mark_bitmap. We use this information
// to only clear ranges of the bitmap that require clearing.
@@ -720,7 +721,7 @@ class G1ClearBitMapTask : public WorkerTask {
_suspendible(suspendible)
{ }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
if (has_aborted()) {
return true;
}
@@ -782,7 +783,7 @@ class G1ClearBitMapTask : public WorkerTask {
void G1ConcurrentMark::clear_bitmap(WorkerThreads* workers, bool may_yield) {
assert(may_yield || SafepointSynchronize::is_at_safepoint(), "Non-yielding bitmap clear only allowed at safepoint.");
- size_t const num_bytes_to_clear = (HeapRegion::GrainBytes * _g1h->num_regions()) / G1CMBitMap::heap_map_factor();
+ size_t const num_bytes_to_clear = (G1HeapRegion::GrainBytes * _g1h->num_regions()) / G1CMBitMap::heap_map_factor();
size_t const num_chunks = align_up(num_bytes_to_clear, G1ClearBitMapTask::chunk_size()) / G1ClearBitMapTask::chunk_size();
uint const num_workers = (uint)MIN2(num_chunks, (size_t)workers->active_workers());
@@ -868,7 +869,7 @@ class NoteStartOfMarkHRClosure : public HeapRegionClosure {
public:
NoteStartOfMarkHRClosure() : HeapRegionClosure(), _cm(G1CollectedHeap::heap()->concurrent_mark()) { }
- bool do_heap_region(HeapRegion* r) override {
+ bool do_heap_region(G1HeapRegion* r) override {
if (r->is_old_or_humongous() && !r->is_collection_set_candidate() && !r->in_collection_set()) {
_cm->update_top_at_mark_start(r);
}
@@ -1034,7 +1035,7 @@ uint G1ConcurrentMark::calc_active_marking_workers() {
void G1ConcurrentMark::scan_root_region(const MemRegion* region, uint worker_id) {
#ifdef ASSERT
HeapWord* last = region->last();
- HeapRegion* hr = _g1h->heap_region_containing(last);
+ G1HeapRegion* hr = _g1h->heap_region_containing(last);
assert(hr->is_old() || top_at_mark_start(hr) == hr->bottom(),
"Root regions must be old or survivor/eden but region %u is %s", hr->hrm_index(), hr->get_type_str());
assert(top_at_mark_start(hr) == region->start(),
@@ -1098,11 +1099,11 @@ bool G1ConcurrentMark::wait_until_root_region_scan_finished() {
return root_regions()->wait_until_scan_finished();
}
-void G1ConcurrentMark::add_root_region(HeapRegion* r) {
+void G1ConcurrentMark::add_root_region(G1HeapRegion* r) {
root_regions()->add(top_at_mark_start(r), r->top());
}
-bool G1ConcurrentMark::is_root_region(HeapRegion* r) {
+bool G1ConcurrentMark::is_root_region(G1HeapRegion* r) {
return root_regions()->contains(MemRegion(top_at_mark_start(r), r->top()));
}
@@ -1232,11 +1233,11 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
_num_humongous_regions_removed(0),
_local_cleanup_list(local_cleanup_list) {}
- void reclaim_empty_humongous_region(HeapRegion* hr) {
+ void reclaim_empty_humongous_region(G1HeapRegion* hr) {
assert(!hr->has_pinned_objects(), "precondition");
assert(hr->is_starts_humongous(), "precondition");
- auto on_humongous_region = [&] (HeapRegion* hr) {
+ auto on_humongous_region = [&] (G1HeapRegion* hr) {
assert(hr->used() > 0, "precondition");
assert(!hr->has_pinned_objects(), "precondition");
assert(hr->is_humongous(), "precondition");
@@ -1246,13 +1247,14 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
hr->set_containing_set(nullptr);
hr->clear_cardtable();
_g1h->concurrent_mark()->clear_statistics(hr);
+ G1HeapRegionPrinter::mark_reclaim(hr);
_g1h->free_humongous_region(hr, _local_cleanup_list);
};
_g1h->humongous_obj_regions_iterate(hr, on_humongous_region);
}
- void reclaim_empty_old_region(HeapRegion* hr) {
+ void reclaim_empty_old_region(G1HeapRegion* hr) {
assert(hr->used() > 0, "precondition");
assert(!hr->has_pinned_objects(), "precondition");
assert(hr->is_old(), "precondition");
@@ -1262,10 +1264,11 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
hr->set_containing_set(nullptr);
hr->clear_cardtable();
_g1h->concurrent_mark()->clear_statistics(hr);
+ G1HeapRegionPrinter::mark_reclaim(hr);
_g1h->free_region(hr, _local_cleanup_list);
}
- bool do_heap_region(HeapRegion* hr) override {
+ bool do_heap_region(G1HeapRegion* hr) override {
G1RemSetTrackingPolicy* tracker = _g1h->policy()->remset_tracker();
if (hr->is_starts_humongous()) {
// The liveness of this humongous obj decided by either its allocation
@@ -1274,7 +1277,7 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
|| _cm->contains_live_object(hr->hrm_index());
if (is_live) {
const bool selected_for_rebuild = tracker->update_humongous_before_rebuild(hr);
- auto on_humongous_region = [&] (HeapRegion* hr) {
+ auto on_humongous_region = [&] (G1HeapRegion* hr) {
if (selected_for_rebuild) {
_num_selected_for_rebuild++;
}
@@ -1316,8 +1319,6 @@ class G1UpdateRegionLivenessAndSelectForRebuildTask : public WorkerTask {
~G1UpdateRegionLivenessAndSelectForRebuildTask() {
if (!_cleanup_list.is_empty()) {
log_debug(gc)("Reclaimed %u empty regions", _cleanup_list.length());
- // Now print the empty regions list.
- _g1h->hr_printer()->mark_reclaim(&_cleanup_list);
// And actually make them available.
_g1h->prepend_to_freelist(&_cleanup_list);
}
@@ -1359,7 +1360,7 @@ class G1UpdateRegionsAfterRebuild : public HeapRegionClosure {
_g1h(g1h) {
}
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
// Update the remset tracking state from updating to complete
// if remembered sets have been rebuilt.
_g1h->policy()->remset_tracker()->update_after_rebuild(r);
@@ -1437,7 +1438,14 @@ void G1ConcurrentMark::remark() {
log_debug(gc, remset, tracking)("Remembered Set Tracking update regions total %u, selected %u",
_g1h->num_regions(), cl.total_selected_for_rebuild());
+
_needs_remembered_set_rebuild = (cl.total_selected_for_rebuild() > 0);
+
+ if (_needs_remembered_set_rebuild) {
+ // Prune rebuild candidates based on G1HeapWastePercent.
+ // Improves rebuild time in addition to remembered set memory usage.
+ G1CollectionSetChooser::build(_g1h->workers(), _g1h->num_regions(), _g1h->policy()->candidates());
+ }
}
if (log_is_enabled(Trace, gc, liveness)) {
@@ -1894,24 +1902,24 @@ void G1ConcurrentMark::flush_all_task_caches() {
hits, misses, percent_of(hits, sum));
}
-void G1ConcurrentMark::clear_bitmap_for_region(HeapRegion* hr) {
+void G1ConcurrentMark::clear_bitmap_for_region(G1HeapRegion* hr) {
assert_at_safepoint();
_mark_bitmap.clear_range(MemRegion(hr->bottom(), hr->end()));
}
-HeapRegion* G1ConcurrentMark::claim_region(uint worker_id) {
+G1HeapRegion* G1ConcurrentMark::claim_region(uint worker_id) {
// "checkpoint" the finger
HeapWord* finger = _finger;
while (finger < _heap.end()) {
assert(_g1h->is_in_reserved(finger), "invariant");
- HeapRegion* curr_region = _g1h->heap_region_containing_or_null(finger);
+ G1HeapRegion* curr_region = _g1h->heap_region_containing_or_null(finger);
// Make sure that the reads below do not float before loading curr_region.
OrderAccess::loadload();
// Above heap_region_containing may return null as we always scan claim
// until the end of the heap. In this case, just jump to the next region.
- HeapWord* end = curr_region != nullptr ? curr_region->end() : finger + HeapRegion::GrainWords;
+ HeapWord* end = curr_region != nullptr ? curr_region->end() : finger + G1HeapRegion::GrainWords;
// Is the gap between reading the finger and doing the CAS too long?
HeapWord* res = Atomic::cmpxchg(&_finger, finger, end);
@@ -1965,7 +1973,7 @@ class VerifyNoCSetOops {
guarantee(oopDesc::is_oop(task_entry.obj()),
"Non-oop " PTR_FORMAT ", phase: %s, info: %d",
p2i(task_entry.obj()), _phase, _info);
- HeapRegion* r = _g1h->heap_region_containing(task_entry.obj());
+ G1HeapRegion* r = _g1h->heap_region_containing(task_entry.obj());
guarantee(!(r->in_collection_set() || r->has_index_in_opt_cset()),
"obj " PTR_FORMAT " from %s (%d) in region %u in (optional) collection set",
p2i(task_entry.obj()), _phase, _info, r->hrm_index());
@@ -1990,9 +1998,9 @@ void G1ConcurrentMark::verify_no_collection_set_oops() {
// Verify the global finger
HeapWord* global_finger = finger();
if (global_finger != nullptr && global_finger < _heap.end()) {
- // Since we always iterate over all regions, we might get a null HeapRegion
+ // Since we always iterate over all regions, we might get a null G1HeapRegion
// here.
- HeapRegion* global_hr = _g1h->heap_region_containing_or_null(global_finger);
+ G1HeapRegion* global_hr = _g1h->heap_region_containing_or_null(global_finger);
guarantee(global_hr == nullptr || global_finger == global_hr->bottom(),
"global finger: " PTR_FORMAT " region: " HR_FORMAT,
p2i(global_finger), HR_FORMAT_PARAMS(global_hr));
@@ -2005,7 +2013,7 @@ void G1ConcurrentMark::verify_no_collection_set_oops() {
HeapWord* task_finger = task->finger();
if (task_finger != nullptr && task_finger < _heap.end()) {
// See above note on the global finger verification.
- HeapRegion* r = _g1h->heap_region_containing_or_null(task_finger);
+ G1HeapRegion* r = _g1h->heap_region_containing_or_null(task_finger);
guarantee(r == nullptr || task_finger == r->bottom() ||
!r->in_collection_set() || !r->has_index_in_opt_cset(),
"task finger: " PTR_FORMAT " region: " HR_FORMAT,
@@ -2132,7 +2140,7 @@ G1CMOopClosure::G1CMOopClosure(G1CollectedHeap* g1h,
_g1h(g1h), _task(task)
{ }
-void G1CMTask::setup_for_region(HeapRegion* hr) {
+void G1CMTask::setup_for_region(G1HeapRegion* hr) {
assert(hr != nullptr,
"claim_region() should have filtered out null regions");
_curr_region = hr;
@@ -2141,7 +2149,7 @@ void G1CMTask::setup_for_region(HeapRegion* hr) {
}
void G1CMTask::update_region_limit() {
- HeapRegion* hr = _curr_region;
+ G1HeapRegion* hr = _curr_region;
HeapWord* bottom = hr->bottom();
HeapWord* limit = _cm->top_at_mark_start(hr);
@@ -2733,7 +2741,7 @@ void G1CMTask::do_marking_step(double time_target_ms,
assert(_curr_region == nullptr, "invariant");
assert(_finger == nullptr, "invariant");
assert(_region_limit == nullptr, "invariant");
- HeapRegion* claimed_region = _cm->claim_region(_worker_id);
+ G1HeapRegion* claimed_region = _cm->claim_region(_worker_id);
if (claimed_region != nullptr) {
// Yes, we managed to claim one
setup_for_region(claimed_region);
@@ -2988,7 +2996,7 @@ G1PrintRegionLivenessInfoClosure::G1PrintRegionLivenessInfoClosure(const char* p
G1PPRL_SUM_ADDR_FORMAT("reserved")
G1PPRL_SUM_BYTE_FORMAT("region-size"),
p2i(reserved.start()), p2i(reserved.end()),
- HeapRegion::GrainBytes);
+ G1HeapRegion::GrainBytes);
log_trace(gc, liveness)(G1PPRL_LINE_PREFIX);
log_trace(gc, liveness)(G1PPRL_LINE_PREFIX
G1PPRL_TYPE_H_FORMAT
@@ -3016,7 +3024,7 @@ G1PrintRegionLivenessInfoClosure::G1PrintRegionLivenessInfoClosure(const char* p
"(bytes)", "", "(bytes)");
}
-bool G1PrintRegionLivenessInfoClosure::do_heap_region(HeapRegion* r) {
+bool G1PrintRegionLivenessInfoClosure::do_heap_region(G1HeapRegion* r) {
if (!log_is_enabled(Trace, gc, liveness)) {
return false;
}
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp b/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp
index b2376025b1605..f2206664b257a 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.hpp
@@ -507,7 +507,7 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
// method. So, this way, each task will spend very little time in
// claim_region() and is allowed to call the regular clock method
// frequently.
- HeapRegion* claim_region(uint worker_id);
+ G1HeapRegion* claim_region(uint worker_id);
// Determines whether we've run out of regions to scan. Note that
// the finger can point past the heap end in case the heap was expanded
@@ -564,25 +564,25 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
void set_live_bytes(uint region, size_t live_bytes) { _region_mark_stats[region]._live_words = live_bytes / HeapWordSize; }
// Update the TAMS for the given region to the current top.
- inline void update_top_at_mark_start(HeapRegion* r);
+ inline void update_top_at_mark_start(G1HeapRegion* r);
// Reset the TAMS for the given region to bottom of that region.
- inline void reset_top_at_mark_start(HeapRegion* r);
+ inline void reset_top_at_mark_start(G1HeapRegion* r);
- inline HeapWord* top_at_mark_start(const HeapRegion* r) const;
+ inline HeapWord* top_at_mark_start(const G1HeapRegion* r) const;
inline HeapWord* top_at_mark_start(uint region) const;
// Returns whether the given object been allocated since marking start (i.e. >= TAMS in that region).
inline bool obj_allocated_since_mark_start(oop obj) const;
// Sets the internal top_at_region_start for the given region to current top of the region.
- inline void update_top_at_rebuild_start(HeapRegion* r);
+ inline void update_top_at_rebuild_start(G1HeapRegion* r);
// TARS for the given region during remembered set rebuilding.
- inline HeapWord* top_at_rebuild_start(HeapRegion* r) const;
+ inline HeapWord* top_at_rebuild_start(G1HeapRegion* r) const;
// Clear statistics gathered during the concurrent cycle for the given region after
// it has been reclaimed.
- void clear_statistics(HeapRegion* r);
+ void clear_statistics(G1HeapRegion* r);
// Notification for eagerly reclaimed regions to clean up.
- void humongous_object_eagerly_reclaimed(HeapRegion* r);
+ void humongous_object_eagerly_reclaimed(G1HeapRegion* r);
// Manipulation of the global mark stack.
// The push and pop operations are used by tasks for transfers
// between task-local queues and the global mark stack.
@@ -659,8 +659,8 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
// them.
void scan_root_regions();
bool wait_until_root_region_scan_finished();
- void add_root_region(HeapRegion* r);
- bool is_root_region(HeapRegion* r);
+ void add_root_region(G1HeapRegion* r);
+ bool is_root_region(G1HeapRegion* r);
void root_region_scan_abort_and_wait();
private:
@@ -688,7 +688,7 @@ class G1ConcurrentMark : public CHeapObj<mtGC> {
// Clears marks for all objects in the given region in the marking
// bitmap. This should only be used to clean the bitmap during a
// safepoint.
- void clear_bitmap_for_region(HeapRegion* hr);
+ void clear_bitmap_for_region(G1HeapRegion* hr);
// Verify that there are no collection set oops on the stacks (taskqueues /
// global mark stack) and fingers (global / per-task).
@@ -758,7 +758,7 @@ class G1CMTask : public TerminatorTerminator {
G1CMOopClosure* _cm_oop_closure;
// Region this task is scanning, null if we're not scanning any
- HeapRegion* _curr_region;
+ G1HeapRegion* _curr_region;
// Local finger of this task, null if we're not scanning a region
HeapWord* _finger;
// Limit of the region this task is scanning, null if we're not scanning one
@@ -806,7 +806,7 @@ class G1CMTask : public TerminatorTerminator {
// Updates the local fields after this task has claimed
// a new region to scan
- void setup_for_region(HeapRegion* hr);
+ void setup_for_region(G1HeapRegion* hr);
// Makes the limit of the region up-to-date
void update_region_limit();
@@ -969,7 +969,7 @@ class G1PrintRegionLivenessInfoClosure : public HeapRegionClosure {
// The header and footer are printed in the constructor and
// destructor respectively.
G1PrintRegionLivenessInfoClosure(const char* phase_name);
- virtual bool do_heap_region(HeapRegion* r);
+ virtual bool do_heap_region(G1HeapRegion* r);
~G1PrintRegionLivenessInfoClosure();
};
#endif // SHARE_GC_G1_G1CONCURRENTMARK_HPP
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp b/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp
index 4e1b4e63fe89f..9f165cf1d22a0 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMark.inline.hpp
@@ -71,7 +71,7 @@ inline bool G1ConcurrentMark::mark_in_bitmap(uint const worker_id, oop const obj
// Some callers may have stale objects to mark above TAMS after humongous reclaim.
// Can't assert that this is a valid object at this point, since it might be in the process of being copied by another thread.
- DEBUG_ONLY(HeapRegion* const hr = _g1h->heap_region_containing(obj);)
+ DEBUG_ONLY(G1HeapRegion* const hr = _g1h->heap_region_containing(obj);)
assert(!hr->is_continues_humongous(),
"Should not try to mark object " PTR_FORMAT " in Humongous continues region %u above TAMS " PTR_FORMAT,
p2i(obj), hr->hrm_index(), p2i(top_at_mark_start(hr)));
@@ -184,17 +184,17 @@ inline size_t G1CMTask::scan_objArray(objArrayOop obj, MemRegion mr) {
return mr.word_size();
}
-inline void G1ConcurrentMark::update_top_at_mark_start(HeapRegion* r) {
+inline void G1ConcurrentMark::update_top_at_mark_start(G1HeapRegion* r) {
uint const region = r->hrm_index();
assert(region < _g1h->max_reserved_regions(), "Tried to access TAMS for region %u out of bounds", region);
_top_at_mark_starts[region] = r->top();
}
-inline void G1ConcurrentMark::reset_top_at_mark_start(HeapRegion* r) {
+inline void G1ConcurrentMark::reset_top_at_mark_start(G1HeapRegion* r) {
_top_at_mark_starts[r->hrm_index()] = r->bottom();
}
-inline HeapWord* G1ConcurrentMark::top_at_mark_start(const HeapRegion* r) const {
+inline HeapWord* G1ConcurrentMark::top_at_mark_start(const G1HeapRegion* r) const {
return top_at_mark_start(r->hrm_index());
}
@@ -209,11 +209,11 @@ inline bool G1ConcurrentMark::obj_allocated_since_mark_start(oop obj) const {
return cast_from_oop<HeapWord*>(obj) >= top_at_mark_start(region);
}
-inline HeapWord* G1ConcurrentMark::top_at_rebuild_start(HeapRegion* r) const {
+inline HeapWord* G1ConcurrentMark::top_at_rebuild_start(G1HeapRegion* r) const {
return _top_at_rebuild_starts[r->hrm_index()];
}
-inline void G1ConcurrentMark::update_top_at_rebuild_start(HeapRegion* r) {
+inline void G1ConcurrentMark::update_top_at_rebuild_start(G1HeapRegion* r) {
uint const region = r->hrm_index();
assert(region < _g1h->max_reserved_regions(), "Tried to access TARS for region %u out of bounds", region);
assert(_top_at_rebuild_starts[region] == nullptr,
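Aside, for orientation: the update_top_at_mark_start() / top_at_mark_start() pair above boils down to an array lookup keyed by the region's hrm_index(). A rough standalone sketch of that shape (all names, types, and the fixed capacity below are stand-ins, not HotSpot code):

#include <cassert>
#include <cstddef>

using HeapWordPtr = char*;            // stand-in for HeapWord*

struct Region {                       // stand-in for G1HeapRegion
  unsigned index;                     // hrm_index()
  HeapWordPtr bottom;                 // bottom()
  HeapWordPtr top;                    // top()
};

class MarkState {                     // stand-in for G1ConcurrentMark
  static const std::size_t kMaxRegions = 1024;
  HeapWordPtr _tams[kMaxRegions] = {};
public:
  void update_top_at_mark_start(const Region& r) {
    assert(r.index < kMaxRegions && "TAMS index out of bounds");
    _tams[r.index] = r.top;           // snapshot top() when marking starts
  }
  void reset_top_at_mark_start(const Region& r) {
    _tams[r.index] = r.bottom;        // nothing above bottom() counts as marked
  }
  HeapWordPtr top_at_mark_start(const Region& r) const {
    return _tams[r.index];
  }
};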
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp b/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp
index 438b58c2c02c7..b76ab6b867cea 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.cpp
@@ -42,6 +42,6 @@ void G1CMBitMapMappingChangedListener::on_commit(uint start_region, size_t num_r
return;
}
// We need to clear the bitmap on commit, removing any existing information.
- MemRegion mr(G1CollectedHeap::heap()->bottom_addr_for_region(start_region), num_regions * HeapRegion::GrainWords);
+ MemRegion mr(G1CollectedHeap::heap()->bottom_addr_for_region(start_region), num_regions * G1HeapRegion::GrainWords);
_bm->clear_range(mr);
}
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp b/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp
index 3df68aaf6c2e6..6a141a7919b63 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMarkBitMap.hpp
@@ -36,7 +36,6 @@
class G1CMBitMap;
class G1CMTask;
class G1ConcurrentMark;
-class HeapRegion;
// Closure for iteration over bitmaps
class G1CMBitMapClosure {
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp b/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp
index 84823f8bf830d..ebf06be94c862 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentMarkObjArrayProcessor.cpp
@@ -60,7 +60,7 @@ size_t G1CMObjArrayProcessor::process_slice(HeapWord* slice) {
// slide is fast enough for "smaller" objects in non-humongous regions, but is slower
// than directly using heap region table.
G1CollectedHeap* g1h = G1CollectedHeap::heap();
- HeapRegion* r = g1h->heap_region_containing(slice);
+ G1HeapRegion* r = g1h->heap_region_containing(slice);
HeapWord* const start_address = r->is_humongous() ?
r->humongous_start_region()->bottom() :
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp b/src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp
index a176b847eab69..cacb28f84b1e2 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRebuildAndScrub.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2022, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2022, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -30,7 +30,7 @@
#include "gc/g1/g1ConcurrentMarkBitMap.inline.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
#include "gc/g1/g1HeapRegionManager.inline.hpp"
-#include "gc/g1/g1_globals.hpp"
+#include "gc/shared/gc_globals.hpp"
#include "gc/shared/suspendibleThreadSet.hpp"
#include "gc/shared/workerThread.hpp"
#include "logging/log.hpp"
@@ -104,7 +104,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// the value may be changed to null during rebuilding if the region has either:
// - been allocated after rebuild start, or
// - been reclaimed by a collection.
- bool should_rebuild_or_scrub(HeapRegion* hr) const {
+ bool should_rebuild_or_scrub(G1HeapRegion* hr) const {
return _cm->top_at_rebuild_start(hr) != nullptr;
}
@@ -112,7 +112,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// G1RebuildRemSetChunkSize. The heap region is needed to check whether the region has
// been reclaimed during yielding.
// Returns true if marking has been aborted or false if completed.
- bool scan_large_object(HeapRegion* hr, const oop obj, MemRegion scan_range) {
+ bool scan_large_object(G1HeapRegion* hr, const oop obj, MemRegion scan_range) {
HeapWord* start = scan_range.start();
HeapWord* limit = scan_range.end();
do {
@@ -140,7 +140,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// Scan for references into regions that need remembered set update for the given
// live object. Returns the offset to the next object.
- size_t scan_object(HeapRegion* hr, HeapWord* current) {
+ size_t scan_object(G1HeapRegion* hr, HeapWord* current) {
oop obj = cast_to_oop(current);
size_t obj_size = obj->size();
@@ -166,7 +166,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
}
// Scrub a range of dead objects starting at scrub_start. Will never scrub past limit.
- HeapWord* scrub_to_next_live(HeapRegion* hr, HeapWord* scrub_start, HeapWord* limit) {
+ HeapWord* scrub_to_next_live(G1HeapRegion* hr, HeapWord* scrub_start, HeapWord* limit) {
assert(!_bitmap->is_marked(scrub_start), "Should not scrub live object");
HeapWord* scrub_end = _bitmap->get_next_marked_addr(scrub_start, limit);
@@ -178,7 +178,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// Scan the given region from bottom to parsable_bottom. Returns whether marking has
// been aborted.
- bool scan_and_scrub_to_pb(HeapRegion* hr, HeapWord* start, HeapWord* const limit) {
+ bool scan_and_scrub_to_pb(G1HeapRegion* hr, HeapWord* start, HeapWord* const limit) {
while (start < limit) {
if (_bitmap->is_marked(start)) {
@@ -205,7 +205,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// Scan the given region from parsable_bottom to tars. Returns whether marking has
// been aborted.
- bool scan_from_pb_to_tars(HeapRegion* hr, HeapWord* start, HeapWord* const limit) {
+ bool scan_from_pb_to_tars(G1HeapRegion* hr, HeapWord* start, HeapWord* const limit) {
while (start < limit) {
start += scan_object(hr, start);
@@ -225,7 +225,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// Scan and scrub the given region to tars. Returns whether marking has
// been aborted.
- bool scan_and_scrub_region(HeapRegion* hr, HeapWord* const pb) {
+ bool scan_and_scrub_region(G1HeapRegion* hr, HeapWord* const pb) {
assert(should_rebuild_or_scrub(hr), "must be");
log_trace(gc, marking)("Scrub and rebuild region: " HR_FORMAT " pb: " PTR_FORMAT " TARS: " PTR_FORMAT " TAMS: " PTR_FORMAT,
@@ -255,7 +255,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
// Scan a humongous region for remembered set updates. Scans in chunks to avoid
// stalling safepoints. Returns whether the concurrent marking phase has been aborted.
- bool scan_humongous_region(HeapRegion* hr, HeapWord* const pb) {
+ bool scan_humongous_region(G1HeapRegion* hr, HeapWord* const pb) {
assert(should_rebuild_or_scrub(hr), "must be");
if (!_should_rebuild_remset) {
@@ -294,7 +294,7 @@ class G1RebuildRSAndScrubTask : public WorkerTask {
_should_rebuild_remset(should_rebuild_remset),
_processed_words(0) { }
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
// Avoid stalling safepoints and stop iteration if mark cycle has been aborted.
_cm->do_yield_check();
if (_cm->has_aborted()) {
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp b/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp
index 9ae9ad793c424..2fabeec980547 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRefine.cpp
@@ -262,7 +262,7 @@ class G1ConcurrentRefine::RemSetSamplingClosure : public HeapRegionClosure {
explicit RemSetSamplingClosure(G1CollectionSet* cset) :
_cset(cset), _sampled_card_rs_length(0), _sampled_code_root_rs_length(0) {}
- bool do_heap_region(HeapRegion* r) override {
+ bool do_heap_region(G1HeapRegion* r) override {
HeapRegionRemSet* rem_set = r->rem_set();
_sampled_card_rs_length += rem_set->occupied();
_sampled_code_root_rs_length += rem_set->code_roots_list_length();
@@ -317,7 +317,7 @@ bool G1ConcurrentRefine::adjust_threads_periodically() {
size_t used_bytes = _policy->estimate_used_young_bytes_locked();
Heap_lock->unlock();
adjust_young_list_target_length();
- size_t young_bytes = _policy->young_list_target_length() * HeapRegion::GrainBytes;
+ size_t young_bytes = _policy->young_list_target_length() * G1HeapRegion::GrainBytes;
size_t available_bytes = young_bytes - MIN2(young_bytes, used_bytes);
adjust_threads_wanted(available_bytes);
_needs_adjust = false;
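For context on the GrainBytes arithmetic above: the refinement controller converts the young target length from regions to bytes, then clamps the subtraction so it cannot wrap around when usage already exceeds the target. A small standalone sketch with made-up numbers (std::min standing in for HotSpot's MIN2):

#include <algorithm>
#include <cstddef>
#include <cstdio>

int main() {
  const std::size_t grain_bytes = 4u * 1024 * 1024;   // assume 4M regions (GrainBytes)
  const std::size_t target_length = 100;              // young_list_target_length()
  const std::size_t used_bytes = 450u * 1024 * 1024;  // young-gen bytes already used

  std::size_t young_bytes = target_length * grain_bytes;
  // Clamp as the patch does with MIN2(): if usage exceeds the target, the
  // headroom is zero rather than a huge unsigned wrap-around.
  std::size_t available = young_bytes - std::min(young_bytes, used_bytes);

  std::printf("headroom: %zu bytes\n", available);
  return 0;
}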
diff --git a/src/hotspot/share/gc/g1/g1ConcurrentRefineThreadsNeeded.cpp b/src/hotspot/share/gc/g1/g1ConcurrentRefineThreadsNeeded.cpp
index 33943eb5113ab..b2bb5c73ad59b 100644
--- a/src/hotspot/share/gc/g1/g1ConcurrentRefineThreadsNeeded.cpp
+++ b/src/hotspot/share/gc/g1/g1ConcurrentRefineThreadsNeeded.cpp
@@ -60,7 +60,7 @@ void G1ConcurrentRefineThreadsNeeded::update(uint active_threads,
// Estimate time until next GC, based on remaining bytes available for
// allocation and the allocation rate.
double alloc_region_rate = analytics->predict_alloc_rate_ms();
- double alloc_bytes_rate = alloc_region_rate * HeapRegion::GrainBytes;
+ double alloc_bytes_rate = alloc_region_rate * G1HeapRegion::GrainBytes;
if (alloc_bytes_rate == 0.0) {
// A zero rate indicates we don't yet have data to use for predictions.
// Since we don't have any idea how long until the next GC, use a time of
diff --git a/src/hotspot/share/gc/g1/g1EdenRegions.hpp b/src/hotspot/share/gc/g1/g1EdenRegions.hpp
index c76099c284a19..ef37619c140df 100644
--- a/src/hotspot/share/gc/g1/g1EdenRegions.hpp
+++ b/src/hotspot/share/gc/g1/g1EdenRegions.hpp
@@ -41,7 +41,7 @@ class G1EdenRegions {
public:
G1EdenRegions() : _length(0), _used_bytes(0), _regions_on_node() { }
- uint add(HeapRegion* hr) {
+ uint add(G1HeapRegion* hr) {
assert(!hr->is_eden(), "should not already be set");
_length++;
return _regions_on_node.add(hr);
diff --git a/src/hotspot/share/gc/g1/g1EvacFailureRegions.inline.hpp b/src/hotspot/share/gc/g1/g1EvacFailureRegions.inline.hpp
index 28be8562b523f..0d4f5091d9eae 100644
--- a/src/hotspot/share/gc/g1/g1EvacFailureRegions.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1EvacFailureRegions.inline.hpp
@@ -60,7 +60,7 @@ bool G1EvacFailureRegions::record(uint worker_id, uint region_idx, bool cause_pi
_evac_failed_regions[offset] = region_idx;
G1CollectedHeap* g1h = G1CollectedHeap::heap();
- HeapRegion* hr = g1h->region_at(region_idx);
+ G1HeapRegion* hr = g1h->region_at(region_idx);
hr->note_evacuation_failure();
}
diff --git a/src/hotspot/share/gc/g1/g1FullCollector.cpp b/src/hotspot/share/gc/g1/g1FullCollector.cpp
index 81a54a9c31f32..0fba8c1017f02 100644
--- a/src/hotspot/share/gc/g1/g1FullCollector.cpp
+++ b/src/hotspot/share/gc/g1/g1FullCollector.cpp
@@ -149,7 +149,7 @@ G1FullCollector::G1FullCollector(G1CollectedHeap* heap,
}
_serial_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
_humongous_compaction_point.set_preserved_stack(_preserved_marks_set.get(0));
- _region_attr_table.initialize(heap->reserved(), HeapRegion::GrainBytes);
+ _region_attr_table.initialize(heap->reserved(), G1HeapRegion::GrainBytes);
}
G1FullCollector::~G1FullCollector() {
@@ -170,7 +170,7 @@ class PrepareRegionsClosure : public HeapRegionClosure {
public:
PrepareRegionsClosure(G1FullCollector* collector) : _collector(collector) { }
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
hr->prepare_for_full_gc();
G1CollectedHeap::heap()->prepare_region_for_full_compaction(hr);
_collector->before_marking_update_attribute_table(hr);
@@ -255,7 +255,7 @@ void G1FullCollector::complete_collection() {
_heap->print_heap_after_full_collection();
}
-void G1FullCollector::before_marking_update_attribute_table(HeapRegion* hr) {
+void G1FullCollector::before_marking_update_attribute_table(G1HeapRegion* hr) {
if (hr->is_free()) {
_region_attr_table.set_free(hr->hrm_index());
} else if (hr->is_humongous() || hr->has_pinned_objects()) {
@@ -419,7 +419,7 @@ void G1FullCollector::phase2c_prepare_serial_compaction() {
G1FullGCCompactionPoint* serial_cp = serial_compaction_point();
assert(!serial_cp->is_initialized(), "sanity!");
- HeapRegion* start_hr = _heap->region_at(start_serial);
+ G1HeapRegion* start_hr = _heap->region_at(start_serial);
serial_cp->add(start_hr);
serial_cp->initialize(start_hr);
@@ -428,7 +428,7 @@ void G1FullCollector::phase2c_prepare_serial_compaction() {
for (uint i = start_serial + 1; i < _heap->max_reserved_regions(); i++) {
if (is_compaction_target(i)) {
- HeapRegion* current = _heap->region_at(i);
+ G1HeapRegion* current = _heap->region_at(i);
set_compaction_top(current, current->bottom());
serial_cp->add(current);
current->apply_to_marked_objects(mark_bitmap(), &re_prepare);
@@ -449,7 +449,7 @@ void G1FullCollector::phase2d_prepare_humongous_compaction() {
G1FullGCCompactionPoint* humongous_cp = humongous_compaction_point();
while (region_index < max_reserved_regions) {
- HeapRegion* hr = _heap->region_at_or_null(region_index);
+ G1HeapRegion* hr = _heap->region_at_or_null(region_index);
if (hr == nullptr) {
region_index++;
diff --git a/src/hotspot/share/gc/g1/g1FullCollector.hpp b/src/hotspot/share/gc/g1/g1FullCollector.hpp
index b1ceb599fd2f1..11effe85993e9 100644
--- a/src/hotspot/share/gc/g1/g1FullCollector.hpp
+++ b/src/hotspot/share/gc/g1/g1FullCollector.hpp
@@ -45,7 +45,7 @@ class G1FullGCMarker;
class G1FullGCScope;
class G1FullGCCompactionPoint;
class GCMemoryManager;
-class HeapRegion;
+class G1HeapRegion;
class ReferenceProcessor;
// Subject-to-discovery closure for reference processing during Full GC. During
@@ -87,7 +87,7 @@ class G1FullCollector : StackObj {
G1IsAliveClosure _is_alive;
ReferenceProcessorIsAliveMutator _is_alive_mutator;
G1RegionMarkStats* _live_stats;
- GrowableArrayCHeap<HeapRegion*, mtGC> _humongous_compaction_regions;
+ GrowableArrayCHeap<G1HeapRegion*, mtGC> _humongous_compaction_regions;
static uint calc_active_workers();
@@ -125,7 +125,7 @@ class G1FullCollector : StackObj {
return _live_stats[region_index]._live_words;
}
- void before_marking_update_attribute_table(HeapRegion* hr);
+ void before_marking_update_attribute_table(G1HeapRegion* hr);
inline bool is_compacting(oop obj) const;
inline bool is_skip_compacting(uint region_index) const;
@@ -138,14 +138,14 @@ class G1FullCollector : StackObj {
inline void update_from_compacting_to_skip_compacting(uint region_idx);
inline void update_from_skip_compacting_to_compacting(uint region_idx);
- inline void set_compaction_top(HeapRegion* r, HeapWord* value);
- inline HeapWord* compaction_top(HeapRegion* r) const;
+ inline void set_compaction_top(G1HeapRegion* r, HeapWord* value);
+ inline HeapWord* compaction_top(G1HeapRegion* r) const;
inline void set_has_compaction_targets();
inline bool has_compaction_targets() const;
- inline void add_humongous_region(HeapRegion* hr);
- inline GrowableArrayCHeap<HeapRegion*, mtGC>& humongous_compaction_regions();
+ inline void add_humongous_region(G1HeapRegion* hr);
+ inline GrowableArrayCHeap<G1HeapRegion*, mtGC>& humongous_compaction_regions();
uint truncate_parallel_cps();
diff --git a/src/hotspot/share/gc/g1/g1FullCollector.inline.hpp b/src/hotspot/share/gc/g1/g1FullCollector.inline.hpp
index 7aa04449bcf7b..1a8d625796149 100644
--- a/src/hotspot/share/gc/g1/g1FullCollector.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1FullCollector.inline.hpp
@@ -62,11 +62,11 @@ void G1FullCollector::update_from_skip_compacting_to_compacting(uint region_idx)
_region_attr_table.set_compacting(region_idx);
}
-void G1FullCollector::set_compaction_top(HeapRegion* r, HeapWord* value) {
+void G1FullCollector::set_compaction_top(G1HeapRegion* r, HeapWord* value) {
Atomic::store(&_compaction_tops[r->hrm_index()], value);
}
-HeapWord* G1FullCollector::compaction_top(HeapRegion* r) const {
+HeapWord* G1FullCollector::compaction_top(G1HeapRegion* r) const {
return Atomic::load(&_compaction_tops[r->hrm_index()]);
}
@@ -90,11 +90,11 @@ bool G1FullCollector::has_humongous() {
return _has_humongous;
}
-void G1FullCollector::add_humongous_region(HeapRegion* hr) {
+void G1FullCollector::add_humongous_region(G1HeapRegion* hr) {
_humongous_compaction_regions.append(hr);
}
-GrowableArrayCHeap<HeapRegion*, mtGC>& G1FullCollector::humongous_compaction_regions() {
+GrowableArrayCHeap<G1HeapRegion*, mtGC>& G1FullCollector::humongous_compaction_regions() {
return _humongous_compaction_regions;
}
diff --git a/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp b/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp
index 44e41c3cb8942..75da25199eb7f 100644
--- a/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCAdjustTask.cpp
@@ -61,7 +61,7 @@ class G1AdjustRegionClosure : public HeapRegionClosure {
_bitmap(collector->mark_bitmap()),
_worker_id(worker_id) { }
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
G1AdjustClosure cl(_collector);
if (r->is_humongous()) {
// Special handling for humongous regions to get somewhat better
diff --git a/src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp b/src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp
index 5e3a9687f9808..05f669592433b 100644
--- a/src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCCompactTask.cpp
@@ -66,7 +66,7 @@ void G1FullGCCompactTask::copy_object_to_new_location(oop obj) {
assert(cast_to_oop(destination)->klass() != nullptr, "should have a class");
}
-void G1FullGCCompactTask::compact_region(HeapRegion* hr) {
+void G1FullGCCompactTask::compact_region(G1HeapRegion* hr) {
assert(!hr->has_pinned_objects(), "Should be no region with pinned objects in compaction queue");
assert(!hr->is_humongous(), "Should be no humongous regions in compaction queue");
@@ -87,8 +87,8 @@ void G1FullGCCompactTask::compact_region(HeapRegion* hr) {
void G1FullGCCompactTask::work(uint worker_id) {
Ticks start = Ticks::now();
- GrowableArray<HeapRegion*>* compaction_queue = collector()->compaction_point(worker_id)->regions();
- for (GrowableArrayIterator<HeapRegion*> it = compaction_queue->begin();
+ GrowableArray<G1HeapRegion*>* compaction_queue = collector()->compaction_point(worker_id)->regions();
+ for (GrowableArrayIterator<G1HeapRegion*> it = compaction_queue->begin();
it != compaction_queue->end();
++it) {
compact_region(*it);
@@ -97,8 +97,8 @@ void G1FullGCCompactTask::work(uint worker_id) {
void G1FullGCCompactTask::serial_compaction() {
GCTraceTime(Debug, gc, phases) tm("Phase 4: Serial Compaction", collector()->scope()->timer());
- GrowableArray<HeapRegion*>* compaction_queue = collector()->serial_compaction_point()->regions();
- for (GrowableArrayIterator<HeapRegion*> it = compaction_queue->begin();
+ GrowableArray<G1HeapRegion*>* compaction_queue = collector()->serial_compaction_point()->regions();
+ for (GrowableArrayIterator<G1HeapRegion*> it = compaction_queue->begin();
it != compaction_queue->end();
++it) {
compact_region(*it);
@@ -108,13 +108,13 @@ void G1FullGCCompactTask::serial_compaction() {
void G1FullGCCompactTask::humongous_compaction() {
GCTraceTime(Debug, gc, phases) tm("Phase 4: Humonguous Compaction", collector()->scope()->timer());
- for (HeapRegion* hr : collector()->humongous_compaction_regions()) {
+ for (G1HeapRegion* hr : collector()->humongous_compaction_regions()) {
assert(collector()->is_compaction_target(hr->hrm_index()), "Sanity");
compact_humongous_obj(hr);
}
}
-void G1FullGCCompactTask::compact_humongous_obj(HeapRegion* src_hr) {
+void G1FullGCCompactTask::compact_humongous_obj(G1HeapRegion* src_hr) {
assert(src_hr->is_starts_humongous(), "Should be start region of the humongous object");
oop obj = cast_to_oop(src_hr->bottom());
@@ -146,7 +146,7 @@ void G1FullGCCompactTask::free_non_overlapping_regions(uint src_start_idx, uint
dest_end_idx + 1;
for (uint i = non_overlapping_start; i <= src_end_idx; ++i) {
- HeapRegion* hr = _g1h->region_at(i);
+ G1HeapRegion* hr = _g1h->region_at(i);
_g1h->free_humongous_region(hr, nullptr);
}
}
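The work() and serial_compaction() loops above are plain in-order walks over per-worker region queues. A standalone sketch of the same shape, with std::vector standing in for GrowableArray<G1HeapRegion*> and the actual object sliding elided:

#include <cstdio>
#include <vector>

struct Region {                       // stand-in for G1HeapRegion
  unsigned index;
};

// Stand-in for compact_region(); moving live objects is elided here.
static void compact_region(Region* hr) {
  std::printf("compacting region %u\n", hr->index);
}

// Each worker drains its own queue in order, exactly the shape of
// G1FullGCCompactTask::work() above.
static void drain_compaction_queue(const std::vector<Region*>& queue) {
  for (Region* hr : queue) {
    compact_region(hr);
  }
}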
diff --git a/src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp b/src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp
index 7f7f144397baf..341542f6d6f70 100644
--- a/src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCCompactTask.hpp
@@ -40,8 +40,8 @@ class G1FullGCCompactTask : public G1FullGCTask {
HeapRegionClaimer _claimer;
G1CollectedHeap* _g1h;
- void compact_region(HeapRegion* hr);
- void compact_humongous_obj(HeapRegion* hr);
+ void compact_region(G1HeapRegion* hr);
+ void compact_humongous_obj(G1HeapRegion* hr);
void free_non_overlapping_regions(uint src_start_idx, uint dest_start_idx, uint num_regions);
static void copy_object_to_new_location(oop obj);
diff --git a/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.cpp b/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.cpp
index 7b19aa431beb1..019484c810a54 100644
--- a/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.cpp
@@ -35,7 +35,7 @@ G1FullGCCompactionPoint::G1FullGCCompactionPoint(G1FullCollector* collector, Pre
_current_region(nullptr),
_compaction_top(nullptr),
_preserved_stack(preserved_stack) {
- _compaction_regions = new (mtGC) GrowableArray<HeapRegion*>(32, mtGC);
+ _compaction_regions = new (mtGC) GrowableArray<G1HeapRegion*>(32, mtGC);
_compaction_region_iterator = _compaction_regions->begin();
}
@@ -61,22 +61,22 @@ bool G1FullGCCompactionPoint::is_initialized() {
return _current_region != nullptr;
}
-void G1FullGCCompactionPoint::initialize(HeapRegion* hr) {
+void G1FullGCCompactionPoint::initialize(G1HeapRegion* hr) {
_current_region = hr;
initialize_values();
}
-HeapRegion* G1FullGCCompactionPoint::current_region() {
+G1HeapRegion* G1FullGCCompactionPoint::current_region() {
return *_compaction_region_iterator;
}
-HeapRegion* G1FullGCCompactionPoint::next_region() {
- HeapRegion* next = *(++_compaction_region_iterator);
+G1HeapRegion* G1FullGCCompactionPoint::next_region() {
+ G1HeapRegion* next = *(++_compaction_region_iterator);
assert(next != nullptr, "Must return valid region");
return next;
}
-GrowableArray<HeapRegion*>* G1FullGCCompactionPoint::regions() {
+GrowableArray<G1HeapRegion*>* G1FullGCCompactionPoint::regions() {
return _compaction_regions;
}
@@ -117,16 +117,16 @@ void G1FullGCCompactionPoint::forward(oop object, size_t size) {
_current_region->update_bot_for_block(_compaction_top - size, _compaction_top);
}
-void G1FullGCCompactionPoint::add(HeapRegion* hr) {
+void G1FullGCCompactionPoint::add(G1HeapRegion* hr) {
_compaction_regions->append(hr);
}
void G1FullGCCompactionPoint::remove_at_or_above(uint bottom) {
- HeapRegion* cur = current_region();
+ G1HeapRegion* cur = current_region();
assert(cur->hrm_index() >= bottom, "Sanity!");
int start_index = 0;
- for (HeapRegion* r : *_compaction_regions) {
+ for (G1HeapRegion* r : *_compaction_regions) {
if (r->hrm_index() < bottom) {
start_index++;
}
@@ -136,20 +136,20 @@ void G1FullGCCompactionPoint::remove_at_or_above(uint bottom) {
_compaction_regions->trunc_to(start_index);
}
-void G1FullGCCompactionPoint::add_humongous(HeapRegion* hr) {
+void G1FullGCCompactionPoint::add_humongous(G1HeapRegion* hr) {
assert(hr->is_starts_humongous(), "Sanity!");
_collector->add_humongous_region(hr);
G1CollectedHeap* g1h = G1CollectedHeap::heap();
g1h->humongous_obj_regions_iterate(hr,
- [&] (HeapRegion* r) {
+ [&] (G1HeapRegion* r) {
add(r);
_collector->update_from_skip_compacting_to_compacting(r->hrm_index());
});
}
-void G1FullGCCompactionPoint::forward_humongous(HeapRegion* hr) {
+void G1FullGCCompactionPoint::forward_humongous(G1HeapRegion* hr) {
assert(hr->is_starts_humongous(), "Sanity!");
oop obj = cast_to_oop(hr->bottom());
@@ -171,7 +171,7 @@ void G1FullGCCompactionPoint::forward_humongous(HeapRegion* hr) {
// Preserve the mark for the humongous object as the region was initially not compacting.
preserved_stack()->push_if_necessary(obj, obj->mark());
- HeapRegion* dest_hr = _compaction_regions->at(range_begin);
+ G1HeapRegion* dest_hr = _compaction_regions->at(range_begin);
obj->forward_to(cast_to_oop(dest_hr->bottom()));
assert(obj->is_forwarded(), "Object must be forwarded!");
@@ -184,7 +184,7 @@ void G1FullGCCompactionPoint::forward_humongous(HeapRegion* hr) {
return;
}
-uint G1FullGCCompactionPoint::find_contiguous_before(HeapRegion* hr, uint num_regions) {
+uint G1FullGCCompactionPoint::find_contiguous_before(G1HeapRegion* hr, uint num_regions) {
assert(num_regions > 0, "Sanity!");
assert(has_regions(), "Sanity!");
diff --git a/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.hpp b/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.hpp
index 6d87e44b213ab..43131f9092ded 100644
--- a/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCCompactionPoint.hpp
@@ -31,22 +31,22 @@
#include "utilities/pair.hpp"
class G1FullCollector;
-class HeapRegion;
+class G1HeapRegion;
class PreservedMarks;
class G1FullGCCompactionPoint : public CHeapObj<mtGC> {
G1FullCollector* _collector;
- HeapRegion* _current_region;
- HeapWord* _compaction_top;
+ G1HeapRegion* _current_region;
+ HeapWord* _compaction_top;
PreservedMarks* _preserved_stack;
- GrowableArray<HeapRegion*>* _compaction_regions;
- GrowableArrayIterator<HeapRegion*> _compaction_region_iterator;
+ GrowableArray<G1HeapRegion*>* _compaction_regions;
+ GrowableArrayIterator<G1HeapRegion*> _compaction_region_iterator;
bool object_will_fit(size_t size);
void initialize_values();
void switch_region();
- HeapRegion* next_region();
- uint find_contiguous_before(HeapRegion* hr, uint num_regions);
+ G1HeapRegion* next_region();
+ uint find_contiguous_before(G1HeapRegion* hr, uint num_regions);
public:
G1FullGCCompactionPoint(G1FullCollector* collector, PreservedMarks* preserved_stack);
@@ -54,17 +54,17 @@ class G1FullGCCompactionPoint : public CHeapObj<mtGC> {
bool has_regions();
bool is_initialized();
- void initialize(HeapRegion* hr);
+ void initialize(G1HeapRegion* hr);
void update();
void forward(oop object, size_t size);
- void forward_humongous(HeapRegion* hr);
- void add(HeapRegion* hr);
- void add_humongous(HeapRegion* hr);
+ void forward_humongous(G1HeapRegion* hr);
+ void add(G1HeapRegion* hr);
+ void add_humongous(G1HeapRegion* hr);
void remove_at_or_above(uint bottom);
- HeapRegion* current_region();
+ G1HeapRegion* current_region();
- GrowableArray<HeapRegion*>* regions();
+ GrowableArray<G1HeapRegion*>* regions();
PreservedMarks* preserved_stack() const {
assert(_preserved_stack != nullptr, "must be initialized");
diff --git a/src/hotspot/share/gc/g1/g1FullGCHeapRegionAttr.hpp b/src/hotspot/share/gc/g1/g1FullGCHeapRegionAttr.hpp
index 6acac0b0fa506..2fc7d74f331f1 100644
--- a/src/hotspot/share/gc/g1/g1FullGCHeapRegionAttr.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCHeapRegionAttr.hpp
@@ -34,7 +34,7 @@
// the table specifies whether a Full GC cycle should be compacting or skip
// compacting a region.
// Reasons for not compacting a region:
-// (1) the HeapRegion itself can not be moved during this phase of the full gc
+// (1) the G1HeapRegion itself can not be moved during this phase of the full gc
// (e.g. Humongous regions).
// (2) the occupancy of the region is too high to be considered eligible for compaction.
class G1FullGCHeapRegionAttr : public G1BiasedMappedArray<uint8_t> {
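Conceptually, the table this header describes holds one tag per region, consulted by hrm_index(). A rough standalone model (the state names and the plain vector are illustrative; the real table is a biased mapped array over the heap's reserved range):

#include <cstddef>
#include <cstdint>
#include <vector>

enum class RegionAttr : std::uint8_t { Invalid, Compacting, SkipCompacting, Free };

class RegionAttrTable {               // rough model of G1FullGCHeapRegionAttr
  std::vector<RegionAttr> _attr;
public:
  explicit RegionAttrTable(std::size_t num_regions)
    : _attr(num_regions, RegionAttr::Invalid) {}

  void set_compacting(unsigned idx)      { _attr[idx] = RegionAttr::Compacting; }
  void set_skip_compacting(unsigned idx) { _attr[idx] = RegionAttr::SkipCompacting; }
  void set_free(unsigned idx)            { _attr[idx] = RegionAttr::Free; }

  // Free regions count as targets too ("all free regions should be
  // compaction targets", per the reset-metadata assert later in this patch).
  bool is_compaction_target(unsigned idx) const {
    return _attr[idx] == RegionAttr::Compacting || _attr[idx] == RegionAttr::Free;
  }
};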
diff --git a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp
index e116367133d91..88460a07f4499 100644
--- a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.cpp
@@ -43,7 +43,7 @@ G1DetermineCompactionQueueClosure::G1DetermineCompactionQueueClosure(G1FullColle
_collector(collector),
_cur_worker(0) { }
-bool G1FullGCPrepareTask::G1CalculatePointersClosure::do_heap_region(HeapRegion* hr) {
+bool G1FullGCPrepareTask::G1CalculatePointersClosure::do_heap_region(G1HeapRegion* hr) {
uint region_idx = hr->hrm_index();
assert(_collector->is_compaction_target(region_idx), "must be");
@@ -78,7 +78,7 @@ void G1FullGCPrepareTask::work(uint worker_id) {
G1FullGCCompactionPoint* compaction_point = collector()->compaction_point(worker_id);
G1CalculatePointersClosure closure(collector(), compaction_point);
- for (GrowableArrayIterator<HeapRegion*> it = compaction_point->regions()->begin();
+ for (GrowableArrayIterator<G1HeapRegion*> it = compaction_point->regions()->begin();
it != compaction_point->regions()->end();
++it) {
closure.do_heap_region(*it);
@@ -113,7 +113,7 @@ size_t G1FullGCPrepareTask::G1PrepareCompactLiveClosure::apply(oop object) {
return size;
}
-void G1FullGCPrepareTask::G1CalculatePointersClosure::prepare_for_compaction(HeapRegion* hr) {
+void G1FullGCPrepareTask::G1CalculatePointersClosure::prepare_for_compaction(G1HeapRegion* hr) {
if (!_collector->is_free(hr->hrm_index())) {
G1PrepareCompactLiveClosure prepare_compact(_cp);
hr->apply_to_marked_objects(_bitmap, &prepare_compact);
diff --git a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp
index 9618986dcdd0c..a4dbbf2edbaea 100644
--- a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.hpp
@@ -33,7 +33,7 @@ class G1CollectedHeap;
class G1CMBitMap;
class G1FullCollector;
class G1FullGCCompactionPoint;
-class HeapRegion;
+class G1HeapRegion;
// Determines the regions in the heap that should be part of the compaction and
// distributes them among the compaction queues in round-robin fashion.
@@ -42,9 +42,9 @@ class G1DetermineCompactionQueueClosure : public HeapRegionClosure {
G1FullCollector* _collector;
uint _cur_worker;
- inline void free_empty_humongous_region(HeapRegion* hr);
+ inline void free_empty_humongous_region(G1HeapRegion* hr);
- inline bool should_compact(HeapRegion* hr) const;
+ inline bool should_compact(G1HeapRegion* hr) const;
// Returns the current worker id to assign a compaction point to, and selects
// the next one round-robin style.
@@ -52,12 +52,12 @@ class G1DetermineCompactionQueueClosure : public HeapRegionClosure {
inline G1FullGCCompactionPoint* next_compaction_point();
- inline void add_to_compaction_queue(HeapRegion* hr);
+ inline void add_to_compaction_queue(G1HeapRegion* hr);
public:
G1DetermineCompactionQueueClosure(G1FullCollector* collector);
- inline bool do_heap_region(HeapRegion* hr) override;
+ inline bool do_heap_region(G1HeapRegion* hr) override;
};
class G1FullGCPrepareTask : public G1FullGCTask {
@@ -80,13 +80,13 @@ class G1FullGCPrepareTask : public G1FullGCTask {
G1CMBitMap* _bitmap;
G1FullGCCompactionPoint* _cp;
- void prepare_for_compaction(HeapRegion* hr);
+ void prepare_for_compaction(G1HeapRegion* hr);
public:
G1CalculatePointersClosure(G1FullCollector* collector,
G1FullGCCompactionPoint* cp);
- bool do_heap_region(HeapRegion* hr);
+ bool do_heap_region(G1HeapRegion* hr);
};
class G1PrepareCompactLiveClosure : public StackObj {
diff --git a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.inline.hpp b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.inline.hpp
index 5cff8e5f412a1..2fb61cc593480 100644
--- a/src/hotspot/share/gc/g1/g1FullGCPrepareTask.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCPrepareTask.inline.hpp
@@ -33,13 +33,13 @@
#include "gc/g1/g1FullGCScope.hpp"
#include "gc/g1/g1HeapRegion.inline.hpp"
-void G1DetermineCompactionQueueClosure::free_empty_humongous_region(HeapRegion* hr) {
+void G1DetermineCompactionQueueClosure::free_empty_humongous_region(G1HeapRegion* hr) {
_g1h->free_humongous_region(hr, nullptr);
_collector->set_free(hr->hrm_index());
add_to_compaction_queue(hr);
}
-inline bool G1DetermineCompactionQueueClosure::should_compact(HeapRegion* hr) const {
+inline bool G1DetermineCompactionQueueClosure::should_compact(G1HeapRegion* hr) const {
// There is no need to iterate and forward objects in non-movable regions ie.
// prepare them for compaction.
if (hr->is_humongous() || hr->has_pinned_objects()) {
@@ -61,7 +61,7 @@ inline G1FullGCCompactionPoint* G1DetermineCompactionQueueClosure::next_compacti
return _collector->compaction_point(next_worker());
}
-inline void G1DetermineCompactionQueueClosure::add_to_compaction_queue(HeapRegion* hr) {
+inline void G1DetermineCompactionQueueClosure::add_to_compaction_queue(G1HeapRegion* hr) {
_collector->set_compaction_top(hr, hr->bottom());
_collector->set_has_compaction_targets();
@@ -73,12 +73,12 @@ inline void G1DetermineCompactionQueueClosure::add_to_compaction_queue(HeapRegio
cp->add(hr);
}
-static bool has_pinned_objects(HeapRegion* hr) {
+static bool has_pinned_objects(G1HeapRegion* hr) {
return hr->has_pinned_objects() ||
(hr->is_humongous() && hr->humongous_start_region()->has_pinned_objects());
}
-inline bool G1DetermineCompactionQueueClosure::do_heap_region(HeapRegion* hr) {
+inline bool G1DetermineCompactionQueueClosure::do_heap_region(G1HeapRegion* hr) {
if (should_compact(hr)) {
assert(!hr->is_humongous(), "moving humongous objects not supported.");
add_to_compaction_queue(hr);
diff --git a/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.cpp b/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.cpp
index ddbaaa0b75ef2..4c93aca84928f 100644
--- a/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.cpp
@@ -31,12 +31,12 @@ G1FullGCResetMetadataTask::G1ResetMetadataClosure::G1ResetMetadataClosure(G1Full
_g1h(G1CollectedHeap::heap()),
_collector(collector) { }
-void G1FullGCResetMetadataTask::G1ResetMetadataClosure::reset_region_metadata(HeapRegion* hr) {
+void G1FullGCResetMetadataTask::G1ResetMetadataClosure::reset_region_metadata(G1HeapRegion* hr) {
hr->rem_set()->clear();
hr->clear_cardtable();
}
-bool G1FullGCResetMetadataTask::G1ResetMetadataClosure::do_heap_region(HeapRegion* hr) {
+bool G1FullGCResetMetadataTask::G1ResetMetadataClosure::do_heap_region(G1HeapRegion* hr) {
uint const region_idx = hr->hrm_index();
if (!_collector->is_compaction_target(region_idx)) {
assert(!hr->is_free(), "all free regions should be compaction targets");
@@ -54,7 +54,7 @@ bool G1FullGCResetMetadataTask::G1ResetMetadataClosure::do_heap_region(HeapRegio
return false;
}
-void G1FullGCResetMetadataTask::G1ResetMetadataClosure::scrub_skip_compacting_region(HeapRegion* hr, bool update_bot_for_live) {
+void G1FullGCResetMetadataTask::G1ResetMetadataClosure::scrub_skip_compacting_region(G1HeapRegion* hr, bool update_bot_for_live) {
assert(hr->needs_scrubbing_during_full_gc(), "must be");
HeapWord* limit = hr->top();
@@ -82,7 +82,7 @@ void G1FullGCResetMetadataTask::G1ResetMetadataClosure::scrub_skip_compacting_re
}
}
-void G1FullGCResetMetadataTask::G1ResetMetadataClosure::reset_skip_compacting(HeapRegion* hr) {
+void G1FullGCResetMetadataTask::G1ResetMetadataClosure::reset_skip_compacting(G1HeapRegion* hr) {
#ifdef ASSERT
uint region_index = hr->hrm_index();
assert(_collector->is_skip_compacting(region_index), "Only call on is_skip_compacting regions");
diff --git a/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.hpp b/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.hpp
index f424906d56377..98d7af6e2f388 100644
--- a/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCResetMetadataTask.hpp
@@ -35,18 +35,18 @@ class G1FullGCResetMetadataTask : public G1FullGCTask {
G1CollectedHeap* _g1h;
G1FullCollector* _collector;
- void reset_region_metadata(HeapRegion* hr);
+ void reset_region_metadata(G1HeapRegion* hr);
// Scrub all runs of dead objects within the given region by putting filler
// objects and updating the corresponding BOT. If update_bot_for_live is true,
// also update the BOT for live objects.
- void scrub_skip_compacting_region(HeapRegion* hr, bool update_bot_for_live);
+ void scrub_skip_compacting_region(G1HeapRegion* hr, bool update_bot_for_live);
- void reset_skip_compacting(HeapRegion* r);
+ void reset_skip_compacting(G1HeapRegion* r);
public:
G1ResetMetadataClosure(G1FullCollector* collector);
- bool do_heap_region(HeapRegion* hr);
+ bool do_heap_region(G1HeapRegion* hr);
};
public:
diff --git a/src/hotspot/share/gc/g1/g1FullGCScope.cpp b/src/hotspot/share/gc/g1/g1FullGCScope.cpp
index 5bbc5c4e2d4ac..e0e79e8b60b25 100644
--- a/src/hotspot/share/gc/g1/g1FullGCScope.cpp
+++ b/src/hotspot/share/gc/g1/g1FullGCScope.cpp
@@ -52,8 +52,8 @@ G1FullGCScope::G1FullGCScope(G1MonitoringSupport* monitoring_support,
_monitoring_scope(monitoring_support),
_heap_printer(_g1h),
_region_compaction_threshold(do_maximal_compaction ?
- HeapRegion::GrainWords :
- (1 - MarkSweepDeadRatio / 100.0) * HeapRegion::GrainWords) { }
+ G1HeapRegion::GrainWords :
+ (1 - MarkSweepDeadRatio / 100.0) * G1HeapRegion::GrainWords) { }
bool G1FullGCScope::should_clear_soft_refs() {
return _soft_refs.should_clear();
diff --git a/src/hotspot/share/gc/g1/g1FullGCScope.hpp b/src/hotspot/share/gc/g1/g1FullGCScope.hpp
index f3d89c7646f61..ab3b99fea0494 100644
--- a/src/hotspot/share/gc/g1/g1FullGCScope.hpp
+++ b/src/hotspot/share/gc/g1/g1FullGCScope.hpp
@@ -52,7 +52,7 @@ class G1FullGCScope : public StackObj {
SvcGCMarker _svc_marker;
STWGCTimer _timer;
G1FullGCTracer* _tracer;
- IsGCActiveMark _active;
+ IsSTWGCActiveMark _active;
G1FullGCJFRTracerMark _tracer_mark;
ClearedAllSoftRefs _soft_refs;
G1FullGCMonitoringScope _monitoring_scope;
diff --git a/src/hotspot/share/gc/g1/g1HRPrinter.hpp b/src/hotspot/share/gc/g1/g1HRPrinter.hpp
deleted file mode 100644
index 5c1736b13ab56..0000000000000
--- a/src/hotspot/share/gc/g1/g1HRPrinter.hpp
+++ /dev/null
@@ -1,130 +0,0 @@
-/*
- * Copyright (c) 2011, 2021, Oracle and/or its affiliates. All rights reserved.
- * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
- *
- * This code is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License version 2 only, as
- * published by the Free Software Foundation.
- *
- * This code is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
- * version 2 for more details (a copy is included in the LICENSE file that
- * accompanied this code).
- *
- * You should have received a copy of the GNU General Public License version
- * 2 along with this work; if not, write to the Free Software Foundation,
- * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
- *
- * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
- * or visit www.oracle.com if you need additional information or have any
- * questions.
- *
- */
-
-#ifndef SHARE_GC_G1_G1HRPRINTER_HPP
-#define SHARE_GC_G1_G1HRPRINTER_HPP
-
-#include "gc/g1/g1HeapRegion.hpp"
-#include "logging/log.hpp"
-
-class FreeRegionList;
-
-class G1HRPrinter {
-
-private:
-
- // Print an action event.
- static void print(const char* action, HeapRegion* hr) {
- log_trace(gc, region)("G1HR %s(%s) [" PTR_FORMAT ", " PTR_FORMAT ", " PTR_FORMAT "]",
- action, hr->get_type_str(), p2i(hr->bottom()), p2i(hr->top()), p2i(hr->end()));
- }
-
- void mark_reclaim(HeapRegion* hr) {
- print("MARK-RECLAIM", hr);
- }
-
-public:
- // In some places we iterate over a list in order to generate output
- // for the list's elements. By exposing this we can avoid this
- // iteration if the printer is not active.
- bool is_active() { return log_is_enabled(Trace, gc, region); }
-
- // The methods below are convenient wrappers for the print() method.
-
- void alloc(HeapRegion* hr, bool force = false) {
- if (is_active()) {
- print((force) ? "ALLOC-FORCE" : "ALLOC", hr);
- }
- }
-
- void retire(HeapRegion* hr) {
- if (is_active()) {
- print("RETIRE", hr);
- }
- }
-
- void reuse(HeapRegion* hr) {
- if (is_active()) {
- print("REUSE", hr);
- }
- }
-
- void cset(HeapRegion* hr) {
- if (is_active()) {
- print("CSET", hr);
- }
- }
-
- void evac_failure(HeapRegion* hr) {
- if (is_active()) {
- print("EVAC-FAILURE", hr);
- }
- }
-
- void mark_reclaim(FreeRegionList* free_list);
-
- void eager_reclaim(HeapRegion* hr) {
- if (is_active()) {
- print("EAGER-RECLAIM", hr);
- }
- }
-
- void evac_reclaim(HeapRegion* hr) {
- if (is_active()) {
- print("EVAC-RECLAIM", hr);
- }
- }
-
- void post_compaction(HeapRegion* hr) {
- if (is_active()) {
- print("POST-COMPACTION", hr);
- }
- }
-
- void commit(HeapRegion* hr) {
- if (is_active()) {
- print("COMMIT", hr);
- }
- }
-
- void active(HeapRegion* hr) {
- if (is_active()) {
- print("ACTIVE", hr);
- }
- }
-
- void inactive(HeapRegion* hr) {
- if (is_active()) {
- print("INACTIVE", hr);
- }
- }
-
- void uncommit(HeapRegion* hr) {
- if (is_active()) {
- print("UNCOMMIT", hr);
- }
- }
-};
-
-#endif // SHARE_GC_G1_G1HRPRINTER_HPP
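Editorial note on the deleted printer: its is_active() idiom, checking log_is_enabled before doing any per-region work, is the part worth keeping in mind when reading callers. A standalone sketch of the same gating pattern, with the logging predicate stubbed out:

#include <cstdio>
#include <vector>

struct Region { unsigned index; };    // stand-in for G1HeapRegion

// Stub for log_is_enabled(Trace, gc, region); imagine it consulting
// the JVM's -Xlog configuration.
static bool region_trace_enabled() { return false; }

static void print_action(const char* action, const Region& r) {
  std::printf("G1HR %s region %u\n", action, r.index);
}

// Mirrors the printer's contract: callers skip the whole iteration when
// tracing is off, not just the individual print calls.
static void print_retired(const std::vector<Region>& regions) {
  if (!region_trace_enabled()) {
    return;                           // avoid iterating for nothing
  }
  for (const Region& r : regions) {
    print_action("RETIRE", r);
  }
}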
diff --git a/src/hotspot/share/gc/g1/g1HeapRegion.cpp b/src/hotspot/share/gc/g1/g1HeapRegion.cpp
index 291648393a928..94382a3b256d1 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegion.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegion.cpp
@@ -48,21 +48,21 @@
#include "runtime/globals_extension.hpp"
#include "utilities/powerOfTwo.hpp"
-uint HeapRegion::LogOfHRGrainBytes = 0;
-uint HeapRegion::LogCardsPerRegion = 0;
-size_t HeapRegion::GrainBytes = 0;
-size_t HeapRegion::GrainWords = 0;
-size_t HeapRegion::CardsPerRegion = 0;
+uint G1HeapRegion::LogOfHRGrainBytes = 0;
+uint G1HeapRegion::LogCardsPerRegion = 0;
+size_t G1HeapRegion::GrainBytes = 0;
+size_t G1HeapRegion::GrainWords = 0;
+size_t G1HeapRegion::CardsPerRegion = 0;
-size_t HeapRegion::max_region_size() {
+size_t G1HeapRegion::max_region_size() {
return HeapRegionBounds::max_size();
}
-size_t HeapRegion::min_region_size_in_words() {
+size_t G1HeapRegion::min_region_size_in_words() {
return HeapRegionBounds::min_size() >> LogHeapWordSize;
}
-void HeapRegion::setup_heap_region_size(size_t max_heap_size) {
+void G1HeapRegion::setup_heap_region_size(size_t max_heap_size) {
size_t region_size = G1HeapRegionSize;
// G1HeapRegionSize = 0 means decide ergonomically.
if (region_size == 0) {
@@ -98,7 +98,7 @@ void HeapRegion::setup_heap_region_size(size_t max_heap_size) {
}
}
-void HeapRegion::handle_evacuation_failure(bool retain) {
+void G1HeapRegion::handle_evacuation_failure(bool retain) {
uninstall_surv_rate_group();
clear_young_index_in_cset();
clear_index_in_opt_cset();
@@ -108,13 +108,13 @@ void HeapRegion::handle_evacuation_failure(bool retain) {
_rem_set->clear(true /* only_cardset */, retain /* keep_tracked */);
}
-void HeapRegion::unlink_from_list() {
+void G1HeapRegion::unlink_from_list() {
set_next(nullptr);
set_prev(nullptr);
set_containing_set(nullptr);
}
-void HeapRegion::hr_clear(bool clear_space) {
+void G1HeapRegion::hr_clear(bool clear_space) {
set_top(bottom());
clear_young_index_in_cset();
clear_index_in_opt_cset();
@@ -132,12 +132,12 @@ void HeapRegion::hr_clear(bool clear_space) {
if (clear_space) clear(SpaceDecorator::Mangle);
}
-void HeapRegion::clear_cardtable() {
+void G1HeapRegion::clear_cardtable() {
G1CardTable* ct = G1CollectedHeap::heap()->card_table();
ct->clear_MemRegion(MemRegion(bottom(), end()));
}
-double HeapRegion::calc_gc_efficiency() {
+double G1HeapRegion::calc_gc_efficiency() {
// GC efficiency is the ratio of how much space would be
// reclaimed over how long we predict it would take to reclaim it.
G1Policy* policy = G1CollectedHeap::heap()->policy();
@@ -149,38 +149,41 @@ double HeapRegion::calc_gc_efficiency() {
return (double)reclaimable_bytes() / region_elapsed_time_ms;
}
-void HeapRegion::set_free() {
- report_region_type_change(G1HeapRegionTraceType::Free);
+void G1HeapRegion::set_free() {
+ if (!is_free()) {
+ report_region_type_change(G1HeapRegionTraceType::Free);
+ }
_type.set_free();
}
-void HeapRegion::set_eden() {
+void G1HeapRegion::set_eden() {
report_region_type_change(G1HeapRegionTraceType::Eden);
_type.set_eden();
}
-void HeapRegion::set_eden_pre_gc() {
+void G1HeapRegion::set_eden_pre_gc() {
report_region_type_change(G1HeapRegionTraceType::Eden);
_type.set_eden_pre_gc();
}
-void HeapRegion::set_survivor() {
+void G1HeapRegion::set_survivor() {
report_region_type_change(G1HeapRegionTraceType::Survivor);
_type.set_survivor();
}
-void HeapRegion::move_to_old() {
+void G1HeapRegion::move_to_old() {
+ G1HeapRegionTraceType::Type prev_trace_type = _type.get_trace_type();
if (_type.relabel_as_old()) {
- report_region_type_change(G1HeapRegionTraceType::Old);
+ report_region_type_change(prev_trace_type);
}
}
-void HeapRegion::set_old() {
+void G1HeapRegion::set_old() {
report_region_type_change(G1HeapRegionTraceType::Old);
_type.set_old();
}
-void HeapRegion::set_starts_humongous(HeapWord* obj_top, size_t fill_size) {
+void G1HeapRegion::set_starts_humongous(HeapWord* obj_top, size_t fill_size) {
assert(!is_humongous(), "sanity / pre-condition");
assert(top() == bottom(), "should be empty");
@@ -194,7 +197,7 @@ void HeapRegion::set_starts_humongous(HeapWord* obj_top, size_t fill_size) {
}
}
-void HeapRegion::set_continues_humongous(HeapRegion* first_hr) {
+void G1HeapRegion::set_continues_humongous(G1HeapRegion* first_hr) {
assert(!is_humongous(), "sanity / pre-condition");
assert(top() == bottom(), "should be empty");
assert(first_hr->is_starts_humongous(), "pre-condition");
@@ -204,18 +207,18 @@ void HeapRegion::set_continues_humongous(HeapRegion* first_hr) {
_humongous_start_region = first_hr;
}
-void HeapRegion::clear_humongous() {
+void G1HeapRegion::clear_humongous() {
assert(is_humongous(), "pre-condition");
- assert(capacity() == HeapRegion::GrainBytes, "pre-condition");
+ assert(capacity() == G1HeapRegion::GrainBytes, "pre-condition");
_humongous_start_region = nullptr;
}
-void HeapRegion::prepare_remset_for_scan() {
+void G1HeapRegion::prepare_remset_for_scan() {
_rem_set->reset_table_scanner();
}
-HeapRegion::HeapRegion(uint hrm_index,
+G1HeapRegion::G1HeapRegion(uint hrm_index,
G1BlockOffsetTable* bot,
MemRegion mr,
G1CardSetConfiguration* config) :
@@ -248,7 +251,7 @@ HeapRegion::HeapRegion(uint hrm_index,
initialize();
}
-void HeapRegion::initialize(bool clear_space, bool mangle_space) {
+void G1HeapRegion::initialize(bool clear_space, bool mangle_space) {
assert(_rem_set->is_empty(), "Remembered set must be empty");
if (clear_space) {
@@ -260,7 +263,7 @@ void HeapRegion::initialize(bool clear_space, bool mangle_space) {
hr_clear(false /*clear_space*/);
}
-void HeapRegion::report_region_type_change(G1HeapRegionTraceType::Type to) {
+void G1HeapRegion::report_region_type_change(G1HeapRegionTraceType::Type to) {
HeapRegionTracer::send_region_type_change(_hrm_index,
get_trace_type(),
to,
@@ -268,7 +271,7 @@ void HeapRegion::report_region_type_change(G1HeapRegionTraceType::Type to) {
used());
}
- void HeapRegion::note_evacuation_failure() {
+ void G1HeapRegion::note_evacuation_failure() {
// PB must be bottom - we only evacuate old gen regions after scrubbing, and
// young gen regions never have their PB set to anything other than bottom.
assert(parsable_bottom_acquire() == bottom(), "must be");
@@ -276,25 +279,25 @@ void HeapRegion::report_region_type_change(G1HeapRegionTraceType::Type to) {
_garbage_bytes = 0;
}
-void HeapRegion::note_self_forward_chunk_done(size_t garbage_bytes) {
+void G1HeapRegion::note_self_forward_chunk_done(size_t garbage_bytes) {
Atomic::add(&_garbage_bytes, garbage_bytes, memory_order_relaxed);
}
// Code roots support
-void HeapRegion::add_code_root(nmethod* nm) {
+void G1HeapRegion::add_code_root(nmethod* nm) {
rem_set()->add_code_root(nm);
}
-void HeapRegion::remove_code_root(nmethod* nm) {
+void G1HeapRegion::remove_code_root(nmethod* nm) {
rem_set()->remove_code_root(nm);
}
-void HeapRegion::code_roots_do(NMethodClosure* blk) const {
+void G1HeapRegion::code_roots_do(NMethodClosure* blk) const {
rem_set()->code_roots_do(blk);
}
class VerifyCodeRootOopClosure: public OopClosure {
- const HeapRegion* _hr;
+ const G1HeapRegion* _hr;
bool _failures;
bool _has_oops_in_region;
@@ -321,7 +324,7 @@ class VerifyCodeRootOopClosure: public OopClosure {
}
public:
- VerifyCodeRootOopClosure(const HeapRegion* hr):
+ VerifyCodeRootOopClosure(const G1HeapRegion* hr):
_hr(hr), _failures(false), _has_oops_in_region(false) {}
void do_oop(narrowOop* p) { do_oop_work(p); }
@@ -332,10 +335,10 @@ class VerifyCodeRootOopClosure: public OopClosure {
};
class VerifyCodeRootNMethodClosure: public NMethodClosure {
- const HeapRegion* _hr;
+ const G1HeapRegion* _hr;
bool _failures;
public:
- VerifyCodeRootNMethodClosure(const HeapRegion* hr) :
+ VerifyCodeRootNMethodClosure(const G1HeapRegion* hr) :
_hr(hr), _failures(false) {}
void do_nmethod(nmethod* nm) {
@@ -358,7 +361,7 @@ class VerifyCodeRootNMethodClosure: public NMethodClosure {
bool failures() { return _failures; }
};
-bool HeapRegion::verify_code_roots(VerifyOption vo) const {
+bool G1HeapRegion::verify_code_roots(VerifyOption vo) const {
if (!G1VerifyHeapRegionCodeRoots) {
// We're not verifying code roots.
return false;
@@ -403,9 +406,9 @@ bool HeapRegion::verify_code_roots(VerifyOption vo) const {
return nm_cl.failures();
}
-void HeapRegion::print() const { print_on(tty); }
+void G1HeapRegion::print() const { print_on(tty); }
-void HeapRegion::print_on(outputStream* st) const {
+void G1HeapRegion::print_on(outputStream* st) const {
st->print("|%4u", this->_hrm_index);
st->print("|" PTR_FORMAT ", " PTR_FORMAT ", " PTR_FORMAT,
p2i(bottom()), p2i(top()), p2i(end()));
@@ -519,13 +522,13 @@ class G1VerifyLiveAndRemSetClosure : public BasicOopIterateClosure {
return _failures->record_failure();
}
- void print_containing_obj(outputStream* out, HeapRegion* from) {
+ void print_containing_obj(outputStream* out, G1HeapRegion* from) {
log_error(gc, verify)("Field " PTR_FORMAT " of obj " PTR_FORMAT " in region " HR_FORMAT,
p2i(_p), p2i(_containing_obj), HR_FORMAT_PARAMS(from));
print_object(out, _containing_obj);
}
- void print_referenced_obj(outputStream* out, HeapRegion* to, const char* explanation) {
+ void print_referenced_obj(outputStream* out, G1HeapRegion* to, const char* explanation) {
log_error(gc, verify)("points to %sobj " PTR_FORMAT " in region " HR_FORMAT " remset %s",
explanation, p2i(_obj), HR_FORMAT_PARAMS(to), to->rem_set()->get_state_str());
print_object(out, _obj);
@@ -558,13 +561,13 @@ class G1VerifyLiveAndRemSetClosure : public BasicOopIterateClosure {
log.error("----------");
}
- HeapRegion* from = this->_g1h->heap_region_containing(this->_p);
+ G1HeapRegion* from = this->_g1h->heap_region_containing(this->_p);
this->print_containing_obj(&ls, from);
if (!_is_in_heap) {
log.error("points to address " PTR_FORMAT " outside of heap", p2i(this->_obj));
} else {
- HeapRegion* to = this->_g1h->heap_region_containing(this->_obj);
+ G1HeapRegion* to = this->_g1h->heap_region_containing(this->_obj);
this->print_referenced_obj(&ls, to, "dead ");
}
log.error("----------");
@@ -575,8 +578,8 @@ class G1VerifyLiveAndRemSetClosure : public BasicOopIterateClosure {
struct RemSetChecker : public Checker {
using CardValue = CardTable::CardValue;
- HeapRegion* _from;
- HeapRegion* _to;
+ G1HeapRegion* _from;
+ G1HeapRegion* _to;
CardValue _cv_obj;
CardValue _cv_field;
@@ -658,7 +661,7 @@ class G1VerifyLiveAndRemSetClosure : public BasicOopIterateClosure {
virtual inline void do_oop(oop* p) { do_oop_work(p); }
};
-bool HeapRegion::verify_liveness_and_remset(VerifyOption vo) const {
+bool G1HeapRegion::verify_liveness_and_remset(VerifyOption vo) const {
G1CollectedHeap* g1h = G1CollectedHeap::heap();
G1VerifyFailureCounter failures;
@@ -691,7 +694,7 @@ bool HeapRegion::verify_liveness_and_remset(VerifyOption vo) const {
return failures.count() != 0;
}
-bool HeapRegion::verify(VerifyOption vo) const {
+bool G1HeapRegion::verify(VerifyOption vo) const {
// We cast p to an oop, so region-bottom must be an obj-start.
assert(!is_humongous() || is_starts_humongous(), "invariant");
@@ -710,7 +713,7 @@ bool HeapRegion::verify(VerifyOption vo) const {
return verify_code_roots(vo);
}
-void HeapRegion::clear(bool mangle_space) {
+void G1HeapRegion::clear(bool mangle_space) {
set_top(bottom());
if (ZapUnusedHeapArea && mangle_space) {
@@ -719,12 +722,12 @@ void HeapRegion::clear(bool mangle_space) {
}
#ifndef PRODUCT
-void HeapRegion::mangle_unused_area() {
+void G1HeapRegion::mangle_unused_area() {
SpaceMangler::mangle_region(MemRegion(top(), end()));
}
#endif
-void HeapRegion::object_iterate(ObjectClosure* blk) {
+void G1HeapRegion::object_iterate(ObjectClosure* blk) {
HeapWord* p = bottom();
while (p < top()) {
if (block_is_obj(p, parsable_bottom())) {
@@ -734,7 +737,7 @@ void HeapRegion::object_iterate(ObjectClosure* blk) {
}
}
-void HeapRegion::fill_with_dummy_object(HeapWord* address, size_t word_size, bool zap) {
+void G1HeapRegion::fill_with_dummy_object(HeapWord* address, size_t word_size, bool zap) {
// Keep the BOT in sync for old generation regions.
if (is_old()) {
update_bot_for_block(address, address + word_size);
@@ -743,7 +746,7 @@ void HeapRegion::fill_with_dummy_object(HeapWord* address, size_t word_size, boo
CollectedHeap::fill_with_object(address, word_size, zap);
}
-void HeapRegion::fill_range_with_dead_objects(HeapWord* start, HeapWord* end) {
+void G1HeapRegion::fill_range_with_dead_objects(HeapWord* start, HeapWord* end) {
size_t range_size = pointer_delta(end, start);
// We must be a bit careful with regions that contain pinned objects. While the
diff --git a/src/hotspot/share/gc/g1/g1HeapRegion.hpp b/src/hotspot/share/gc/g1/g1HeapRegion.hpp
index d6be0837069ae..dc2c71bcbb450 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegion.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegion.hpp
@@ -41,7 +41,7 @@ class G1CollectedHeap;
class G1CMBitMap;
class G1Predictions;
class HeapRegionRemSet;
-class HeapRegion;
+class G1HeapRegion;
class HeapRegionSetBase;
class nmethod;
@@ -54,7 +54,7 @@ class nmethod;
// sentinel value for hrm_index
#define G1_NO_HRM_INDEX ((uint) -1)
-// A HeapRegion is the smallest piece of a G1CollectedHeap that
+// A G1HeapRegion is the smallest piece of a G1CollectedHeap that
// can be collected independently.
// Each heap region is self contained. top() and end() can never
@@ -66,7 +66,7 @@ class nmethod;
// the last will point to their own end. The last ContinuesHumongous
// region may have top() equal the end of object if there isn't
// room for filler objects to pad out to the end of the region.
-class HeapRegion : public CHeapObj<mtGC> {
+class G1HeapRegion : public CHeapObj<mtGC> {
friend class VMStructs;
HeapWord* const _bottom;
@@ -130,10 +130,10 @@ class HeapRegion : public CHeapObj<mtGC> {
// Try to allocate at least min_word_size and up to desired_size from this region.
// Returns null if not possible, otherwise sets actual_word_size to the amount of
// space allocated.
- // This version assumes that all allocation requests to this HeapRegion are properly
+ // This version assumes that all allocation requests to this G1HeapRegion are properly
// synchronized.
inline HeapWord* allocate_impl(size_t min_word_size, size_t desired_word_size, size_t* actual_word_size);
- // Try to allocate at least min_word_size and up to desired_size from this HeapRegion.
+ // Try to allocate at least min_word_size and up to desired_size from this G1HeapRegion.
// Returns null if not possible, otherwise sets actual_word_size to the amount of
// space allocated.
// This version synchronizes with other calls to par_allocate_impl().
@@ -177,12 +177,12 @@ class HeapRegion : public CHeapObj<mtGC> {
// Update skip-compacting heap region to be consistent after Full GC.
void reset_skip_compacting_after_full_gc();
- // All allocated blocks are occupied by objects in a HeapRegion.
+ // All allocated blocks are occupied by objects in a G1HeapRegion.
bool block_is_obj(const HeapWord* p, HeapWord* pb) const;
// Returns the object size for all valid block starts. If parsable_bottom (pb)
// is given, calculates the block size based on that parsable_bottom, not the
- // current value of this HeapRegion.
+ // current value of this G1HeapRegion.
size_t block_size(const HeapWord* p) const;
size_t block_size(const HeapWord* p, HeapWord* pb) const;
@@ -205,7 +205,7 @@ class HeapRegion : public CHeapObj<mtGC> {
HeapRegionType _type;
// For a humongous region, region in which it starts.
- HeapRegion* _humongous_start_region;
+ G1HeapRegion* _humongous_start_region;
static const uint InvalidCSetIndex = UINT_MAX;
@@ -214,8 +214,8 @@ class HeapRegion : public CHeapObj<mtGC> {
uint _index_in_opt_cset;
// Fields used by the HeapRegionSetBase class and subclasses.
- HeapRegion* _next;
- HeapRegion* _prev;
+ G1HeapRegion* _next;
+ G1HeapRegion* _prev;
#ifdef ASSERT
HeapRegionSetBase* _containing_set;
#endif // ASSERT
@@ -273,7 +273,7 @@ class HeapRegion : public CHeapObj<mtGC> {
inline HeapWord* next_live_in_unparsable(const HeapWord* p, HeapWord* limit) const;
public:
- HeapRegion(uint hrm_index,
+ G1HeapRegion(uint hrm_index,
G1BlockOffsetTable* bot,
MemRegion mr,
G1CardSetConfiguration* config);
@@ -282,7 +282,7 @@ class HeapRegion : public CHeapObj<mtGC> {
// sequence, otherwise -1.
uint hrm_index() const { return _hrm_index; }
- // Initializing the HeapRegion not only resets the data structure, but also
+ // Initializing the G1HeapRegion not only resets the data structure, but also
// resets the BOT for that heap region.
// The default values for clear_space means that we will do the clearing if
// there's clearing to be done ourselves. We also always mangle the space.
@@ -400,7 +400,7 @@ class HeapRegion : public CHeapObj<mtGC> {
void set_old();
// For a humongous region, region in which it starts.
- HeapRegion* humongous_start_region() const {
+ G1HeapRegion* humongous_start_region() const {
return _humongous_start_region;
}
@@ -415,7 +415,7 @@ class HeapRegion : public CHeapObj<mtGC> {
// Makes the current region be a "continues humongous"
// region. first_hr is the "start humongous" region of the series
// which this region will be part of.
- void set_continues_humongous(HeapRegion* first_hr);
+ void set_continues_humongous(G1HeapRegion* first_hr);
// Unsets the humongous-related fields on the region.
void clear_humongous();
@@ -434,11 +434,11 @@ class HeapRegion : public CHeapObj<mtGC> {
// Getter and setter for the next and prev fields used to link regions into
// linked lists.
- void set_next(HeapRegion* next) { _next = next; }
- HeapRegion* next() { return _next; }
+ void set_next(G1HeapRegion* next) { _next = next; }
+ G1HeapRegion* next() { return _next; }
- void set_prev(HeapRegion* prev) { _prev = prev; }
- HeapRegion* prev() { return _prev; }
+ void set_prev(G1HeapRegion* prev) { _prev = prev; }
+ G1HeapRegion* prev() { return _prev; }
void unlink_from_list();
@@ -466,8 +466,8 @@ class HeapRegion : public CHeapObj<mtGC> {
#endif // ASSERT
- // Reset the HeapRegion to default values and clear its remembered set.
- // If clear_space is true, clear the HeapRegion's memory.
+ // Reset the G1HeapRegion to default values and clear its remembered set.
+ // If clear_space is true, clear the G1HeapRegion's memory.
// Callers must ensure this is not called by multiple threads at the same time.
void hr_clear(bool clear_space);
// Clear the card table corresponding to this region.
@@ -568,7 +568,7 @@ class HeapRegionClosure : public StackObj {
HeapRegionClosure(): _is_complete(true) {}
// Typically called on each region until it returns true.
- virtual bool do_heap_region(HeapRegion* r) = 0;
+ virtual bool do_heap_region(G1HeapRegion* r) = 0;
// True after iteration if the closure was applied to all heap regions
// and returned "false" in all cases.
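
The HeapRegionClosure contract in the hunk above is the visitor idiom used throughout G1: do_heap_region() is applied to each region in turn, and returning true aborts the iteration early. A minimal standalone sketch of the same pattern (plain C++ with illustrative names, not HotSpot code):

#include <cstdint>
#include <vector>

// Stand-in for G1HeapRegion; only what the sketch needs.
struct Region {
  uint32_t index;
  bool is_free;
};

// Mirrors the HeapRegionClosure contract: applied to each region in turn,
// returning true aborts the iteration.
struct RegionClosure {
  virtual ~RegionClosure() = default;
  virtual bool do_heap_region(Region* r) = 0;
};

struct CountFreeRegions : RegionClosure {
  uint32_t count = 0;
  bool do_heap_region(Region* r) override {
    if (r->is_free) {
      count++;
    }
    return false;  // false == keep iterating over all regions
  }
};

void iterate(std::vector<Region>& regions, RegionClosure* cl) {
  for (Region& r : regions) {
    if (cl->do_heap_region(&r)) {
      break;  // the closure asked to stop early
    }
  }
}

DumpEventInfoClosure further down in this patch is a real instance of this contract.
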
diff --git a/src/hotspot/share/gc/g1/g1HeapRegion.inline.hpp b/src/hotspot/share/gc/g1/g1HeapRegion.inline.hpp
index 401540804ba68..8e5e594b5cac2 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegion.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegion.inline.hpp
@@ -42,7 +42,7 @@
#include "utilities/align.hpp"
#include "utilities/globalDefinitions.hpp"
-inline HeapWord* HeapRegion::allocate_impl(size_t min_word_size,
+inline HeapWord* G1HeapRegion::allocate_impl(size_t min_word_size,
size_t desired_word_size,
size_t* actual_size) {
HeapWord* obj = top();
@@ -59,7 +59,7 @@ inline HeapWord* HeapRegion::allocate_impl(size_t min_word_size,
}
}
-inline HeapWord* HeapRegion::par_allocate_impl(size_t min_word_size,
+inline HeapWord* G1HeapRegion::par_allocate_impl(size_t min_word_size,
size_t desired_word_size,
size_t* actual_size) {
do {
@@ -83,11 +83,11 @@ inline HeapWord* HeapRegion::par_allocate_impl(size_t min_word_size,
} while (true);
}
-inline HeapWord* HeapRegion::block_start(const void* addr) const {
+inline HeapWord* G1HeapRegion::block_start(const void* addr) const {
return block_start(addr, parsable_bottom_acquire());
}
-inline HeapWord* HeapRegion::advance_to_block_containing_addr(const void* addr,
+inline HeapWord* G1HeapRegion::advance_to_block_containing_addr(const void* addr,
HeapWord* const pb,
HeapWord* first_block) const {
HeapWord* cur_block = first_block;
@@ -104,25 +104,25 @@ inline HeapWord* HeapRegion::advance_to_block_containing_addr(const void* addr,
}
}
-inline HeapWord* HeapRegion::block_start(const void* addr, HeapWord* const pb) const {
+inline HeapWord* G1HeapRegion::block_start(const void* addr, HeapWord* const pb) const {
assert(addr >= bottom() && addr < top(), "invalid address");
HeapWord* first_block = _bot->block_start_reaching_into_card(addr);
return advance_to_block_containing_addr(addr, pb, first_block);
}
-inline bool HeapRegion::is_in_parsable_area(const void* const addr) const {
+inline bool G1HeapRegion::is_in_parsable_area(const void* const addr) const {
return is_in_parsable_area(addr, parsable_bottom());
}
-inline bool HeapRegion::is_in_parsable_area(const void* const addr, const void* const pb) {
+inline bool G1HeapRegion::is_in_parsable_area(const void* const addr, const void* const pb) {
return addr >= pb;
}
-inline bool HeapRegion::is_marked_in_bitmap(oop obj) const {
+inline bool G1HeapRegion::is_marked_in_bitmap(oop obj) const {
return G1CollectedHeap::heap()->concurrent_mark()->mark_bitmap()->is_marked(obj);
}
-inline bool HeapRegion::block_is_obj(const HeapWord* const p, HeapWord* const pb) const {
+inline bool G1HeapRegion::block_is_obj(const HeapWord* const p, HeapWord* const pb) const {
assert(p >= bottom() && p < top(), "precondition");
assert(!is_continues_humongous(), "p must point to block-start");
@@ -141,24 +141,24 @@ inline bool HeapRegion::block_is_obj(const HeapWord* const p, HeapWord* const pb
return is_marked_in_bitmap(cast_to_oop(p));
}
-inline HeapWord* HeapRegion::next_live_in_unparsable(G1CMBitMap* const bitmap, const HeapWord* p, HeapWord* const limit) const {
+inline HeapWord* G1HeapRegion::next_live_in_unparsable(G1CMBitMap* const bitmap, const HeapWord* p, HeapWord* const limit) const {
return bitmap->get_next_marked_addr(p, limit);
}
-inline HeapWord* HeapRegion::next_live_in_unparsable(const HeapWord* p, HeapWord* const limit) const {
+inline HeapWord* G1HeapRegion::next_live_in_unparsable(const HeapWord* p, HeapWord* const limit) const {
G1CMBitMap* bitmap = G1CollectedHeap::heap()->concurrent_mark()->mark_bitmap();
return next_live_in_unparsable(bitmap, p, limit);
}
-inline bool HeapRegion::is_collection_set_candidate() const {
+inline bool G1HeapRegion::is_collection_set_candidate() const {
return G1CollectedHeap::heap()->is_collection_set_candidate(this);
}
-inline size_t HeapRegion::block_size(const HeapWord* p) const {
+inline size_t G1HeapRegion::block_size(const HeapWord* p) const {
return block_size(p, parsable_bottom());
}
-inline size_t HeapRegion::block_size(const HeapWord* p, HeapWord* const pb) const {
+inline size_t G1HeapRegion::block_size(const HeapWord* p, HeapWord* const pb) const {
assert(p < top(), "precondition");
if (!block_is_obj(p, pb)) {
@@ -168,26 +168,26 @@ inline size_t HeapRegion::block_size(const HeapWord* p, HeapWord* const pb) cons
return cast_to_oop(p)->size();
}
-inline void HeapRegion::prepare_for_full_gc() {
+inline void G1HeapRegion::prepare_for_full_gc() {
// After marking and class unloading the heap temporarily contains dead objects
// with unloaded klasses. Moving parsable_bottom makes some (debug) code correctly
// skip dead objects.
_parsable_bottom = top();
}
-inline void HeapRegion::reset_compacted_after_full_gc(HeapWord* new_top) {
+inline void G1HeapRegion::reset_compacted_after_full_gc(HeapWord* new_top) {
set_top(new_top);
reset_after_full_gc_common();
}
-inline void HeapRegion::reset_skip_compacting_after_full_gc() {
+inline void G1HeapRegion::reset_skip_compacting_after_full_gc() {
assert(!is_free(), "must be");
reset_after_full_gc_common();
}
-inline void HeapRegion::reset_after_full_gc_common() {
+inline void G1HeapRegion::reset_after_full_gc_common() {
// After a full gc the mark information in a movable region is invalid. Reset marking
// information.
G1CollectedHeap::heap()->concurrent_mark()->reset_top_at_mark_start(this);
@@ -204,7 +204,7 @@ inline void HeapRegion::reset_after_full_gc_common() {
}
template <typename ApplyToMarkedClosure>
-inline void HeapRegion::apply_to_marked_objects(G1CMBitMap* bitmap, ApplyToMarkedClosure* closure) {
+inline void G1HeapRegion::apply_to_marked_objects(G1CMBitMap* bitmap, ApplyToMarkedClosure* closure) {
HeapWord* limit = top();
HeapWord* next_addr = bottom();
@@ -224,24 +224,24 @@ inline void HeapRegion::apply_to_marked_objects(G1CMBitMap* bitmap, ApplyToMarke
assert(next_addr == limit, "Should stop the scan at the limit.");
}
-inline HeapWord* HeapRegion::par_allocate(size_t min_word_size,
+inline HeapWord* G1HeapRegion::par_allocate(size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size) {
return par_allocate_impl(min_word_size, desired_word_size, actual_word_size);
}
-inline HeapWord* HeapRegion::allocate(size_t word_size) {
+inline HeapWord* G1HeapRegion::allocate(size_t word_size) {
size_t temp;
return allocate(word_size, word_size, &temp);
}
-inline HeapWord* HeapRegion::allocate(size_t min_word_size,
+inline HeapWord* G1HeapRegion::allocate(size_t min_word_size,
size_t desired_word_size,
size_t* actual_word_size) {
return allocate_impl(min_word_size, desired_word_size, actual_word_size);
}
-inline void HeapRegion::update_bot() {
+inline void G1HeapRegion::update_bot() {
HeapWord* next_addr = bottom();
HeapWord* prev_addr;
@@ -253,7 +253,7 @@ inline void HeapRegion::update_bot() {
assert(next_addr == top(), "Should stop the scan at the limit.");
}
-inline void HeapRegion::update_bot_for_block(HeapWord* start, HeapWord* end) {
+inline void G1HeapRegion::update_bot_for_block(HeapWord* start, HeapWord* end) {
assert(is_in(start), "The start address must be in this region: " HR_FORMAT
" start " PTR_FORMAT " end " PTR_FORMAT,
HR_FORMAT_PARAMS(this),
@@ -262,20 +262,20 @@ inline void HeapRegion::update_bot_for_block(HeapWord* start, HeapWord* end) {
_bot->update_for_block(start, end);
}
-inline HeapWord* HeapRegion::parsable_bottom() const {
+inline HeapWord* G1HeapRegion::parsable_bottom() const {
assert(!is_init_completed() || SafepointSynchronize::is_at_safepoint(), "only during initialization or safepoint");
return _parsable_bottom;
}
-inline HeapWord* HeapRegion::parsable_bottom_acquire() const {
+inline HeapWord* G1HeapRegion::parsable_bottom_acquire() const {
return Atomic::load_acquire(&_parsable_bottom);
}
-inline void HeapRegion::reset_parsable_bottom() {
+inline void G1HeapRegion::reset_parsable_bottom() {
Atomic::release_store(&_parsable_bottom, bottom());
}
-inline void HeapRegion::note_end_of_marking(HeapWord* top_at_mark_start, size_t marked_bytes) {
+inline void G1HeapRegion::note_end_of_marking(HeapWord* top_at_mark_start, size_t marked_bytes) {
assert_at_safepoint();
if (top_at_mark_start != bottom()) {
@@ -287,23 +287,23 @@ inline void HeapRegion::note_end_of_marking(HeapWord* top_at_mark_start, size_t
}
}
-inline void HeapRegion::note_end_of_scrubbing() {
+inline void G1HeapRegion::note_end_of_scrubbing() {
reset_parsable_bottom();
}
-inline bool HeapRegion::needs_scrubbing() const {
+inline bool G1HeapRegion::needs_scrubbing() const {
return is_old();
}
-inline bool HeapRegion::in_collection_set() const {
+inline bool G1HeapRegion::in_collection_set() const {
return G1CollectedHeap::heap()->is_in_cset(this);
}
template <class Closure>
-HeapWord* HeapRegion::do_oops_on_memregion_in_humongous(MemRegion mr,
+HeapWord* G1HeapRegion::do_oops_on_memregion_in_humongous(MemRegion mr,
Closure* cl) {
assert(is_humongous(), "precondition");
- HeapRegion* sr = humongous_start_region();
+ G1HeapRegion* sr = humongous_start_region();
oop obj = cast_to_oop(sr->bottom());
// If concurrent and klass_or_null is null, then space has been
@@ -342,7 +342,7 @@ HeapWord* HeapRegion::do_oops_on_memregion_in_humongous(MemRegion mr,
}
template <class Closure>
-inline HeapWord* HeapRegion::oops_on_memregion_iterate_in_unparsable(MemRegion mr, HeapWord* block_start, Closure* cl) {
+inline HeapWord* G1HeapRegion::oops_on_memregion_iterate_in_unparsable(MemRegion mr, HeapWord* block_start, Closure* cl) {
HeapWord* const start = mr.start();
HeapWord* const end = mr.end();
@@ -387,7 +387,7 @@ inline HeapWord* HeapRegion::oops_on_memregion_iterate_in_unparsable(MemRegion m
// we expect that the amount of GCs executed during scrubbing is very low so such
// tests would be unnecessary almost all the time.
template <class Closure>
-inline HeapWord* HeapRegion::oops_on_memregion_iterate(MemRegion mr, Closure* cl) {
+inline HeapWord* G1HeapRegion::oops_on_memregion_iterate(MemRegion mr, Closure* cl) {
// Cache the boundaries of the memory region in some const locals
HeapWord* const start = mr.start();
HeapWord* const end = mr.end();
@@ -451,7 +451,7 @@ inline HeapWord* HeapRegion::oops_on_memregion_iterate(MemRegion mr, Closure* cl
}
template <class Closure>
-HeapWord* HeapRegion::oops_on_memregion_seq_iterate_careful(MemRegion mr,
+HeapWord* G1HeapRegion::oops_on_memregion_seq_iterate_careful(MemRegion mr,
Closure* cl) {
assert(MemRegion(bottom(), top()).contains(mr), "Card region not in heap region");
@@ -472,26 +472,26 @@ HeapWord* HeapRegion::oops_on_memregion_seq_iterate_careful(MemRegion mr,
return oops_on_memregion_iterate(mr, cl);
}
-inline uint HeapRegion::age_in_surv_rate_group() const {
+inline uint G1HeapRegion::age_in_surv_rate_group() const {
assert(has_surv_rate_group(), "pre-condition");
assert(has_valid_age_in_surv_rate(), "pre-condition");
return _surv_rate_group->age_in_group(_age_index);
}
-inline bool HeapRegion::has_valid_age_in_surv_rate() const {
+inline bool G1HeapRegion::has_valid_age_in_surv_rate() const {
return _surv_rate_group->is_valid_age_index(_age_index);
}
-inline bool HeapRegion::has_surv_rate_group() const {
+inline bool G1HeapRegion::has_surv_rate_group() const {
return _surv_rate_group != nullptr;
}
-inline double HeapRegion::surv_rate_prediction(G1Predictions const& predictor) const {
+inline double G1HeapRegion::surv_rate_prediction(G1Predictions const& predictor) const {
assert(has_surv_rate_group(), "pre-condition");
return _surv_rate_group->surv_rate_pred(predictor, age_in_surv_rate_group());
}
-inline void HeapRegion::install_surv_rate_group(G1SurvRateGroup* surv_rate_group) {
+inline void G1HeapRegion::install_surv_rate_group(G1SurvRateGroup* surv_rate_group) {
assert(surv_rate_group != nullptr, "pre-condition");
assert(!has_surv_rate_group(), "pre-condition");
assert(is_young(), "pre-condition");
@@ -500,7 +500,7 @@ inline void HeapRegion::install_surv_rate_group(G1SurvRateGroup* surv_rate_group
_age_index = surv_rate_group->next_age_index();
}
-inline void HeapRegion::uninstall_surv_rate_group() {
+inline void G1HeapRegion::uninstall_surv_rate_group() {
if (has_surv_rate_group()) {
assert(has_valid_age_in_surv_rate(), "pre-condition");
assert(is_young(), "pre-condition");
@@ -512,12 +512,12 @@ inline void HeapRegion::uninstall_surv_rate_group() {
}
}
-inline void HeapRegion::record_surv_words_in_group(size_t words_survived) {
+inline void G1HeapRegion::record_surv_words_in_group(size_t words_survived) {
uint age = age_in_surv_rate_group();
_surv_rate_group->record_surviving_words(age, words_survived);
}
-inline void HeapRegion::add_pinned_object_count(size_t value) {
+inline void G1HeapRegion::add_pinned_object_count(size_t value) {
assert(value != 0, "wasted effort");
assert(!is_free(), "trying to pin free region %u, adding %zu", hrm_index(), value);
Atomic::add(&_pinned_object_count, value, memory_order_relaxed);
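
The header comments earlier in this patch distinguish allocate_impl (callers synchronize externally) from par_allocate_impl (synchronizes with concurrent callers); in G1 that synchronization is a compare-and-swap retry loop on the region's top pointer. A self-contained sketch of the lock-free bump-pointer pattern, using std::atomic in place of HotSpot's Atomic and a byte arena in place of a heap region (illustrative only, field names differ from the real code):

#include <algorithm>
#include <atomic>
#include <cstddef>

// Lock-free bump-pointer allocation over a [top, end) byte range,
// the same retry-on-CAS-failure shape par_allocate_impl has.
struct Arena {
  char* end;
  std::atomic<char*> top;

  // Returns the start of the block on success, nullptr if too full.
  char* par_allocate(std::size_t min_bytes, std::size_t desired_bytes,
                     std::size_t* actual_bytes) {
    char* old_top = top.load(std::memory_order_relaxed);
    while (true) {
      std::size_t available = static_cast<std::size_t>(end - old_top);
      std::size_t to_alloc = std::min(available, desired_bytes);
      if (to_alloc < min_bytes) {
        return nullptr;  // cannot satisfy even the minimum request
      }
      // On CAS failure old_top is reloaded and the computation retried.
      if (top.compare_exchange_weak(old_top, old_top + to_alloc)) {
        *actual_bytes = to_alloc;
        return old_top;
      }
    }
  }
};
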
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp b/src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp
index c3c7b94e287b7..7ac688641cdf5 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionAttr.hpp
@@ -121,7 +121,7 @@ struct G1HeapRegionAttr {
// processing on it.
//
// This means that this does NOT completely correspond to the information stored
-// in a HeapRegion, but only to what is interesting for the current young collection.
+// in a G1HeapRegion, but only to what is interesting for the current young collection.
class G1HeapRegionAttrBiasedMappedArray : public G1BiasedMappedArray<G1HeapRegionAttr> {
protected:
G1HeapRegionAttr default_value() const { return G1HeapRegionAttr(G1HeapRegionAttr::NotInCSet); }
@@ -180,10 +180,10 @@ class G1HeapRegionAttrBiasedMappedArray : public G1BiasedMappedArray<G1HeapRegionAttr> {
- bool is_in_cset(const HeapRegion* hr) const { return get_by_index(hr->hrm_index()).is_in_cset(); }
+ bool is_in_cset(const G1HeapRegion* hr) const { return get_by_index(hr->hrm_index()).is_in_cset(); }
G1HeapRegionAttr at(HeapWord* addr) const { return get_by_address(addr); }
void clear() { G1BiasedMappedArray::clear(); }
- void clear(const HeapRegion* hr) { return set_by_index(hr->hrm_index(), G1HeapRegionAttr(G1HeapRegionAttr::NotInCSet)); }
+ void clear(const G1HeapRegion* hr) { return set_by_index(hr->hrm_index(), G1HeapRegionAttr(G1HeapRegionAttr::NotInCSet)); }
};
#endif // SHARE_GC_G1_G1HEAPREGIONATTR_HPP
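
The attribute table above is a G1BiasedMappedArray, which resolves a heap address to its per-region entry with a subtract-and-shift; G1 region sizes are powers of two, so the shift is the log2 of the region size. A sketch of that indexing arithmetic (illustrative names; the real array biases its base pointer so the subtraction disappears from the hot path):

#include <cstddef>
#include <cstdint>
#include <vector>

// Per-region attribute table addressed by heap address.
// Assumes the region size is a power of two, as G1 guarantees.
struct RegionAttrTable {
  uintptr_t heap_base;     // bottom of the reserved heap
  unsigned region_shift;   // log2(region size in bytes)
  std::vector<int> attr;   // one entry per region

  int& at_address(uintptr_t addr) {
    // Drop the in-region offset bits to get the region index.
    std::size_t index = (addr - heap_base) >> region_shift;
    return attr[index];
  }
};
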
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionEventSender.cpp b/src/hotspot/share/gc/g1/g1HeapRegionEventSender.cpp
index 5dce9b2aa909e..6f01524a14b61 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionEventSender.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionEventSender.cpp
@@ -32,7 +32,7 @@
class DumpEventInfoClosure : public HeapRegionClosure {
public:
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
EventG1HeapRegionInformation evt;
evt.set_index(r->hrm_index());
evt.set_type(r->get_trace_type());
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionManager.cpp b/src/hotspot/share/gc/g1/g1HeapRegionManager.cpp
index 0fb8e7499d68f..2cf6a4088d3f5 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionManager.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionManager.cpp
@@ -29,6 +29,7 @@
#include "gc/g1/g1CommittedRegionMap.inline.hpp"
#include "gc/g1/g1HeapRegion.hpp"
#include "gc/g1/g1HeapRegionManager.inline.hpp"
+#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1HeapRegionSet.inline.hpp"
#include "gc/g1/g1NUMAStats.hpp"
#include "jfr/jfrEvents.hpp"
@@ -57,7 +58,7 @@ class MasterFreeRegionListChecker : public HeapRegionSetChecker {
guarantee(Heap_lock->owned_by_self(), "master free list MT safety protocol outside a safepoint");
}
}
- bool is_correct_type(HeapRegion* hr) { return hr->is_free(); }
+ bool is_correct_type(G1HeapRegion* hr) { return hr->is_free(); }
const char* get_description() { return "Free Regions"; }
};
@@ -84,13 +85,13 @@ void HeapRegionManager::initialize(G1RegionToSpaceMapper* heap_storage,
_bot_mapper = bot;
_cardtable_mapper = cardtable;
- _regions.initialize(heap_storage->reserved(), HeapRegion::GrainBytes);
+ _regions.initialize(heap_storage->reserved(), G1HeapRegion::GrainBytes);
_committed_map.initialize(reserved_length());
}
-HeapRegion* HeapRegionManager::allocate_free_region(HeapRegionType type, uint requested_node_index) {
- HeapRegion* hr = nullptr;
+G1HeapRegion* HeapRegionManager::allocate_free_region(HeapRegionType type, uint requested_node_index) {
+ G1HeapRegion* hr = nullptr;
bool from_head = !type.is_young();
G1NUMA* numa = G1NUMA::numa();
@@ -117,7 +118,7 @@ HeapRegion* HeapRegionManager::allocate_free_region(HeapRegionType type, uint re
return hr;
}
-HeapRegion* HeapRegionManager::allocate_humongous_from_free_list(uint num_regions) {
+G1HeapRegion* HeapRegionManager::allocate_humongous_from_free_list(uint num_regions) {
uint candidate = find_contiguous_in_free_list(num_regions);
if (candidate == G1_NO_HRM_INDEX) {
return nullptr;
@@ -125,7 +126,7 @@ HeapRegion* HeapRegionManager::allocate_humongous_from_free_list(uint num_region
return allocate_free_regions_starting_at(candidate, num_regions);
}
-HeapRegion* HeapRegionManager::allocate_humongous_allow_expand(uint num_regions) {
+G1HeapRegion* HeapRegionManager::allocate_humongous_allow_expand(uint num_regions) {
uint candidate = find_contiguous_allow_expand(num_regions);
if (candidate == G1_NO_HRM_INDEX) {
return nullptr;
@@ -134,7 +135,7 @@ HeapRegion* HeapRegionManager::allocate_humongous_allow_expand(uint num_regions)
return allocate_free_regions_starting_at(candidate, num_regions);
}
-HeapRegion* HeapRegionManager::allocate_humongous(uint num_regions) {
+G1HeapRegion* HeapRegionManager::allocate_humongous(uint num_regions) {
// Special case a single region to avoid expensive search.
if (num_regions == 1) {
return allocate_free_region(HeapRegionType::Humongous, G1NUMA::AnyNodeIndex);
@@ -142,20 +143,20 @@ HeapRegion* HeapRegionManager::allocate_humongous(uint num_regions) {
return allocate_humongous_from_free_list(num_regions);
}
-HeapRegion* HeapRegionManager::expand_and_allocate_humongous(uint num_regions) {
+G1HeapRegion* HeapRegionManager::expand_and_allocate_humongous(uint num_regions) {
return allocate_humongous_allow_expand(num_regions);
}
#ifdef ASSERT
-bool HeapRegionManager::is_free(HeapRegion* hr) const {
+bool HeapRegionManager::is_free(G1HeapRegion* hr) const {
return _free_list.contains(hr);
}
#endif
-HeapRegion* HeapRegionManager::new_heap_region(uint hrm_index) {
+G1HeapRegion* HeapRegionManager::new_heap_region(uint hrm_index) {
G1CollectedHeap* g1h = G1CollectedHeap::heap();
HeapWord* bottom = g1h->bottom_addr_for_region(hrm_index);
- MemRegion mr(bottom, bottom + HeapRegion::GrainWords);
+ MemRegion mr(bottom, bottom + G1HeapRegion::GrainWords);
assert(reserved().contains(mr), "invariant");
return g1h->new_heap_region(hrm_index, mr);
}
@@ -163,14 +164,14 @@ HeapRegion* HeapRegionManager::new_heap_region(uint hrm_index) {
void HeapRegionManager::expand(uint start, uint num_regions, WorkerThreads* pretouch_workers) {
commit_regions(start, num_regions, pretouch_workers);
for (uint i = start; i < start + num_regions; i++) {
- HeapRegion* hr = _regions.get_by_index(i);
+ G1HeapRegion* hr = _regions.get_by_index(i);
if (hr == nullptr) {
hr = new_heap_region(i);
OrderAccess::storestore();
_regions.set_by_index(i, hr);
_allocated_heapregions_length = MAX2(_allocated_heapregions_length, i + 1);
}
- G1CollectedHeap::heap()->hr_printer()->commit(hr);
+ G1HeapRegionPrinter::commit(hr);
}
activate_regions(start, num_regions);
}
@@ -193,13 +194,12 @@ void HeapRegionManager::uncommit_regions(uint start, uint num_regions) {
guarantee(num_regions > 0, "No point in calling this for zero regions");
uint end = start + num_regions;
- G1HRPrinter* printer = G1CollectedHeap::heap()->hr_printer();
- if (printer->is_active()) {
+ if (G1HeapRegionPrinter::is_active()) {
for (uint i = start; i < end; i++) {
// Can't use at() here since region is no longer marked available.
- HeapRegion* hr = _regions.get_by_index(i);
+ G1HeapRegion* hr = _regions.get_by_index(i);
assert(hr != nullptr, "Region should still be present");
- printer->uncommit(hr);
+ G1HeapRegionPrinter::uncommit(hr);
}
}
@@ -218,12 +218,12 @@ void HeapRegionManager::uncommit_regions(uint start, uint num_regions) {
void HeapRegionManager::initialize_regions(uint start, uint num_regions) {
for (uint i = start; i < start + num_regions; i++) {
assert(is_available(i), "Just made region %u available but is apparently not.", i);
- HeapRegion* hr = at(i);
+ G1HeapRegion* hr = at(i);
hr->initialize();
hr->set_node_index(G1NUMA::numa()->index_for_region(hr));
insert_into_free_list(hr);
- G1CollectedHeap::heap()->hr_printer()->active(hr);
+ G1HeapRegionPrinter::active(hr);
}
}
@@ -248,9 +248,9 @@ void HeapRegionManager::deactivate_regions(uint start, uint num_regions) {
// Reset NUMA index to and print state change.
uint end = start + num_regions;
for (uint i = start; i < end; i++) {
- HeapRegion* hr = at(i);
+ G1HeapRegion* hr = at(i);
hr->set_node_index(G1NUMA::UnknownNodeIndex);
- G1CollectedHeap::heap()->hr_printer()->inactive(hr);
+ G1HeapRegionPrinter::inactive(hr);
}
_committed_map.deactivate(start, end);
@@ -430,7 +430,7 @@ void HeapRegionManager::assert_contiguous_range(uint start, uint num_regions) {
// General sanity check, regions found should either be available and empty
// or not available so that we can make them available and use them.
for (uint i = start; i < (start + num_regions); i++) {
- HeapRegion* hr = _regions.get_by_index(i);
+ G1HeapRegion* hr = _regions.get_by_index(i);
assert(!is_available(i) || hr->is_free(),
"Found region sequence starting at " UINT32_FORMAT ", length " UINT32_FORMAT
" that is not free at " UINT32_FORMAT ". Hr is " PTR_FORMAT ", type is %s",
@@ -486,11 +486,11 @@ uint HeapRegionManager::find_contiguous_allow_expand(uint num_regions) {
return find_contiguous_in_range(0, reserved_length(), num_regions);
}
-HeapRegion* HeapRegionManager::next_region_in_heap(const HeapRegion* r) const {
+G1HeapRegion* HeapRegionManager::next_region_in_heap(const G1HeapRegion* r) const {
guarantee(r != nullptr, "Start region must be a valid region");
guarantee(is_available(r->hrm_index()), "Trying to iterate starting from region %u which is not in the heap", r->hrm_index());
for (uint i = r->hrm_index() + 1; i < _allocated_heapregions_length; i++) {
- HeapRegion* hr = _regions.get_by_index(i);
+ G1HeapRegion* hr = _regions.get_by_index(i);
if (is_available(i)) {
return hr;
}
@@ -505,7 +505,7 @@ void HeapRegionManager::iterate(HeapRegionClosure* blk) const {
if (!is_available(i)) {
continue;
}
- guarantee(at(i) != nullptr, "Tried to access region %u that has a null HeapRegion*", i);
+ guarantee(at(i) != nullptr, "Tried to access region %u that has a null G1HeapRegion*", i);
bool res = blk->do_heap_region(at(i));
if (res) {
blk->set_incomplete();
@@ -534,7 +534,7 @@ uint HeapRegionManager::find_highest_free(bool* expanded) {
// entry which is either free or not yet committed. If not yet
// committed, expand at that index.
for (uint curr = reserved_length(); curr-- > 0;) {
- HeapRegion *hr = _regions.get_by_index(curr);
+ G1HeapRegion* hr = _regions.get_by_index(curr);
if (hr == nullptr || !is_available(curr)) {
// Found uncommitted and free region, expand to make it available for use.
expand_exact(curr, 1, nullptr);
@@ -563,7 +563,7 @@ bool HeapRegionManager::allocate_containing_regions(MemRegion range, size_t* com
commits++;
expand_exact(curr_index, 1, pretouch_workers);
}
- HeapRegion* curr_region = _regions.get_by_index(curr_index);
+ G1HeapRegion* curr_region = _regions.get_by_index(curr_index);
if (!curr_region->is_free()) {
return false;
}
@@ -587,7 +587,7 @@ void HeapRegionManager::par_iterate(HeapRegionClosure* blk, HeapRegionClaimer* h
if (!is_available(index)) {
continue;
}
- HeapRegion* r = _regions.get_by_index(index);
+ G1HeapRegion* r = _regions.get_by_index(index);
// We'll ignore regions already claimed.
if (hrclaimer->is_region_claimed(index)) {
continue;
@@ -699,7 +699,7 @@ void HeapRegionManager::verify() {
continue;
}
num_committed++;
- HeapRegion* hr = _regions.get_by_index(i);
+ G1HeapRegion* hr = _regions.get_by_index(i);
guarantee(hr != nullptr, "invariant: i: %u", i);
guarantee(!prev_committed || hr->bottom() == prev_end,
"invariant i: %u " HR_FORMAT " prev_end: " PTR_FORMAT,
@@ -800,9 +800,9 @@ class G1RebuildFreeListTask : public WorkerTask {
return;
}
- FreeRegionList *free_list = worker_freelist(worker_id);
+ FreeRegionList* free_list = worker_freelist(worker_id);
for (uint i = start; i < end; i++) {
- HeapRegion *region = _hrm->at_or_null(i);
+ G1HeapRegion* region = _hrm->at_or_null(i);
if (region != nullptr && region->is_free()) {
// Need to clear old links to allow to be added to new freelist.
region->unlink_from_list();
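
One detail worth noting in expand() above: a newly created G1HeapRegion is fully initialized before OrderAccess::storestore() and only then stored into the region table, so a concurrent reader that observes the pointer also observes the initialized fields behind it. The same publication idiom, modeled with C++11 release/acquire as a standalone sketch (not the HotSpot primitives):

#include <atomic>

struct RegionMeta {
  int index;
  // ... other fields initialized by the constructor ...
};

// One slot of a lazily populated metadata table.
std::atomic<RegionMeta*> slot{nullptr};

void publish(int i) {
  RegionMeta* m = new RegionMeta{i};         // initialize fully first,
  slot.store(m, std::memory_order_release);  // then publish the pointer
}

RegionMeta* lookup() {
  // Acquire pairs with the release above: a reader that sees the
  // pointer also sees the initialized fields behind it.
  return slot.load(std::memory_order_acquire);
}
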
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionManager.hpp b/src/hotspot/share/gc/g1/g1HeapRegionManager.hpp
index 8ecb5d6d66f64..3162cc39a9938 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionManager.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionManager.hpp
@@ -32,19 +32,19 @@
#include "memory/allocation.hpp"
#include "services/memoryUsage.hpp"
-class HeapRegion;
+class G1HeapRegion;
class HeapRegionClosure;
class HeapRegionClaimer;
class FreeRegionList;
class WorkerThreads;
-class G1HeapRegionTable : public G1BiasedMappedArray<HeapRegion*> {
+class G1HeapRegionTable : public G1BiasedMappedArray<G1HeapRegion*> {
protected:
- virtual HeapRegion* default_value() const { return nullptr; }
+ virtual G1HeapRegion* default_value() const { return nullptr; }
};
// This class keeps track of the actual heap memory, auxiliary data
-// and its metadata (i.e., HeapRegion instances) and the list of free regions.
+// and its metadata (i.e., G1HeapRegion instances) and the list of free regions.
//
// This allows maximum flexibility for deciding what to commit or uncommit given
// a request from outside.
@@ -55,9 +55,9 @@ class G1HeapRegionTable : public G1BiasedMappedArray<HeapRegion*> {
// regions that are consecutive in the array should also be adjacent in the
// address space (i.e., region(i).end() == region(i+1).bottom()).
//
-// We create a HeapRegion when we commit the region's address space
+// We create a G1HeapRegion when we commit the region's address space
// for the first time. When we uncommit the address space of a
-// region we retain the HeapRegion to be able to re-use it in the
+// region we retain the G1HeapRegion to be able to re-use it in the
// future (in case we recommit it).
//
// We keep track of four lengths:
@@ -81,7 +81,7 @@ class HeapRegionManager: public CHeapObj<mtGC> {
// can either be active (ready for use) or inactive (ready for uncommit).
G1CommittedRegionMap _committed_map;
- // Internal only. The highest heap region +1 we allocated a HeapRegion instance for.
+ // Internal only. The highest heap region +1 we allocated a G1HeapRegion instance for.
uint _allocated_heapregions_length;
HeapWord* heap_bottom() const { return _regions.bottom_address_mapped(); }
@@ -137,12 +137,12 @@ class HeapRegionManager: public CHeapObj<mtGC> {
void reactivate_regions(uint start, uint num_regions);
void uncommit_regions(uint start, uint num_regions);
- // Allocate a new HeapRegion for the given index.
- HeapRegion* new_heap_region(uint hrm_index);
+ // Allocate a new G1HeapRegion for the given index.
+ G1HeapRegion* new_heap_region(uint hrm_index);
// Humongous allocation helpers
- HeapRegion* allocate_humongous_from_free_list(uint num_regions);
- HeapRegion* allocate_humongous_allow_expand(uint num_regions);
+ G1HeapRegion* allocate_humongous_from_free_list(uint num_regions);
+ G1HeapRegion* allocate_humongous_allow_expand(uint num_regions);
// Expand helper for cases when the regions to expand are well defined.
void expand_exact(uint start, uint num_regions, WorkerThreads* pretouch_workers);
@@ -153,7 +153,7 @@ class HeapRegionManager: public CHeapObj<mtGC> {
#ifdef ASSERT
public:
- bool is_free(HeapRegion* hr) const;
+ bool is_free(G1HeapRegion* hr) const;
#endif
public:
// Empty constructor, we'll initialize it with the initialize() method.
@@ -165,32 +165,32 @@ class HeapRegionManager: public CHeapObj<mtGC> {
G1RegionToSpaceMapper* cardtable);
// Return the "dummy" region used for G1AllocRegion. This is currently a hardwired
- // new HeapRegion that owns HeapRegion at index 0. Since at the moment we commit
+ // new G1HeapRegion that owns G1HeapRegion at index 0. Since at the moment we commit
// the heap from the lowest address, this region (and its associated data
// structures) are available and we do not need to check further.
- HeapRegion* get_dummy_region() { return new_heap_region(0); }
+ G1HeapRegion* get_dummy_region() { return new_heap_region(0); }
- // Return the HeapRegion at the given index. Assume that the index
+ // Return the G1HeapRegion at the given index. Assume that the index
// is valid.
- inline HeapRegion* at(uint index) const;
+ inline G1HeapRegion* at(uint index) const;
- // Return the HeapRegion at the given index, null if the index
+ // Return the G1HeapRegion at the given index, null if the index
// is for an unavailable region.
- inline HeapRegion* at_or_null(uint index) const;
+ inline G1HeapRegion* at_or_null(uint index) const;
// Returns whether the given region is available for allocation.
inline bool is_available(uint region) const;
// Return the next region (by index) that is part of the same
// humongous object that hr is part of.
- inline HeapRegion* next_region_in_humongous(HeapRegion* hr) const;
+ inline G1HeapRegion* next_region_in_humongous(G1HeapRegion* hr) const;
// If addr is within the committed space return its corresponding
- // HeapRegion, otherwise return null.
- inline HeapRegion* addr_to_region(HeapWord* addr) const;
+ // G1HeapRegion, otherwise return null.
+ inline G1HeapRegion* addr_to_region(HeapWord* addr) const;
// Insert the given region into the free region list.
- inline void insert_into_free_list(HeapRegion* hr);
+ inline void insert_into_free_list(G1HeapRegion* hr);
// Rebuild the free region list from scratch.
void rebuild_free_list(WorkerThreads* workers);
@@ -201,15 +201,15 @@ class HeapRegionManager: public CHeapObj<mtGC> {
}
// Allocate a free region with specific node index. If fails allocate with next node index.
- HeapRegion* allocate_free_region(HeapRegionType type, uint requested_node_index);
+ G1HeapRegion* allocate_free_region(HeapRegionType type, uint requested_node_index);
// Allocate a humongous object from the free list
- HeapRegion* allocate_humongous(uint num_regions);
+ G1HeapRegion* allocate_humongous(uint num_regions);
// Allocate a humongous object by expanding the heap
- HeapRegion* expand_and_allocate_humongous(uint num_regions);
+ G1HeapRegion* expand_and_allocate_humongous(uint num_regions);
- inline HeapRegion* allocate_free_regions_starting_at(uint first, uint num_regions);
+ inline G1HeapRegion* allocate_free_regions_starting_at(uint first, uint num_regions);
// Remove all regions from the free list.
void remove_all_free_regions() {
@@ -226,7 +226,7 @@ class HeapRegionManager: public CHeapObj<mtGC> {
}
size_t total_free_bytes() const {
- return num_free_regions() * HeapRegion::GrainBytes;
+ return num_free_regions() * G1HeapRegion::GrainBytes;
}
// Return the number of available (uncommitted) regions.
@@ -247,14 +247,14 @@ class HeapRegionManager: public CHeapObj<mtGC> {
// Expand the sequence to reflect that the heap has grown. Either create new
// HeapRegions, or re-use existing ones. Returns the number of regions the
- // sequence was expanded by. If a HeapRegion allocation fails, the resulting
+ // sequence was expanded by. If a G1HeapRegion allocation fails, the resulting
// number of regions might be smaller than what's desired.
uint expand_by(uint num_regions, WorkerThreads* pretouch_workers);
// Try to expand on the given node index, returning the index of the new region.
uint expand_on_preferred_node(uint node_index);
- HeapRegion* next_region_in_heap(const HeapRegion* r) const;
+ G1HeapRegion* next_region_in_heap(const G1HeapRegion* r) const;
// Find the highest free or uncommitted region in the reserved heap,
// and if uncommitted, commit it. If none are available, return G1_NO_HRM_INDEX.
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionManager.inline.hpp b/src/hotspot/share/gc/g1/g1HeapRegionManager.inline.hpp
index 9768ee358baae..0a74a22c52e1e 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionManager.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionManager.inline.hpp
@@ -35,7 +35,7 @@ inline bool HeapRegionManager::is_available(uint region) const {
return _committed_map.active(region);
}
-inline HeapRegion* HeapRegionManager::addr_to_region(HeapWord* addr) const {
+inline G1HeapRegion* HeapRegionManager::addr_to_region(HeapWord* addr) const {
assert(addr < heap_end(),
"addr: " PTR_FORMAT " end: " PTR_FORMAT, p2i(addr), p2i(heap_end()));
assert(addr >= heap_bottom(),
@@ -43,25 +43,25 @@ inline HeapRegion* HeapRegionManager::addr_to_region(HeapWord* addr) const {
return _regions.get_by_address(addr);
}
-inline HeapRegion* HeapRegionManager::at(uint index) const {
+inline G1HeapRegion* HeapRegionManager::at(uint index) const {
assert(is_available(index), "pre-condition");
- HeapRegion* hr = _regions.get_by_index(index);
+ G1HeapRegion* hr = _regions.get_by_index(index);
assert(hr != nullptr, "sanity");
assert(hr->hrm_index() == index, "sanity");
return hr;
}
-inline HeapRegion* HeapRegionManager::at_or_null(uint index) const {
+inline G1HeapRegion* HeapRegionManager::at_or_null(uint index) const {
if (!is_available(index)) {
return nullptr;
}
- HeapRegion* hr = _regions.get_by_index(index);
- assert(hr != nullptr, "All available regions must have a HeapRegion but index %u has not.", index);
+ G1HeapRegion* hr = _regions.get_by_index(index);
+ assert(hr != nullptr, "All available regions must have a G1HeapRegion but index %u has not.", index);
assert(hr->hrm_index() == index, "sanity");
return hr;
}
-inline HeapRegion* HeapRegionManager::next_region_in_humongous(HeapRegion* hr) const {
+inline G1HeapRegion* HeapRegionManager::next_region_in_humongous(G1HeapRegion* hr) const {
uint index = hr->hrm_index();
assert(is_available(index), "pre-condition");
assert(hr->is_humongous(), "next_region_in_humongous should only be called for a humongous region.");
@@ -73,12 +73,12 @@ inline HeapRegion* HeapRegionManager::next_region_in_humongous(HeapRegion* hr) c
}
}
-inline void HeapRegionManager::insert_into_free_list(HeapRegion* hr) {
+inline void HeapRegionManager::insert_into_free_list(G1HeapRegion* hr) {
_free_list.add_ordered(hr);
}
-inline HeapRegion* HeapRegionManager::allocate_free_regions_starting_at(uint first, uint num_regions) {
- HeapRegion* start = at(first);
+inline G1HeapRegion* HeapRegionManager::allocate_free_regions_starting_at(uint first, uint num_regions) {
+ G1HeapRegion* start = at(first);
_free_list.remove_starting_at(start, num_regions);
return start;
}
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionPrinter.hpp b/src/hotspot/share/gc/g1/g1HeapRegionPrinter.hpp
new file mode 100644
index 0000000000000..bd686f3809d05
--- /dev/null
+++ b/src/hotspot/share/gc/g1/g1HeapRegionPrinter.hpp
@@ -0,0 +1,77 @@
+/*
+ * Copyright (c) 2011, 2024, Oracle and/or its affiliates. All rights reserved.
+ * DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
+ *
+ * This code is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 only, as
+ * published by the Free Software Foundation.
+ *
+ * This code is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * version 2 for more details (a copy is included in the LICENSE file that
+ * accompanied this code).
+ *
+ * You should have received a copy of the GNU General Public License version
+ * 2 along with this work; if not, write to the Free Software Foundation,
+ * Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ * Please contact Oracle, 500 Oracle Parkway, Redwood Shores, CA 94065 USA
+ * or visit www.oracle.com if you need additional information or have any
+ * questions.
+ *
+ */
+
+#ifndef SHARE_GC_G1_G1HEAPREGIONPRINTER_HPP
+#define SHARE_GC_G1_G1HEAPREGIONPRINTER_HPP
+
+#include "gc/g1/g1HeapRegion.hpp"
+#include "logging/log.hpp"
+#include "memory/allStatic.hpp"
+
+class FreeRegionList;
+
+class G1HeapRegionPrinter : public AllStatic {
+
+ // Print an action event.
+ static void print(const char* action, G1HeapRegion* hr) {
+ log_trace(gc, region)("G1HR %s(%s) [" PTR_FORMAT ", " PTR_FORMAT ", " PTR_FORMAT "]",
+ action, hr->get_type_str(), p2i(hr->bottom()), p2i(hr->top()), p2i(hr->end()));
+ }
+
+public:
+ // In some places we iterate over a list in order to generate output
+ // for the list's elements. By exposing this we can avoid this
+ // iteration if the printer is not active.
+ static bool is_active() { return log_is_enabled(Trace, gc, region); }
+
+ // The methods below are convenient wrappers for the print() method.
+
+ static void alloc(G1HeapRegion* hr) { print("ALLOC", hr); }
+
+ static void retire(G1HeapRegion* hr) { print("RETIRE", hr); }
+
+ static void reuse(G1HeapRegion* hr) { print("REUSE", hr); }
+
+ static void cset(G1HeapRegion* hr) { print("CSET", hr); }
+
+ static void evac_failure(G1HeapRegion* hr) { print("EVAC-FAILURE", hr); }
+
+ static void mark_reclaim(G1HeapRegion* hr) { print("MARK-RECLAIM", hr); }
+
+ static void eager_reclaim(G1HeapRegion* hr) { print("EAGER-RECLAIM", hr); }
+
+ static void evac_reclaim(G1HeapRegion* hr) { print("EVAC-RECLAIM", hr); }
+
+ static void post_compaction(G1HeapRegion* hr) { print("POST-COMPACTION", hr); }
+
+ static void commit(G1HeapRegion* hr) { print("COMMIT", hr); }
+
+ static void active(G1HeapRegion* hr) { print("ACTIVE", hr); }
+
+ static void inactive(G1HeapRegion* hr) { print("INACTIVE", hr); }
+
+ static void uncommit(G1HeapRegion* hr) { print("UNCOMMIT", hr); }
+};
+
+#endif // SHARE_GC_G1_G1HEAPREGIONPRINTER_HPP
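
The is_active() guard is already put to use in uncommit_regions() earlier in this patch: the region loop is skipped entirely when trace logging for gc+region is disabled. Reduced to a standalone sketch (stand-in types, hard-wired log check):

#include <cstdio>
#include <vector>

struct Region {
  int index;
};

struct Printer {
  // Stand-in for log_is_enabled(Trace, gc, region).
  static bool is_active() { return false; }
  static void uncommit(const Region& r) {
    std::printf("G1HR UNCOMMIT %d\n", r.index);
  }
};

void uncommit_range(const std::vector<Region>& regions) {
  if (Printer::is_active()) {  // pay for the loop only when tracing
    for (const Region& r : regions) {
      Printer::uncommit(r);
    }
  }
  // ... the actual uncommit work proceeds regardless ...
}
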
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionRemSet.cpp b/src/hotspot/share/gc/g1/g1HeapRegionRemSet.cpp
index 17134f0da241a..2610fbde298e8 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionRemSet.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionRemSet.cpp
@@ -55,7 +55,7 @@ void HeapRegionRemSet::initialize(MemRegion reserved) {
_heap_base_address = reserved.start();
}
-HeapRegionRemSet::HeapRegionRemSet(HeapRegion* hr,
+HeapRegionRemSet::HeapRegionRemSet(G1HeapRegion* hr,
G1CardSetConfiguration* config) :
_code_roots(),
_card_set_mm(config, G1CollectedHeap::heap()->card_set_freelist_pool()),
@@ -123,7 +123,7 @@ void HeapRegionRemSet::code_roots_do(NMethodClosure* blk) const {
_code_roots.nmethods_do(blk);
}
-void HeapRegionRemSet::clean_code_roots(HeapRegion* hr) {
+void HeapRegionRemSet::clean_code_roots(G1HeapRegion* hr) {
_code_roots.clean(hr);
}
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionRemSet.hpp b/src/hotspot/share/gc/g1/g1HeapRegionRemSet.hpp
index fa7aee562c614..fb0a64ac0d18c 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionRemSet.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionRemSet.hpp
@@ -49,7 +49,7 @@ class HeapRegionRemSet : public CHeapObj<mtGC> {
// The set of cards in the Java heap
G1CardSet _card_set;
- HeapRegion* _hr;
+ G1HeapRegion* _hr;
// Cached value of heap base address.
static HeapWord* _heap_base_address;
@@ -57,7 +57,7 @@ class HeapRegionRemSet : public CHeapObj<mtGC> {
void clear_fcc();
public:
- HeapRegionRemSet(HeapRegion* hr, G1CardSetConfiguration* config);
+ HeapRegionRemSet(G1HeapRegion* hr, G1CardSetConfiguration* config);
bool cardset_is_empty() const {
return _card_set.is_empty();
@@ -155,7 +155,7 @@ class HeapRegionRemSet : public CHeapObj<mtGC> {
// Applies blk->do_nmethod() to each of the entries in _code_roots
void code_roots_do(NMethodClosure* blk) const;
// Clean out code roots not having an oop pointing into this region any more.
- void clean_code_roots(HeapRegion* hr);
+ void clean_code_roots(G1HeapRegion* hr);
// Returns the number of elements in _code_roots
size_t code_roots_list_length() const {
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionSet.cpp b/src/hotspot/share/gc/g1/g1HeapRegionSet.cpp
index 832770ddb4c42..eaf475aff34e3 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionSet.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionSet.cpp
@@ -31,7 +31,7 @@
uint FreeRegionList::_unrealistically_long_length = 0;
#ifndef PRODUCT
-void HeapRegionSetBase::verify_region(HeapRegion* hr) {
+void HeapRegionSetBase::verify_region(G1HeapRegion* hr) {
assert(hr->containing_set() == this, "Inconsistent containing set for %u", hr->hrm_index());
assert(!hr->is_young(), "Adding young region %u", hr->hrm_index()); // currently we don't use these sets for young regions
assert(_checker == nullptr || _checker->is_correct_type(hr), "Wrong type of region %u (%s) and set %s",
@@ -99,11 +99,11 @@ void FreeRegionList::remove_all() {
check_mt_safety();
verify_optional();
- HeapRegion* curr = _head;
+ G1HeapRegion* curr = _head;
while (curr != nullptr) {
verify_region(curr);
- HeapRegion* next = curr->next();
+ G1HeapRegion* next = curr->next();
curr->set_next(nullptr);
curr->set_prev(nullptr);
curr->set_containing_set(nullptr);
@@ -134,7 +134,7 @@ void FreeRegionList::add_list_common_start(FreeRegionList* from_list) {
#ifdef ASSERT
FreeRegionListIterator iter(from_list);
while (iter.more_available()) {
- HeapRegion* hr = iter.get_next();
+ G1HeapRegion* hr = iter.get_next();
// In set_containing_set() we check that we either set the value
// from null to non-null or vice versa to catch bugs. So, we have
// to null it first before setting it to the value.
@@ -189,8 +189,8 @@ void FreeRegionList::add_ordered(FreeRegionList* from_list) {
_head = from_list->_head;
_tail = from_list->_tail;
} else {
- HeapRegion* curr_to = _head;
- HeapRegion* curr_from = from_list->_head;
+ G1HeapRegion* curr_to = _head;
+ G1HeapRegion* curr_from = from_list->_head;
while (curr_from != nullptr) {
while (curr_to != nullptr && curr_to->hrm_index() < curr_from->hrm_index()) {
@@ -203,7 +203,7 @@ void FreeRegionList::add_ordered(FreeRegionList* from_list) {
curr_from->set_prev(_tail);
curr_from = nullptr;
} else {
- HeapRegion* next_from = curr_from->next();
+ G1HeapRegion* next_from = curr_from->next();
curr_from->set_next(curr_to);
curr_from->set_prev(curr_to->prev());
@@ -227,7 +227,7 @@ void FreeRegionList::add_ordered(FreeRegionList* from_list) {
}
#ifdef ASSERT
-void FreeRegionList::verify_region_to_remove(HeapRegion* curr, HeapRegion* next) {
+void FreeRegionList::verify_region_to_remove(G1HeapRegion* curr, G1HeapRegion* next) {
assert_free_region_list(_head != next, "invariant");
if (next != nullptr) {
assert_free_region_list(next->prev() == curr, "invariant");
@@ -235,7 +235,7 @@ void FreeRegionList::verify_region_to_remove(HeapRegion* curr, HeapRegion* next)
} else {
assert_free_region_list(_tail == curr, "invariant");
}
- HeapRegion* prev = curr->prev();
+ G1HeapRegion* prev = curr->prev();
if (prev == nullptr) {
assert_free_region_list(_head == curr, "invariant");
} else {
@@ -244,7 +244,7 @@ void FreeRegionList::verify_region_to_remove(HeapRegion* curr, HeapRegion* next)
}
#endif
-void FreeRegionList::remove_starting_at(HeapRegion* first, uint num_regions) {
+void FreeRegionList::remove_starting_at(G1HeapRegion* first, uint num_regions) {
check_mt_safety();
assert_free_region_list(num_regions >= 1, "pre-condition");
assert_free_region_list(!is_empty(), "pre-condition");
@@ -254,13 +254,13 @@ void FreeRegionList::remove_starting_at(HeapRegion* first, uint num_regions) {
DEBUG_ONLY(uint old_length = length();)
// prev points to the node right before first or null when first == _head
- HeapRegion* const prev = first->prev();
+ G1HeapRegion* const prev = first->prev();
// next points to the node right after first or null when first == _tail,
// and after the while loop below, next should point to the next node right
// after the removed sublist, or null if the sublist contains _tail.
- HeapRegion* next = first->next();
+ G1HeapRegion* next = first->next();
- HeapRegion* curr = first;
+ G1HeapRegion* curr = first;
uint count = 0;
while (count < num_regions) {
verify_region(curr);
@@ -329,9 +329,9 @@ void FreeRegionList::clear() {
}
void FreeRegionList::verify_list() {
- HeapRegion* curr = _head;
- HeapRegion* prev1 = nullptr;
- HeapRegion* prev0 = nullptr;
+ G1HeapRegion* curr = _head;
+ G1HeapRegion* prev1 = nullptr;
+ G1HeapRegion* prev0 = nullptr;
uint count = 0;
size_t capacity = 0;
uint last_index = 0;
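
add_ordered(FreeRegionList*) above splices one index-sorted list into another while keeping the result sorted by hrm_index. The core of that operation is an ordinary sorted-list merge; a compact sketch with singly linked nodes (the real list is doubly linked and also maintains length and NUMA bookkeeping):

#include <cstdint>

struct Node {
  uint32_t index;  // stands in for hrm_index()
  Node* next;
};

// Merge two index-sorted lists, returning the new head.
Node* merge_ordered(Node* to, Node* from) {
  Node dummy{0, nullptr};
  Node* tail = &dummy;
  while (to != nullptr && from != nullptr) {
    Node** smaller = (to->index < from->index) ? &to : &from;
    tail->next = *smaller;        // splice the smaller head node
    tail = tail->next;
    *smaller = (*smaller)->next;  // and advance that list
  }
  tail->next = (to != nullptr) ? to : from;  // append the leftovers
  return dummy.next;
}
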
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionSet.hpp b/src/hotspot/share/gc/g1/g1HeapRegionSet.hpp
index bfc60992c3d40..e401b38aadc94 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionSet.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionSet.hpp
@@ -53,8 +53,8 @@ class HeapRegionSetChecker : public CHeapObj<mtGC> {
public:
// Verify MT safety for this HeapRegionSet.
virtual void check_mt_safety() = 0;
- // Returns true if the given HeapRegion is of the correct type for this HeapRegionSet.
- virtual bool is_correct_type(HeapRegion* hr) = 0;
+ // Returns true if the given G1HeapRegion is of the correct type for this HeapRegionSet.
+ virtual bool is_correct_type(G1HeapRegion* hr) = 0;
// Return a description of the type of regions this HeapRegionSet contains.
virtual const char* get_description() = 0;
};
@@ -79,7 +79,7 @@ class HeapRegionSetBase {
// verify_region() is used to ensure that the contents of a region
// added to / removed from a set are consistent.
- void verify_region(HeapRegion* hr) PRODUCT_RETURN;
+ void verify_region(G1HeapRegion* hr) PRODUCT_RETURN;
void check_mt_safety() {
if (_checker != nullptr) {
@@ -98,15 +98,15 @@ class HeapRegionSetBase {
// It updates the fields of the set to reflect hr being added to
// the set and tags the region appropriately.
- inline void add(HeapRegion* hr);
+ inline void add(G1HeapRegion* hr);
// It updates the fields of the set to reflect hr being removed
// from the set and tags the region appropriately.
- inline void remove(HeapRegion* hr);
+ inline void remove(G1HeapRegion* hr);
virtual void verify();
void verify_start();
- void verify_next_region(HeapRegion* hr);
+ void verify_next_region(G1HeapRegion* hr);
void verify_end();
void verify_optional() { DEBUG_ONLY(verify();) }
@@ -163,19 +163,19 @@ class FreeRegionList : public HeapRegionSetBase {
void add(NodeInfo* info);
};
- HeapRegion* _head;
- HeapRegion* _tail;
+ G1HeapRegion* _head;
+ G1HeapRegion* _tail;
// _last is used to keep track of where we added an element the last
// time. It helps to improve performance when adding several ordered items in a row.
- HeapRegion* _last;
+ G1HeapRegion* _last;
NodeInfo* _node_info;
static uint _unrealistically_long_length;
- inline HeapRegion* remove_from_head_impl();
- inline HeapRegion* remove_from_tail_impl();
+ inline G1HeapRegion* remove_from_head_impl();
+ inline G1HeapRegion* remove_from_tail_impl();
inline void increase_length(uint node_index);
inline void decrease_length(uint node_index);
@@ -184,7 +184,7 @@ class FreeRegionList : public HeapRegionSetBase {
void add_list_common_start(FreeRegionList* from_list);
void add_list_common_end(FreeRegionList* from_list);
- void verify_region_to_remove(HeapRegion* curr, HeapRegion* next) NOT_DEBUG_RETURN;
+ void verify_region_to_remove(G1HeapRegion* curr, G1HeapRegion* next) NOT_DEBUG_RETURN;
protected:
// See the comment for HeapRegionSetBase::clear()
virtual void clear();
@@ -196,7 +196,7 @@ class FreeRegionList : public HeapRegionSetBase {
void verify_list();
#ifdef ASSERT
- bool contains(HeapRegion* hr) const {
+ bool contains(G1HeapRegion* hr) const {
return hr->containing_set() == this;
}
#endif
@@ -206,14 +206,14 @@ class FreeRegionList : public HeapRegionSetBase {
// Add hr to the list. The region should not be a member of another set.
// Assumes that the list is ordered and will preserve that order. The order
// is determined by hrm_index.
- inline void add_ordered(HeapRegion* hr);
+ inline void add_ordered(G1HeapRegion* hr);
// Same restrictions as above, but adds the region last in the list.
- inline void add_to_tail(HeapRegion* region_to_add);
+ inline void add_to_tail(G1HeapRegion* region_to_add);
// Removes from head or tail based on the given argument.
- HeapRegion* remove_region(bool from_head);
+ G1HeapRegion* remove_region(bool from_head);
- HeapRegion* remove_region_with_node_index(bool from_head,
+ G1HeapRegion* remove_region_with_node_index(bool from_head,
uint requested_node_index);
// Merge two ordered lists. The result is also ordered. The order is
@@ -231,7 +231,7 @@ class FreeRegionList : public HeapRegionSetBase {
// Remove all (contiguous) regions from first to first + num_regions -1 from
// this list.
// Num_regions must be >= 1.
- void remove_starting_at(HeapRegion* first, uint num_regions);
+ void remove_starting_at(G1HeapRegion* first, uint num_regions);
virtual void verify();
@@ -245,21 +245,21 @@ class FreeRegionList : public HeapRegionSetBase {
class FreeRegionListIterator : public StackObj {
private:
FreeRegionList* _list;
- HeapRegion* _curr;
+ G1HeapRegion* _curr;
public:
bool more_available() {
return _curr != nullptr;
}
- HeapRegion* get_next() {
+ G1HeapRegion* get_next() {
assert(more_available(),
"get_next() should be called when more regions are available");
// If we are going to introduce a count in the iterator we should
// do the "cycle" check.
- HeapRegion* hr = _curr;
+ G1HeapRegion* hr = _curr;
_list->verify_region(hr);
_curr = hr->next();
return hr;
diff --git a/src/hotspot/share/gc/g1/g1HeapRegionSet.inline.hpp b/src/hotspot/share/gc/g1/g1HeapRegionSet.inline.hpp
index fbd19466d9f9a..f0a38a4175436 100644
--- a/src/hotspot/share/gc/g1/g1HeapRegionSet.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapRegionSet.inline.hpp
@@ -29,7 +29,7 @@
#include "gc/g1/g1NUMA.hpp"
-inline void HeapRegionSetBase::add(HeapRegion* hr) {
+inline void HeapRegionSetBase::add(G1HeapRegion* hr) {
check_mt_safety();
assert_heap_region_set(hr->containing_set() == nullptr, "should not already have a containing set");
assert_heap_region_set(hr->next() == nullptr, "should not already be linked");
@@ -40,7 +40,7 @@ inline void HeapRegionSetBase::add(HeapRegion* hr) {
verify_region(hr);
}
-inline void HeapRegionSetBase::remove(HeapRegion* hr) {
+inline void HeapRegionSetBase::remove(G1HeapRegion* hr) {
check_mt_safety();
verify_region(hr);
assert_heap_region_set(hr->next() == nullptr, "should already be unlinked");
@@ -51,7 +51,7 @@ inline void HeapRegionSetBase::remove(HeapRegion* hr) {
_length--;
}
-inline void FreeRegionList::add_to_tail(HeapRegion* region_to_add) {
+inline void FreeRegionList::add_to_tail(G1HeapRegion* region_to_add) {
assert_free_region_list((length() == 0 && _head == nullptr && _tail == nullptr && _last == nullptr) ||
(length() > 0 && _head != nullptr && _tail != nullptr && _tail->hrm_index() < region_to_add->hrm_index()),
"invariant");
@@ -71,7 +71,7 @@ inline void FreeRegionList::add_to_tail(HeapRegion* region_to_add) {
increase_length(region_to_add->node_index());
}
-inline void FreeRegionList::add_ordered(HeapRegion* hr) {
+inline void FreeRegionList::add_ordered(G1HeapRegion* hr) {
assert_free_region_list((length() == 0 && _head == nullptr && _tail == nullptr && _last == nullptr) ||
(length() > 0 && _head != nullptr && _tail != nullptr),
"invariant");
@@ -80,7 +80,7 @@ inline void FreeRegionList::add_ordered(HeapRegion* hr) {
// Now link the region
if (_head != nullptr) {
- HeapRegion* curr;
+ G1HeapRegion* curr;
if (_last != nullptr && _last->hrm_index() < hr->hrm_index()) {
curr = _last;
@@ -120,8 +120,8 @@ inline void FreeRegionList::add_ordered(HeapRegion* hr) {
increase_length(hr->node_index());
}
-inline HeapRegion* FreeRegionList::remove_from_head_impl() {
- HeapRegion* result = _head;
+inline G1HeapRegion* FreeRegionList::remove_from_head_impl() {
+ G1HeapRegion* result = _head;
_head = result->next();
if (_head == nullptr) {
_tail = nullptr;
@@ -132,8 +132,8 @@ inline HeapRegion* FreeRegionList::remove_from_head_impl() {
return result;
}
-inline HeapRegion* FreeRegionList::remove_from_tail_impl() {
- HeapRegion* result = _tail;
+inline G1HeapRegion* FreeRegionList::remove_from_tail_impl() {
+ G1HeapRegion* result = _tail;
_tail = result->prev();
if (_tail == nullptr) {
@@ -145,7 +145,7 @@ inline HeapRegion* FreeRegionList::remove_from_tail_impl() {
return result;
}
-inline HeapRegion* FreeRegionList::remove_region(bool from_head) {
+inline G1HeapRegion* FreeRegionList::remove_region(bool from_head) {
check_mt_safety();
verify_optional();
@@ -154,7 +154,7 @@ inline HeapRegion* FreeRegionList::remove_region(bool from_head) {
}
assert_free_region_list(length() > 0 && _head != nullptr && _tail != nullptr, "invariant");
- HeapRegion* hr;
+ G1HeapRegion* hr;
if (from_head) {
hr = remove_from_head_impl();
@@ -174,12 +174,12 @@ inline HeapRegion* FreeRegionList::remove_region(bool from_head) {
return hr;
}
-inline HeapRegion* FreeRegionList::remove_region_with_node_index(bool from_head,
+inline G1HeapRegion* FreeRegionList::remove_region_with_node_index(bool from_head,
uint requested_node_index) {
assert(UseNUMA, "Invariant");
const uint max_search_depth = G1NUMA::numa()->max_search_depth();
- HeapRegion* cur;
+ G1HeapRegion* cur;
// Find the region to use, searching from _head or _tail as requested.
size_t cur_depth = 0;
@@ -207,8 +207,8 @@ inline HeapRegion* FreeRegionList::remove_region_with_node_index(bool from_head,
}
// Splice the region out of the list.
- HeapRegion* prev = cur->prev();
- HeapRegion* next = cur->next();
+ G1HeapRegion* prev = cur->prev();
+ G1HeapRegion* next = cur->next();
if (prev == nullptr) {
_head = next;
} else {
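
remove_region_with_node_index() above bounds its NUMA-aware search: it inspects at most max_search_depth nodes, from the head or the tail as requested, looking for a region whose memory lives on the wanted node, then splices it out of the doubly linked list. A sketch of the head-side case (illustrative types; the tail-side walk is symmetric):

#include <cstdint>

struct RNode {
  uint32_t node_index;  // NUMA node this region's memory lives on
  RNode* prev;
  RNode* next;
};

RNode* remove_with_node_index(RNode*& head, RNode*& tail,
                              uint32_t wanted, int max_depth) {
  int depth = 0;
  for (RNode* cur = head; cur != nullptr && depth < max_depth;
       cur = cur->next, ++depth) {
    if (cur->node_index != wanted) {
      continue;
    }
    // Splice cur out of the doubly linked list.
    (cur->prev != nullptr ? cur->prev->next : head) = cur->next;
    (cur->next != nullptr ? cur->next->prev : tail) = cur->prev;
    cur->prev = cur->next = nullptr;
    return cur;
  }
  return nullptr;  // nothing suitable within the search budget
}
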
diff --git a/src/hotspot/share/gc/g1/g1HeapSizingPolicy.cpp b/src/hotspot/share/gc/g1/g1HeapSizingPolicy.cpp
index d340ed646b410..a7b08a6a5c9b5 100644
--- a/src/hotspot/share/gc/g1/g1HeapSizingPolicy.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapSizingPolicy.cpp
@@ -123,7 +123,7 @@ size_t G1HeapSizingPolicy::young_collection_expansion_amount() {
bool filled_history_buffer = _pauses_since_start == _num_prev_pauses_for_heuristics;
if ((_ratio_over_threshold_count == MinOverThresholdForGrowth) ||
(filled_history_buffer && (long_term_pause_time_ratio > threshold))) {
- size_t min_expand_bytes = HeapRegion::GrainBytes;
+ size_t min_expand_bytes = G1HeapRegion::GrainBytes;
size_t reserved_bytes = _g1h->max_capacity();
size_t committed_bytes = _g1h->capacity();
size_t uncommitted_bytes = reserved_bytes - committed_bytes;
@@ -224,7 +224,7 @@ size_t G1HeapSizingPolicy::full_collection_resize_amount(bool& expand) {
// GC where eden is empty. During Remark there can be an
// arbitrary number of eden regions which would skew the
// results.
- _g1h->eden_regions_count() * HeapRegion::GrainBytes;
+ _g1h->eden_regions_count() * G1HeapRegion::GrainBytes;
size_t minimum_desired_capacity = target_heap_capacity(used_after_gc, MinHeapFreeRatio);
size_t maximum_desired_capacity = target_heap_capacity(used_after_gc, MaxHeapFreeRatio);
diff --git a/src/hotspot/share/gc/g1/g1HeapTransition.cpp b/src/hotspot/share/gc/g1/g1HeapTransition.cpp
index 4f759b4a6069e..8804d37cc719a 100644
--- a/src/hotspot/share/gc/g1/g1HeapTransition.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapTransition.cpp
@@ -82,7 +82,7 @@ struct G1HeapTransition::DetailedUsage : public StackObj {
class G1HeapTransition::DetailedUsageClosure: public HeapRegionClosure {
public:
DetailedUsage _usage;
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
if (r->is_old()) {
_usage._old_used += r->used();
_usage._old_region_count++;
@@ -157,17 +157,17 @@ void G1HeapTransition::print() {
log_regions("Survivor", _before._survivor_length, after._survivor_length, survivor_capacity_length_before_gc,
_before._survivor_length_per_node, after._survivor_length_per_node);
log_trace(gc, heap)(" Used: " SIZE_FORMAT "K, Waste: " SIZE_FORMAT "K",
- usage._survivor_used / K, ((after._survivor_length * HeapRegion::GrainBytes) - usage._survivor_used) / K);
+ usage._survivor_used / K, ((after._survivor_length * G1HeapRegion::GrainBytes) - usage._survivor_used) / K);
log_info(gc, heap)("Old regions: " SIZE_FORMAT "->" SIZE_FORMAT,
_before._old_length, after._old_length);
log_trace(gc, heap)(" Used: " SIZE_FORMAT "K, Waste: " SIZE_FORMAT "K",
- usage._old_used / K, ((after._old_length * HeapRegion::GrainBytes) - usage._old_used) / K);
+ usage._old_used / K, ((after._old_length * G1HeapRegion::GrainBytes) - usage._old_used) / K);
log_info(gc, heap)("Humongous regions: " SIZE_FORMAT "->" SIZE_FORMAT,
_before._humongous_length, after._humongous_length);
log_trace(gc, heap)(" Used: " SIZE_FORMAT "K, Waste: " SIZE_FORMAT "K",
- usage._humongous_used / K, ((after._humongous_length * HeapRegion::GrainBytes) - usage._humongous_used) / K);
+ usage._humongous_used / K, ((after._humongous_length * G1HeapRegion::GrainBytes) - usage._humongous_used) / K);
MetaspaceUtils::print_metaspace_change(_before._meta_sizes);
}
diff --git a/src/hotspot/share/gc/g1/g1HeapVerifier.cpp b/src/hotspot/share/gc/g1/g1HeapVerifier.cpp
index 199edded0b85c..9201fc98c8a14 100644
--- a/src/hotspot/share/gc/g1/g1HeapVerifier.cpp
+++ b/src/hotspot/share/gc/g1/g1HeapVerifier.cpp
@@ -110,7 +110,7 @@ class G1VerifyCodeRootOopClosure: public OopClosure {
oop obj = CompressedOops::decode_not_null(heap_oop);
// Now fetch the region containing the object
- HeapRegion* hr = _g1h->heap_region_containing(obj);
+ G1HeapRegion* hr = _g1h->heap_region_containing(obj);
HeapRegionRemSet* hrrs = hr->rem_set();
// Verify that the code root list for this region
// contains the nmethod
@@ -198,11 +198,11 @@ class VerifyObjsInRegionClosure: public ObjectClosure {
private:
G1CollectedHeap* _g1h;
size_t _live_bytes;
- HeapRegion *_hr;
+ G1HeapRegion* _hr;
VerifyOption _vo;
public:
- VerifyObjsInRegionClosure(HeapRegion *hr, VerifyOption vo)
+ VerifyObjsInRegionClosure(G1HeapRegion* hr, VerifyOption vo)
: _live_bytes(0), _hr(hr), _vo(vo) {
_g1h = G1CollectedHeap::heap();
}
@@ -245,7 +245,7 @@ class VerifyRegionClosure: public HeapRegionClosure {
return _failures;
}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
guarantee(!r->has_index_in_opt_cset(), "Region %u still has opt collection set index %u", r->hrm_index(), r->index_in_opt_cset());
guarantee(!r->is_young() || r->rem_set()->is_complete(), "Remembered set for Young region %u must be complete, is %s", r->hrm_index(), r->rem_set()->get_state_str());
// Humongous and old regions might be of any state, so can't check here.
@@ -394,7 +394,7 @@ class VerifyRegionListsClosure : public HeapRegionClosure {
_old_set(old_set), _humongous_set(humongous_set), _hrm(hrm),
_old_count(), _humongous_count(), _free_count(){ }
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
if (hr->is_young()) {
// TODO
} else if (hr->is_humongous()) {
@@ -452,7 +452,7 @@ class G1VerifyRegionMarkingStateClosure : public HeapRegionClosure {
};
public:
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
if (r->is_free()) {
return false;
}
@@ -541,7 +541,7 @@ void G1HeapVerifier::verify_bitmap_clear(bool from_tams) {
public:
G1VerifyBitmapClear(bool from_tams) : _from_tams(from_tams) { }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
G1ConcurrentMark* cm = G1CollectedHeap::heap()->concurrent_mark();
G1CMBitMap* bitmap = cm->mark_bitmap();
@@ -562,7 +562,7 @@ class G1VerifyCardTableCleanup: public HeapRegionClosure {
public:
G1VerifyCardTableCleanup(G1HeapVerifier* verifier)
: _verifier(verifier) { }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
if (r->is_survivor()) {
_verifier->verify_dirty_region(r);
} else {
@@ -579,14 +579,14 @@ void G1HeapVerifier::verify_card_table_cleanup() {
}
}
-void G1HeapVerifier::verify_not_dirty_region(HeapRegion* hr) {
+void G1HeapVerifier::verify_not_dirty_region(G1HeapRegion* hr) {
// All of the region should be clean.
G1CardTable* ct = _g1h->card_table();
MemRegion mr(hr->bottom(), hr->end());
ct->verify_not_dirty_region(mr);
}
-void G1HeapVerifier::verify_dirty_region(HeapRegion* hr) {
+void G1HeapVerifier::verify_dirty_region(G1HeapRegion* hr) {
// We cannot guarantee that [bottom(),end()] is dirty. Threads
// dirty allocated blocks as they allocate them. The thread that
// retires each region and replaces it with a new one will do a
@@ -608,7 +608,7 @@ class G1VerifyDirtyYoungListClosure : public HeapRegionClosure {
G1HeapVerifier* _verifier;
public:
G1VerifyDirtyYoungListClosure(G1HeapVerifier* verifier) : HeapRegionClosure(), _verifier(verifier) { }
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
_verifier->verify_dirty_region(r);
return false;
}
@@ -626,7 +626,7 @@ class G1CheckRegionAttrTableClosure : public HeapRegionClosure {
public:
G1CheckRegionAttrTableClosure() : HeapRegionClosure(), _failures(false) { }
- virtual bool do_heap_region(HeapRegion* hr) {
+ virtual bool do_heap_region(G1HeapRegion* hr) {
uint i = hr->hrm_index();
G1HeapRegionAttr region_attr = (G1HeapRegionAttr) G1CollectedHeap::heap()->_region_attr.get_by_index(i);
if (hr->is_humongous()) {
diff --git a/src/hotspot/share/gc/g1/g1HeapVerifier.hpp b/src/hotspot/share/gc/g1/g1HeapVerifier.hpp
index 221ae9576489e..39246237ba3b6 100644
--- a/src/hotspot/share/gc/g1/g1HeapVerifier.hpp
+++ b/src/hotspot/share/gc/g1/g1HeapVerifier.hpp
@@ -80,8 +80,8 @@ class G1HeapVerifier : public CHeapObj<mtGC> {
void verify_card_table_cleanup() PRODUCT_RETURN;
- void verify_not_dirty_region(HeapRegion* hr) PRODUCT_RETURN;
- void verify_dirty_region(HeapRegion* hr) PRODUCT_RETURN;
+ void verify_not_dirty_region(G1HeapRegion* hr) PRODUCT_RETURN;
+ void verify_dirty_region(G1HeapRegion* hr) PRODUCT_RETURN;
void verify_dirty_young_regions() PRODUCT_RETURN;
};
diff --git a/src/hotspot/share/gc/g1/g1MonitoringSupport.cpp b/src/hotspot/share/gc/g1/g1MonitoringSupport.cpp
index 4e0a7dcfc3d6b..febb27fb43cda 100644
--- a/src/hotspot/share/gc/g1/g1MonitoringSupport.cpp
+++ b/src/hotspot/share/gc/g1/g1MonitoringSupport.cpp
@@ -253,8 +253,8 @@ void G1MonitoringSupport::recalculate_sizes() {
uint eden_list_max_length = young_list_target_length - survivor_list_length;
// First calculate the committed sizes that can be calculated independently.
- _survivor_space_committed = survivor_list_length * HeapRegion::GrainBytes;
- _old_gen_committed = HeapRegion::align_up_to_region_byte_size(_old_gen_used);
+ _survivor_space_committed = survivor_list_length * G1HeapRegion::GrainBytes;
+ _old_gen_committed = G1HeapRegion::align_up_to_region_byte_size(_old_gen_used);
// Next, start with the overall committed size.
_overall_committed = _g1h->capacity();
@@ -266,7 +266,7 @@ void G1MonitoringSupport::recalculate_sizes() {
committed -= _survivor_space_committed + _old_gen_committed;
// Next, calculate and remove the committed size for the eden.
- _eden_space_committed = (size_t) eden_list_max_length * HeapRegion::GrainBytes;
+ _eden_space_committed = (size_t) eden_list_max_length * G1HeapRegion::GrainBytes;
// Somewhat defensive: be robust in case there are inaccuracies in
// the calculations
_eden_space_committed = MIN2(_eden_space_committed, committed);
diff --git a/src/hotspot/share/gc/g1/g1MonotonicArena.hpp b/src/hotspot/share/gc/g1/g1MonotonicArena.hpp
index 586766a8c8f97..bf46e4a33513a 100644
--- a/src/hotspot/share/gc/g1/g1MonotonicArena.hpp
+++ b/src/hotspot/share/gc/g1/g1MonotonicArena.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* Copyright (c) 2021, 2022, Huawei Technologies Co., Ltd. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
@@ -27,7 +27,7 @@
#define SHARE_GC_G1_G1MONOTONICARENA_HPP
#include "gc/shared/freeListAllocator.hpp"
-#include "memory/allocation.hpp"
+#include "nmt/memflags.hpp"
#include "utilities/globalDefinitions.hpp"
#include "utilities/lockFreeStack.hpp"
diff --git a/src/hotspot/share/gc/g1/g1MonotonicArenaFreeMemoryTask.cpp b/src/hotspot/share/gc/g1/g1MonotonicArenaFreeMemoryTask.cpp
index 337acfe24058d..59a1afe6f3408 100644
--- a/src/hotspot/share/gc/g1/g1MonotonicArenaFreeMemoryTask.cpp
+++ b/src/hotspot/share/gc/g1/g1MonotonicArenaFreeMemoryTask.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2021, 2022, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2021, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -29,9 +29,8 @@
#include "gc/g1/g1HeapRegionRemSet.hpp"
#include "gc/g1/g1MonotonicArenaFreeMemoryTask.hpp"
#include "gc/g1/g1MonotonicArenaFreePool.hpp"
-#include "gc/g1/g1_globals.hpp"
-#include "gc/shared/gc_globals.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
+#include "gc/shared/gc_globals.hpp"
#include "gc/shared/suspendibleThreadSet.hpp"
#include "runtime/os.hpp"
diff --git a/src/hotspot/share/gc/g1/g1NMethodClosure.cpp b/src/hotspot/share/gc/g1/g1NMethodClosure.cpp
index 56f1f52e9815e..1574ded4aa19f 100644
--- a/src/hotspot/share/gc/g1/g1NMethodClosure.cpp
+++ b/src/hotspot/share/gc/g1/g1NMethodClosure.cpp
@@ -40,7 +40,7 @@ void G1NMethodClosure::HeapRegionGatheringOopClosure::do_oop_work(T* p) {
T oop_or_narrowoop = RawAccess<>::oop_load(p);
if (!CompressedOops::is_null(oop_or_narrowoop)) {
oop o = CompressedOops::decode_not_null(oop_or_narrowoop);
- HeapRegion* hr = _g1h->heap_region_containing(o);
+ G1HeapRegion* hr = _g1h->heap_region_containing(o);
assert(!_g1h->is_in_cset(o) || hr->rem_set()->code_roots_list_contains(_nm), "if o still in collection set then evacuation failed and nm must already be in the remset");
hr->add_code_root(_nm);
}
diff --git a/src/hotspot/share/gc/g1/g1NUMA.cpp b/src/hotspot/share/gc/g1/g1NUMA.cpp
index 07c39a269895b..fd80bd52ce17e 100644
--- a/src/hotspot/share/gc/g1/g1NUMA.cpp
+++ b/src/hotspot/share/gc/g1/g1NUMA.cpp
@@ -173,7 +173,7 @@ uint G1NUMA::index_of_address(HeapWord *address) const {
}
}
-uint G1NUMA::index_for_region(HeapRegion* hr) const {
+uint G1NUMA::index_for_region(G1HeapRegion* hr) const {
if (!is_enabled()) {
return 0;
}
@@ -194,15 +194,15 @@ uint G1NUMA::index_for_region(HeapRegion* hr) const {
// nodes. Which node to request for a given address is given by the
// region size and the page size. Below are two examples on 4 NUMA nodes system:
// 1. G1HeapRegionSize(_region_size) is larger than or equal to page size.
-// * Page #: |-0--||-1--||-2--||-3--||-4--||-5--||-6--||-7--||-8--||-9--||-10-||-11-||-12-||-13-||-14-||-15-|
-// * HeapRegion #: |----#0----||----#1----||----#2----||----#3----||----#4----||----#5----||----#6----||----#7----|
-// * NUMA node #: |----#0----||----#1----||----#2----||----#3----||----#0----||----#1----||----#2----||----#3----|
+// * Page #: |-0--||-1--||-2--||-3--||-4--||-5--||-6--||-7--||-8--||-9--||-10-||-11-||-12-||-13-||-14-||-15-|
+// * G1HeapRegion #: |----#0----||----#1----||----#2----||----#3----||----#4----||----#5----||----#6----||----#7----|
+// * NUMA node #: |----#0----||----#1----||----#2----||----#3----||----#0----||----#1----||----#2----||----#3----|
// 2. G1HeapRegionSize(_region_size) is smaller than page size.
// Memory will be touched one page at a time because G1RegionToSpaceMapper commits
// pages one by one.
-// * Page #: |-----0----||-----1----||-----2----||-----3----||-----4----||-----5----||-----6----||-----7----|
-// * HeapRegion #: |-#0-||-#1-||-#2-||-#3-||-#4-||-#5-||-#6-||-#7-||-#8-||-#9-||#10-||#11-||#12-||#13-||#14-||#15-|
-// * NUMA node #: |----#0----||----#1----||----#2----||----#3----||----#0----||----#1----||----#2----||----#3----|
+// * Page #: |-----0----||-----1----||-----2----||-----3----||-----4----||-----5----||-----6----||-----7----|
+// * G1HeapRegion #: |-#0-||-#1-||-#2-||-#3-||-#4-||-#5-||-#6-||-#7-||-#8-||-#9-||#10-||#11-||#12-||#13-||#14-||#15-|
+// * NUMA node #: |----#0----||----#1----||----#2----||----#3----||----#0----||----#1----||----#2----||----#3----|
void G1NUMA::request_memory_on_node(void* aligned_address, size_t size_in_bytes, uint region_index) {
if (!is_enabled()) {
return;
@@ -288,7 +288,7 @@ G1NodeIndexCheckClosure::~G1NodeIndexCheckClosure() {
FREE_C_HEAP_ARRAY(uint, _total);
}
-bool G1NodeIndexCheckClosure::do_heap_region(HeapRegion* hr) {
+bool G1NodeIndexCheckClosure::do_heap_region(G1HeapRegion* hr) {
// Preferred node index will only have valid node index.
uint preferred_node_index = _numa->preferred_node_index_for_index(hr->hrm_index());
// Active node index may have UnknownNodeIndex.
diff --git a/src/hotspot/share/gc/g1/g1NUMA.hpp b/src/hotspot/share/gc/g1/g1NUMA.hpp
index cbefab411dff6..e9726fcad1b49 100644
--- a/src/hotspot/share/gc/g1/g1NUMA.hpp
+++ b/src/hotspot/share/gc/g1/g1NUMA.hpp
@@ -46,7 +46,7 @@ class G1NUMA: public CHeapObj<mtGC> {
// Total number of node ids.
uint _num_active_node_ids;
- // HeapRegion size
+ // G1HeapRegion size
size_t _region_size;
// Necessary when touching memory.
size_t _page_size;
@@ -94,9 +94,9 @@ class G1NUMA: public CHeapObj<mtGC> {
// Returns node index of current calling thread.
uint index_of_current_thread() const;
- // Returns the preferred index for the given HeapRegion index.
+ // Returns the preferred index for the given G1HeapRegion index.
// This assumes that HeapRegions are evenly split, so we can decide preferred index
- // with the given HeapRegion index.
+ // with the given G1HeapRegion index.
// Result is less than num_active_nodes().
uint preferred_node_index_for_index(uint region_index) const;
@@ -107,7 +107,7 @@ class G1NUMA: public CHeapObj<mtGC> {
// If AlwaysPreTouch is enabled, return actual node index via system call.
// If disabled, return preferred node index of the given heap region.
- uint index_for_region(HeapRegion* hr) const;
+ uint index_for_region(G1HeapRegion* hr) const;
// Requests the given memory area to be located at the given node index.
void request_memory_on_node(void* aligned_address, size_t size_in_bytes, uint region_index);
@@ -143,7 +143,7 @@ class G1NodeIndexCheckClosure : public HeapRegionClosure {
G1NodeIndexCheckClosure(const char* desc, G1NUMA* numa, LogStream* ls);
~G1NodeIndexCheckClosure();
- bool do_heap_region(HeapRegion* hr);
+ bool do_heap_region(G1HeapRegion* hr);
};
#endif // SHARE_VM_GC_G1_NUMA_HPP
diff --git a/src/hotspot/share/gc/g1/g1OopClosures.hpp b/src/hotspot/share/gc/g1/g1OopClosures.hpp
index ef32e5f33bbec..965df8ea82a30 100644
--- a/src/hotspot/share/gc/g1/g1OopClosures.hpp
+++ b/src/hotspot/share/gc/g1/g1OopClosures.hpp
@@ -30,7 +30,6 @@
#include "memory/iterator.hpp"
#include "oops/markWord.hpp"
-class HeapRegion;
class G1CollectedHeap;
class G1RemSet;
class G1ConcurrentMark;
diff --git a/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp b/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp
index 50b3ab203d706..1d4a1d5ab8661 100644
--- a/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1OopClosures.inline.hpp
@@ -88,7 +88,7 @@ inline void G1ScanEvacuatedObjClosure::do_oop_work(T* p) {
const G1HeapRegionAttr region_attr = _g1h->region_attr(obj);
if (region_attr.is_in_cset()) {
prefetch_and_push(p, obj);
- } else if (!HeapRegion::is_in_same_region(p, obj)) {
+ } else if (!G1HeapRegion::is_in_same_region(p, obj)) {
handle_non_cset_obj_common(region_attr, p, obj);
assert(_skip_card_enqueue != Uninitialized, "Scan location has not been initialized.");
if (_skip_card_enqueue == True) {
@@ -135,7 +135,7 @@ inline void G1ConcurrentRefineOopClosure::do_oop_work(T* p) {
check_obj_during_refinement(p, obj);
- if (HeapRegion::is_in_same_region(p, obj)) {
+ if (G1HeapRegion::is_in_same_region(p, obj)) {
// Normally this closure should only be called with cross-region references.
// But since Java threads are manipulating the references concurrently and we
// reload the values, things may have changed.
@@ -174,7 +174,7 @@ inline void G1ScanCardClosure::do_oop_work(T* p) {
// that this is a cross-region reference too.
prefetch_and_push(p, obj);
_heap_roots_found++;
- } else if (!HeapRegion::is_in_same_region(p, obj)) {
+ } else if (!G1HeapRegion::is_in_same_region(p, obj)) {
handle_non_cset_obj_common(region_attr, p, obj);
_par_scan_state->enqueue_card_if_tracked(region_attr, p, obj);
}
@@ -261,11 +261,11 @@ template <class T> void G1RebuildRemSetClosure::do_oop_work(T* p) {
return;
}
- if (HeapRegion::is_in_same_region(p, obj)) {
+ if (G1HeapRegion::is_in_same_region(p, obj)) {
return;
}
- HeapRegion* to = _g1h->heap_region_containing(obj);
+ G1HeapRegion* to = _g1h->heap_region_containing(obj);
HeapRegionRemSet* rem_set = to->rem_set();
if (rem_set->is_tracked()) {
rem_set->add_reference(p, _worker_id);
diff --git a/src/hotspot/share/gc/g1/g1PageBasedVirtualSpace.cpp b/src/hotspot/share/gc/g1/g1PageBasedVirtualSpace.cpp
index c0cf66fb2fdc2..bdb732f6ebb39 100644
--- a/src/hotspot/share/gc/g1/g1PageBasedVirtualSpace.cpp
+++ b/src/hotspot/share/gc/g1/g1PageBasedVirtualSpace.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -135,14 +135,14 @@ void G1PageBasedVirtualSpace::commit_preferred_pages(size_t start, size_t num_pa
char* start_addr = page_start(start);
size_t size = num_pages * _page_size;
- os::commit_memory_or_exit(start_addr, size, _page_size, !ExecMem, mtGC, "G1 virtual space");
+ os::commit_memory_or_exit(start_addr, size, _page_size, false, "G1 virtual space");
}
void G1PageBasedVirtualSpace::commit_tail() {
vmassert(_tail_size > 0, "The size of the tail area must be > 0 when reaching here");
char* const aligned_end_address = align_down(_high_boundary, _page_size);
- os::commit_memory_or_exit(aligned_end_address, _tail_size, os::vm_page_size(), !ExecMem, mtGC, "G1 virtual space");
+ os::commit_memory_or_exit(aligned_end_address, _tail_size, os::vm_page_size(), false, "G1 virtual space");
}
void G1PageBasedVirtualSpace::commit_internal(size_t start_page, size_t end_page) {
@@ -201,7 +201,7 @@ void G1PageBasedVirtualSpace::uncommit_internal(size_t start_page, size_t end_pa
"Given start page " SIZE_FORMAT " is larger or equal to end page " SIZE_FORMAT, start_page, end_page);
char* start_addr = page_start(start_page);
- os::uncommit_memory(start_addr, pointer_delta(bounded_end_addr(end_page), start_addr, sizeof(char)), !ExecMem, mtGC);
+ os::uncommit_memory(start_addr, pointer_delta(bounded_end_addr(end_page), start_addr, sizeof(char)));
}
void G1PageBasedVirtualSpace::uncommit(size_t start_page, size_t size_in_pages) {
diff --git a/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp b/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp
index c495889faade6..e4ac20ca7ea92 100644
--- a/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp
+++ b/src/hotspot/share/gc/g1/g1ParScanThreadState.cpp
@@ -27,6 +27,7 @@
#include "gc/g1/g1CollectedHeap.inline.hpp"
#include "gc/g1/g1CollectionSet.hpp"
#include "gc/g1/g1EvacFailureRegions.inline.hpp"
+#include "gc/g1/g1HeapRegionPrinter.hpp"
#include "gc/g1/g1OopClosures.inline.hpp"
#include "gc/g1/g1ParScanThreadState.inline.hpp"
#include "gc/g1/g1RootClosures.hpp"
@@ -276,7 +277,7 @@ void G1ParScanThreadState::start_partial_objarray(G1HeapRegionAttr dest_attr,
}
// Skip the card enqueue iff the object (to_array) is in survivor region.
- // However, HeapRegion::is_survivor() is too expensive here.
+ // However, G1HeapRegion::is_survivor() is too expensive here.
// Instead, we use dest_attr.is_young() because the two values are always
// equal: successfully allocated young regions must be survivor regions.
assert(dest_attr.is_young() == _g1h->heap_region_containing(to_array)->is_survivor(), "must be");
@@ -443,7 +444,7 @@ void G1ParScanThreadState::undo_allocation(G1HeapRegionAttr dest_attr,
void G1ParScanThreadState::update_bot_after_copying(oop obj, size_t word_sz) {
HeapWord* obj_start = cast_from_oop<HeapWord*>(obj);
- HeapRegion* region = _g1h->heap_region_containing(obj_start);
+ G1HeapRegion* region = _g1h->heap_region_containing(obj_start);
region->update_bot_for_block(obj_start, obj_start + word_sz);
}
@@ -468,7 +469,7 @@ oop G1ParScanThreadState::do_copy_to_survivor_space(G1HeapRegionAttr const regio
uint age = 0;
G1HeapRegionAttr dest_attr = next_region_attr(region_attr, old_mark, age);
- HeapRegion* const from_region = _g1h->heap_region_containing(old);
+ G1HeapRegion* const from_region = _g1h->heap_region_containing(old);
uint node_index = from_region->node_index();
HeapWord* obj_ptr = _plab_allocator->plab_allocate(dest_attr, word_sz, node_index);
@@ -551,7 +552,7 @@ oop G1ParScanThreadState::do_copy_to_survivor_space(G1HeapRegionAttr const regio
}
// Skip the card enqueue iff the object (obj) is in survivor region.
- // However, HeapRegion::is_survivor() is too expensive here.
+ // However, G1HeapRegion::is_survivor() is too expensive here.
// Instead, we use dest_attr.is_young() because the two values are always
// equal: successfully allocated young regions must be survivor regions.
assert(dest_attr.is_young() == _g1h->heap_region_containing(obj)->is_survivor(), "must be");
@@ -622,7 +623,7 @@ void G1ParScanThreadStateSet::flush_stats() {
_flushed = true;
}
-void G1ParScanThreadStateSet::record_unused_optional_region(HeapRegion* hr) {
+void G1ParScanThreadStateSet::record_unused_optional_region(G1HeapRegion* hr) {
for (uint worker_index = 0; worker_index < _num_workers; ++worker_index) {
G1ParScanThreadState* pss = _states[worker_index];
assert(pss != nullptr, "must be initialized");
@@ -639,10 +640,10 @@ oop G1ParScanThreadState::handle_evacuation_failure_par(oop old, markWord m, siz
oop forward_ptr = old->forward_to_atomic(old, m, memory_order_relaxed);
if (forward_ptr == nullptr) {
// Forward-to-self succeeded. We are the "owner" of the object.
- HeapRegion* r = _g1h->heap_region_containing(old);
+ G1HeapRegion* r = _g1h->heap_region_containing(old);
if (_evac_failure_regions->record(_worker_id, r->hrm_index(), cause_pinned)) {
- _g1h->hr_printer()->evac_failure(r);
+ G1HeapRegionPrinter::evac_failure(r);
}
// Mark the failing object in the marking bitmap and later use the bitmap to handle
diff --git a/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp b/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp
index 72f7a0b84e7d9..24ca682e14118 100644
--- a/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp
+++ b/src/hotspot/share/gc/g1/g1ParScanThreadState.hpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2014, 2023, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2014, 2024, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -29,9 +29,9 @@
#include "gc/g1/g1RedirtyCardsQueue.hpp"
#include "gc/g1/g1OopClosures.hpp"
#include "gc/g1/g1YoungGCAllocationFailureInjector.hpp"
-#include "gc/g1/g1_globals.hpp"
#include "gc/shared/ageTable.hpp"
#include "gc/shared/copyFailedInfo.hpp"
+#include "gc/shared/gc_globals.hpp"
#include "gc/shared/partialArrayTaskStepper.hpp"
#include "gc/shared/preservedMarks.hpp"
#include "gc/shared/stringdedup/stringDedup.hpp"
@@ -46,7 +46,7 @@ class G1EvacFailureRegions;
class G1EvacuationRootClosures;
class G1OopStarChunkedList;
class G1PLABAllocator;
-class HeapRegion;
+class G1HeapRegion;
class PreservedMarks;
class PreservedMarksSet;
class outputStream;
@@ -238,7 +238,7 @@ class G1ParScanThreadState : public CHeapObj<mtGC> {
template <typename T>
inline void remember_reference_into_optional_region(T* p);
- inline G1OopStarChunkedList* oops_into_optional_region(const HeapRegion* hr);
+ inline G1OopStarChunkedList* oops_into_optional_region(const G1HeapRegion* hr);
};
class G1ParScanThreadStateSet : public StackObj {
@@ -265,7 +265,7 @@ class G1ParScanThreadStateSet : public StackObj {
PreservedMarksSet* preserved_marks_set() { return &_preserved_marks_set; }
void flush_stats();
- void record_unused_optional_region(HeapRegion* hr);
+ void record_unused_optional_region(G1HeapRegion* hr);
G1ParScanThreadState* state_for_worker(uint worker_id);
uint num_workers() const { return _num_workers; }
diff --git a/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp b/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp
index 7f666c0f49716..a164fa34929b6 100644
--- a/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp
+++ b/src/hotspot/share/gc/g1/g1ParScanThreadState.inline.hpp
@@ -89,7 +89,7 @@ inline void G1ParScanThreadState::remember_reference_into_optional_region(T* p)
verify_task(p);
}
-G1OopStarChunkedList* G1ParScanThreadState::oops_into_optional_region(const HeapRegion* hr) {
+G1OopStarChunkedList* G1ParScanThreadState::oops_into_optional_region(const G1HeapRegion* hr) {
assert(hr->index_in_opt_cset() < _max_num_optional_regions,
"Trying to access optional region idx %u beyond " SIZE_FORMAT " " HR_FORMAT,
hr->index_in_opt_cset(), _max_num_optional_regions, HR_FORMAT_PARAMS(hr));
@@ -109,7 +109,7 @@ template <class T> bool G1ParScanThreadState::enqueue_if_new(T* p) {
}
template <class T> void G1ParScanThreadState::enqueue_card_into_evac_fail_region(T* p, oop obj) {
- assert(!HeapRegion::is_in_same_region(p, obj), "Should have filtered out cross-region references already.");
+ assert(!G1HeapRegion::is_in_same_region(p, obj), "Should have filtered out cross-region references already.");
assert(!_g1h->heap_region_containing(p)->is_survivor(), "Should have filtered out from-newly allocated survivor references already.");
assert(_g1h->heap_region_containing(obj)->in_collection_set(), "Only for enqueueing reference into collection set region");
@@ -120,7 +120,7 @@ template <class T> void G1ParScanThreadState::enqueue_card_into_evac_fail_region
template <class T> void G1ParScanThreadState::write_ref_field_post(T* p, oop obj) {
assert(obj != nullptr, "Must be");
- if (HeapRegion::is_in_same_region(p, obj)) {
+ if (G1HeapRegion::is_in_same_region(p, obj)) {
return;
}
G1HeapRegionAttr from_attr = _g1h->region_attr(p);
@@ -144,14 +144,14 @@ template <class T> void G1ParScanThreadState::write_ref_field_post(T* p, oop obj
}
template <class T> void G1ParScanThreadState::enqueue_card_if_tracked(G1HeapRegionAttr region_attr, T* p, oop o) {
- assert(!HeapRegion::is_in_same_region(p, o), "Should have filtered out cross-region references already.");
+ assert(!G1HeapRegion::is_in_same_region(p, o), "Should have filtered out cross-region references already.");
assert(!_g1h->heap_region_containing(p)->is_survivor(), "Should have filtered out from-newly allocated survivor references already.");
// We relabel all regions that failed evacuation as old gen without remembered sets,
// and so pre-filter them out in the caller.
assert(!_g1h->heap_region_containing(o)->in_collection_set(), "Should not try to enqueue reference into collection set region");
#ifdef ASSERT
- HeapRegion* const hr_obj = _g1h->heap_region_containing(o);
+ G1HeapRegion* const hr_obj = _g1h->heap_region_containing(o);
assert(region_attr.remset_is_tracked() == hr_obj->rem_set()->is_tracked(),
"State flag indicating remset tracking disagrees (%s) with actual remembered set (%s) for region %u",
BOOL_TO_STR(region_attr.remset_is_tracked()),
diff --git a/src/hotspot/share/gc/g1/g1Policy.cpp b/src/hotspot/share/gc/g1/g1Policy.cpp
index dd9d937ad77b5..7d0acd3a326da 100644
--- a/src/hotspot/share/gc/g1/g1Policy.cpp
+++ b/src/hotspot/share/gc/g1/g1Policy.cpp
@@ -142,7 +142,7 @@ class G1YoungLengthPredictor {
return false;
}
- const size_t free_bytes = (_base_free_regions - young_length) * HeapRegion::GrainBytes;
+ const size_t free_bytes = (_base_free_regions - young_length) * G1HeapRegion::GrainBytes;
// When copying, we will likely need more bytes free than is live in the region.
// Add some safety margin to factor in the confidence of our guess, and the
@@ -173,7 +173,7 @@ void G1Policy::record_new_heap_size(uint new_number_of_regions) {
_young_gen_sizer.heap_size_changed(new_number_of_regions);
- _ihop_control->update_target_occupancy(new_number_of_regions * HeapRegion::GrainBytes);
+ _ihop_control->update_target_occupancy(new_number_of_regions * G1HeapRegion::GrainBytes);
}
uint G1Policy::calculate_desired_eden_length_by_mmu() const {
@@ -507,9 +507,9 @@ uint G1Policy::calculate_desired_eden_length_before_mixed(double base_time_ms,
}
double G1Policy::predict_survivor_regions_evac_time() const {
- const GrowableArray<HeapRegion*>* survivor_regions = _g1h->survivor()->regions();
+ const GrowableArray<G1HeapRegion*>* survivor_regions = _g1h->survivor()->regions();
double survivor_regions_evac_time = predict_young_region_other_time_ms(_g1h->survivor()->length());
- for (GrowableArrayIterator<HeapRegion*> it = survivor_regions->begin();
+ for (GrowableArrayIterator<G1HeapRegion*> it = survivor_regions->begin();
it != survivor_regions->end();
++it) {
survivor_regions_evac_time += predict_region_copy_time_ms(*it, _g1h->collector_state()->in_young_only_phase());
@@ -529,7 +529,7 @@ double G1Policy::predict_retained_regions_evac_time() const {
list.length());
for (G1CollectionSetCandidateInfo* ci : list) {
- HeapRegion* r = ci->_r;
+ G1HeapRegion* r = ci->_r;
// We optimistically assume that any of these marking candidate regions will
// be reclaimable in the next gc, so just consider them as normal.
if (r->has_pinned_objects()) {
@@ -599,7 +599,7 @@ void G1Policy::record_full_collection_end() {
_survivor_surv_rate_group->reset();
update_young_length_bounds();
- _old_gen_alloc_tracker.reset_after_gc(_g1h->humongous_regions_count() * HeapRegion::GrainBytes);
+ _old_gen_alloc_tracker.reset_after_gc(_g1h->humongous_regions_count() * G1HeapRegion::GrainBytes);
record_pause(G1GCPauseType::FullGC, _full_collection_start_sec, end_sec);
}
@@ -663,12 +663,12 @@ bool G1Policy::should_retain_evac_failed_region(uint index) const {
size_t live_bytes = _g1h->region_at(index)->live_bytes();
#ifdef ASSERT
- HeapRegion* r = _g1h->region_at(index);
+ G1HeapRegion* r = _g1h->region_at(index);
assert(live_bytes != 0,
"live bytes not set for %u used %zu garbage %zu cm-live %zu pinned %d",
index, r->used(), r->garbage_bytes(), live_bytes, r->has_pinned_objects());
#endif
- size_t threshold = G1RetainRegionLiveThresholdPercent * HeapRegion::GrainBytes / 100;
+ size_t threshold = G1RetainRegionLiveThresholdPercent * G1HeapRegion::GrainBytes / 100;
return live_bytes < threshold;
}
@@ -954,7 +954,7 @@ void G1Policy::record_young_collection_end(bool concurrent_operation_is_full_mar
if (_g1h->gc_cause() != GCCause::_g1_periodic_collection) {
update_young_length_bounds();
- _old_gen_alloc_tracker.reset_after_gc(_g1h->humongous_regions_count() * HeapRegion::GrainBytes);
+ _old_gen_alloc_tracker.reset_after_gc(_g1h->humongous_regions_count() * G1HeapRegion::GrainBytes);
update_ihop_prediction(app_time_ms / 1000.0,
G1GCPauseTypeHelper::is_young_only_pause(this_pause));
@@ -1040,7 +1040,7 @@ void G1Policy::update_ihop_prediction(double mutator_time_s,
// restrained by the heap reserve. Using the actual length would make the
// prediction too small and then limit the young gen every time we get to the
// predicted target occupancy.
- size_t young_gen_size = young_list_desired_length() * HeapRegion::GrainBytes;
+ size_t young_gen_size = young_list_desired_length() * G1HeapRegion::GrainBytes;
_ihop_control->update_allocation_info(mutator_time_s, young_gen_size);
report = true;
}
@@ -1093,7 +1093,7 @@ double G1Policy::predict_base_time_ms(size_t pending_cards) const {
return predict_base_time_ms(pending_cards, card_rs_length, code_root_rs_length);
}
-size_t G1Policy::predict_bytes_to_copy(HeapRegion* hr) const {
+size_t G1Policy::predict_bytes_to_copy(G1HeapRegion* hr) const {
size_t bytes_to_copy;
if (!hr->is_young()) {
bytes_to_copy = hr->live_bytes();
@@ -1111,19 +1111,19 @@ double G1Policy::predict_eden_copy_time_ms(uint count, size_t* bytes_to_copy) co
if (count == 0) {
return 0.0;
}
- size_t const expected_bytes = _eden_surv_rate_group->accum_surv_rate_pred(count - 1) * HeapRegion::GrainBytes;
+ size_t const expected_bytes = _eden_surv_rate_group->accum_surv_rate_pred(count - 1) * G1HeapRegion::GrainBytes;
if (bytes_to_copy != nullptr) {
*bytes_to_copy = expected_bytes;
}
return _analytics->predict_object_copy_time_ms(expected_bytes, collector_state()->in_young_only_phase());
}
-double G1Policy::predict_region_copy_time_ms(HeapRegion* hr, bool for_young_only_phase) const {
+double G1Policy::predict_region_copy_time_ms(G1HeapRegion* hr, bool for_young_only_phase) const {
size_t const bytes_to_copy = predict_bytes_to_copy(hr);
return _analytics->predict_object_copy_time_ms(bytes_to_copy, for_young_only_phase);
}
-double G1Policy::predict_region_merge_scan_time(HeapRegion* hr, bool for_young_only_phase) const {
+double G1Policy::predict_region_merge_scan_time(G1HeapRegion* hr, bool for_young_only_phase) const {
size_t card_rs_length = hr->rem_set()->occupied();
size_t scan_card_num = _analytics->predict_scan_card_num(card_rs_length, for_young_only_phase);
@@ -1132,14 +1132,14 @@ double G1Policy::predict_region_merge_scan_time(HeapRegion* hr, bool for_young_o
_analytics->predict_card_scan_time_ms(scan_card_num, for_young_only_phase);
}
-double G1Policy::predict_region_code_root_scan_time(HeapRegion* hr, bool for_young_only_phase) const {
+double G1Policy::predict_region_code_root_scan_time(G1HeapRegion* hr, bool for_young_only_phase) const {
size_t code_root_length = hr->rem_set()->code_roots_list_length();
return
_analytics->predict_code_root_scan_time_ms(code_root_length, for_young_only_phase);
}
-double G1Policy::predict_region_non_copy_time_ms(HeapRegion* hr,
+double G1Policy::predict_region_non_copy_time_ms(G1HeapRegion* hr,
bool for_young_only_phase) const {
double region_elapsed_time_ms = predict_region_merge_scan_time(hr, for_young_only_phase) +
@@ -1154,7 +1154,7 @@ double G1Policy::predict_region_non_copy_time_ms(HeapRegion* hr,
return region_elapsed_time_ms;
}
-double G1Policy::predict_region_total_time_ms(HeapRegion* hr, bool for_young_only_phase) const {
+double G1Policy::predict_region_total_time_ms(G1HeapRegion* hr, bool for_young_only_phase) const {
return
predict_region_non_copy_time_ms(hr, for_young_only_phase) +
predict_region_copy_time_ms(hr, for_young_only_phase);
@@ -1175,12 +1175,12 @@ size_t G1Policy::estimate_used_young_bytes_locked() const {
uint used = _g1h->young_regions_count();
uint alloc = allocator->num_nodes();
uint full = used - MIN2(used, alloc);
- size_t bytes_used = full * HeapRegion::GrainBytes;
+ size_t bytes_used = full * G1HeapRegion::GrainBytes;
return bytes_used + allocator->used_in_alloc_regions();
}
size_t G1Policy::desired_survivor_size(uint max_regions) const {
- size_t const survivor_capacity = HeapRegion::GrainWords * max_regions;
+ size_t const survivor_capacity = G1HeapRegion::GrainWords * max_regions;
return (size_t)((((double)survivor_capacity) * TargetSurvivorRatio) / 100);
}
@@ -1310,7 +1310,7 @@ void G1Policy::decide_on_concurrent_start_pause() {
void G1Policy::record_concurrent_mark_cleanup_end(bool has_rebuilt_remembered_sets) {
bool mixed_gc_pending = false;
if (has_rebuilt_remembered_sets) {
- G1CollectionSetChooser::build(_g1h->workers(), _g1h->num_regions(), candidates());
+ candidates()->sort_marking_by_efficiency();
mixed_gc_pending = next_gc_should_be_mixed();
}
@@ -1337,7 +1337,7 @@ void G1Policy::record_concurrent_mark_cleanup_end(bool has_rebuilt_remembered_se
void G1Policy::abandon_collection_set_candidates() {
// Clear remembered sets of remaining candidate regions and the actual candidate
// set.
- for (HeapRegion* r : *candidates()) {
+ for (G1HeapRegion* r : *candidates()) {
r->rem_set()->clear(true /* only_cardset */);
}
_collection_set->abandon_all_candidates();
@@ -1502,7 +1502,7 @@ double G1Policy::select_candidates_from_marking(G1CollectionCandidateList* marki
print_finish_message("Maximum number of regions reached", true);
break;
}
- HeapRegion* hr = (*iter)->_r;
+ G1HeapRegion* hr = (*iter)->_r;
// Skip evacuating pinned marking regions because we are not getting any free
// space from them (and we expect to get free space from marking candidates).
// Also prepare to move them to retained regions to be evacuated optionally later
@@ -1597,7 +1597,7 @@ void G1Policy::select_candidates_from_retained(G1CollectionCandidateList* retain
min_regions, retained_list->length(), time_remaining_ms, optional_time_remaining_ms);
for (G1CollectionSetCandidateInfo* ci : *retained_list) {
- HeapRegion* r = ci->_r;
+ G1HeapRegion* r = ci->_r;
double predicted_time_ms = predict_region_total_time_ms(r, collector_state()->in_young_only_phase());
bool fits_in_remaining_time = predicted_time_ms <= time_remaining_ms;
// If we can't reclaim that region ignore it for now.
@@ -1655,7 +1655,7 @@ void G1Policy::calculate_optional_collection_set_regions(G1CollectionCandidateRe
double total_prediction_ms = 0.0;
- for (HeapRegion* r : *optional_regions) {
+ for (G1HeapRegion* r : *optional_regions) {
double prediction_ms = predict_region_total_time_ms(r, false);
if (prediction_ms > time_remaining_ms) {
@@ -1678,10 +1678,10 @@ void G1Policy::calculate_optional_collection_set_regions(G1CollectionCandidateRe
void G1Policy::transfer_survivors_to_cset(const G1SurvivorRegions* survivors) {
start_adding_survivor_regions();
- for (GrowableArrayIterator<HeapRegion*> it = survivors->regions()->begin();
+ for (GrowableArrayIterator<G1HeapRegion*> it = survivors->regions()->begin();
it != survivors->regions()->end();
++it) {
- HeapRegion* curr = *it;
+ G1HeapRegion* curr = *it;
set_region_survivor(curr);
// The region is a non-empty survivor so let's add it to
diff --git a/src/hotspot/share/gc/g1/g1Policy.hpp b/src/hotspot/share/gc/g1/g1Policy.hpp
index c951909788c2e..9de27e76d9622 100644
--- a/src/hotspot/share/gc/g1/g1Policy.hpp
+++ b/src/hotspot/share/gc/g1/g1Policy.hpp
@@ -44,7 +44,7 @@
// * choice of collection set.
// * when to collect.
-class HeapRegion;
+class G1HeapRegion;
class G1CollectionSet;
class G1CollectionCandidateList;
class G1CollectionSetCandidates;
@@ -122,12 +122,12 @@ class G1Policy: public CHeapObj<mtGC> {
G1OldGenAllocationTracker* old_gen_alloc_tracker() { return &_old_gen_alloc_tracker; }
- void set_region_eden(HeapRegion* hr) {
+ void set_region_eden(G1HeapRegion* hr) {
hr->set_eden();
hr->install_surv_rate_group(_eden_surv_rate_group);
}
- void set_region_survivor(HeapRegion* hr) {
+ void set_region_survivor(G1HeapRegion* hr) {
assert(hr->is_survivor(), "pre-condition");
hr->install_surv_rate_group(_survivor_surv_rate_group);
}
@@ -145,14 +145,14 @@ class G1Policy: public CHeapObj<mtGC> {
double predict_base_time_ms(size_t pending_cards, size_t card_rs_length, size_t code_root_length) const;
// Copy time for a region is copying live data.
- double predict_region_copy_time_ms(HeapRegion* hr, bool for_young_only_phase) const;
+ double predict_region_copy_time_ms(G1HeapRegion* hr, bool for_young_only_phase) const;
// Merge-scan time for a region is handling card-based remembered sets of that region
// (as a single unit).
- double predict_region_merge_scan_time(HeapRegion* hr, bool for_young_only_phase) const;
+ double predict_region_merge_scan_time(G1HeapRegion* hr, bool for_young_only_phase) const;
// Code root scan time prediction for the given region.
- double predict_region_code_root_scan_time(HeapRegion* hr, bool for_young_only_phase) const;
+ double predict_region_code_root_scan_time(G1HeapRegion* hr, bool for_young_only_phase) const;
// Non-copy time for a region is handling remembered sets and other time.
- double predict_region_non_copy_time_ms(HeapRegion* hr, bool for_young_only_phase) const;
+ double predict_region_non_copy_time_ms(G1HeapRegion* hr, bool for_young_only_phase) const;
public:
@@ -163,7 +163,7 @@ class G1Policy: public CHeapObj<mtGC> {
double predict_eden_copy_time_ms(uint count, size_t* bytes_to_copy = nullptr) const;
// Total time for a region is handling remembered sets (as a single unit), copying its live data
// and other time.
- double predict_region_total_time_ms(HeapRegion* hr, bool for_young_only_phase) const;
+ double predict_region_total_time_ms(G1HeapRegion* hr, bool for_young_only_phase) const;
void cset_regions_freed() {
bool update = should_update_surv_rate_group_predictors();
@@ -184,9 +184,10 @@ class G1Policy: public CHeapObj<mtGC> {
return _mmu_tracker->max_gc_time() * 1000.0;
}
+ G1CollectionSetCandidates* candidates() const;
+
private:
G1CollectionSet* _collection_set;
- G1CollectionSetCandidates* candidates() const;
double average_time_ms(G1GCPhaseTimes::GCParPhases phase) const;
double other_time_ms(double pause_time_ms) const;
@@ -244,7 +245,7 @@ class G1Policy: public CHeapObj<mtGC> {
// Limit the given desired young length to available free regions.
uint calculate_young_target_length(uint desired_young_length) const;
- size_t predict_bytes_to_copy(HeapRegion* hr) const;
+ size_t predict_bytes_to_copy(G1HeapRegion* hr) const;
double predict_survivor_regions_evac_time() const;
double predict_retained_regions_evac_time() const;
@@ -399,7 +400,7 @@ class G1Policy: public CHeapObj<mtGC> {
void record_concurrent_refinement_stats(size_t pending_cards,
size_t thread_buffer_cards);
- bool should_retain_evac_failed_region(HeapRegion* r) const {
+ bool should_retain_evac_failed_region(G1HeapRegion* r) const {
return should_retain_evac_failed_region(r->hrm_index());
}
bool should_retain_evac_failed_region(uint index) const;
diff --git a/src/hotspot/share/gc/g1/g1RegionToSpaceMapper.cpp b/src/hotspot/share/gc/g1/g1RegionToSpaceMapper.cpp
index 104b126c01674..5f903960cce8c 100644
--- a/src/hotspot/share/gc/g1/g1RegionToSpaceMapper.cpp
+++ b/src/hotspot/share/gc/g1/g1RegionToSpaceMapper.cpp
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2001, 2024, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2001, 2023, Oracle and/or its affiliates. All rights reserved.
* DO NOT ALTER OR REMOVE COPYRIGHT NOTICES OR THIS FILE HEADER.
*
* This code is free software; you can redistribute it and/or modify it
@@ -43,10 +43,12 @@ G1RegionToSpaceMapper::G1RegionToSpaceMapper(ReservedSpace rs,
MEMFLAGS type) :
_listener(nullptr),
_storage(rs, used_size, page_size),
- _region_commit_map(rs.size() * commit_factor / region_granularity, type),
+ _region_commit_map(rs.size() * commit_factor / region_granularity, mtGC),
_memory_type(type) {
guarantee(is_power_of_2(page_size), "must be");
guarantee(is_power_of_2(region_granularity), "must be");
+
+ MemTracker::record_virtual_memory_type((address)rs.base(), type);
}
// Used to manually signal a mapper to handle a set of regions as committed.
diff --git a/src/hotspot/share/gc/g1/g1RegionsOnNodes.cpp b/src/hotspot/share/gc/g1/g1RegionsOnNodes.cpp
index 88a4b9611dc2a..4a81b529e7b4a 100644
--- a/src/hotspot/share/gc/g1/g1RegionsOnNodes.cpp
+++ b/src/hotspot/share/gc/g1/g1RegionsOnNodes.cpp
@@ -36,7 +36,7 @@ G1RegionsOnNodes::~G1RegionsOnNodes() {
FREE_C_HEAP_ARRAY(uint, _count_per_node);
}
-uint G1RegionsOnNodes::add(HeapRegion* hr) {
+uint G1RegionsOnNodes::add(G1HeapRegion* hr) {
uint node_index = hr->node_index();
// Update only if the node index is valid.
diff --git a/src/hotspot/share/gc/g1/g1RegionsOnNodes.hpp b/src/hotspot/share/gc/g1/g1RegionsOnNodes.hpp
index 27f96e4977ef5..2ab4950a25c19 100644
--- a/src/hotspot/share/gc/g1/g1RegionsOnNodes.hpp
+++ b/src/hotspot/share/gc/g1/g1RegionsOnNodes.hpp
@@ -28,7 +28,7 @@
#include "memory/allocation.hpp"
class G1NUMA;
-class HeapRegion;
+class G1HeapRegion;
// Contains per node index region count
class G1RegionsOnNodes : public StackObj {
@@ -41,7 +41,7 @@ class G1RegionsOnNodes : public StackObj {
~G1RegionsOnNodes();
// Increases _count_per_node for the node of the given heap region and returns the node index.
- uint add(HeapRegion* hr);
+ uint add(G1HeapRegion* hr);
void clear();
diff --git a/src/hotspot/share/gc/g1/g1RemSet.cpp b/src/hotspot/share/gc/g1/g1RemSet.cpp
index 65aa0ff18ab0c..39c9c2c7a30e9 100644
--- a/src/hotspot/share/gc/g1/g1RemSet.cpp
+++ b/src/hotspot/share/gc/g1/g1RemSet.cpp
@@ -42,10 +42,10 @@
#include "gc/g1/g1Policy.hpp"
#include "gc/g1/g1RootClosures.hpp"
#include "gc/g1/g1RemSet.hpp"
-#include "gc/g1/g1_globals.hpp"
#include "gc/shared/bufferNode.hpp"
#include "gc/shared/bufferNodeList.hpp"
#include "gc/shared/gcTraceTime.inline.hpp"
+#include "gc/shared/gc_globals.hpp"
#include "jfr/jfrEvents.hpp"
#include "memory/iterator.hpp"
#include "memory/resourceArea.hpp"
@@ -92,7 +92,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
size_t _max_reserved_regions;
// Card table iteration claim for each heap region, from 0 (completely unscanned)
- // to (>=) HeapRegion::CardsPerRegion (completely scanned).
+ // to (>=) G1HeapRegion::CardsPerRegion (completely scanned).
uint volatile* _card_table_scan_state;
uint _scan_chunks_per_region; // Number of chunks per region.
@@ -223,7 +223,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
return AlmostNoWork;
}
- double num_cards = num_regions << HeapRegion::LogCardsPerRegion;
+ double num_cards = num_regions << G1HeapRegion::LogCardsPerRegion;
return ceil(num_cards / num_cards_per_worker);
}
@@ -235,14 +235,14 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
}
void do_work(uint worker_id) override {
- const uint num_regions_per_worker = num_cards_per_worker / (uint)HeapRegion::CardsPerRegion;
+ const uint num_regions_per_worker = num_cards_per_worker / (uint)G1HeapRegion::CardsPerRegion;
while (_cur_dirty_regions < _regions->size()) {
uint next = Atomic::fetch_then_add(&_cur_dirty_regions, num_regions_per_worker);
uint max = MIN2(next + num_regions_per_worker, _regions->size());
for (uint i = next; i < max; i++) {
- HeapRegion* r = _g1h->region_at(_regions->at(i));
+ G1HeapRegion* r = _g1h->region_at(_regions->at(i));
r->clear_cardtable();
}
}
@@ -276,7 +276,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
_num_total_scan_chunks = max_reserved_regions * _scan_chunks_per_region;
_region_scan_chunks = NEW_C_HEAP_ARRAY(bool, _num_total_scan_chunks, mtGC);
- _scan_chunks_shift = (uint8_t)log2i(HeapRegion::CardsPerRegion / _scan_chunks_per_region);
+ _scan_chunks_shift = (uint8_t)log2i(G1HeapRegion::CardsPerRegion / _scan_chunks_per_region);
_scan_top = NEW_C_HEAP_ARRAY(HeapWord*, max_reserved_regions, mtGC);
}
@@ -317,7 +317,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
// - are located in free regions
// as we do not clean up remembered sets before merging heap roots.
bool contains_cards_to_process(uint const region_idx) const {
- HeapRegion* hr = G1CollectedHeap::heap()->region_at_or_null(region_idx);
+ G1HeapRegion* hr = G1CollectedHeap::heap()->region_at_or_null(region_idx);
return (hr != nullptr && !hr->in_collection_set() && hr->is_old_or_humongous());
}
@@ -328,11 +328,11 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
result++;
}
}
- return result * (HeapRegion::CardsPerRegion / _scan_chunks_per_region);
+ return result * (G1HeapRegion::CardsPerRegion / _scan_chunks_per_region);
}
size_t num_cards_in_dirty_regions() const {
- return _next_dirty_regions->size() * HeapRegion::CardsPerRegion;
+ return _next_dirty_regions->size() * G1HeapRegion::CardsPerRegion;
}
void set_chunk_range_dirty(size_t const region_card_idx, size_t const card_length) {
@@ -392,7 +392,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
bool has_cards_to_scan(uint region) {
assert(region < _max_reserved_regions, "Tried to access invalid region %u", region);
- return _card_table_scan_state[region] < HeapRegion::CardsPerRegion;
+ return _card_table_scan_state[region] < G1HeapRegion::CardsPerRegion;
}
uint claim_cards_to_scan(uint region, uint increment) {
@@ -402,7 +402,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
void add_dirty_region(uint const region) {
#ifdef ASSERT
- HeapRegion* hr = G1CollectedHeap::heap()->region_at(region);
+ G1HeapRegion* hr = G1CollectedHeap::heap()->region_at(region);
assert(!hr->in_collection_set() && hr->is_old_or_humongous(),
"Region %u is not suitable for scanning, is %sin collection set or %s",
hr->hrm_index(), hr->in_collection_set() ? "" : "not ", hr->get_short_type_str());
@@ -412,7 +412,7 @@ class G1RemSetScanState : public CHeapObj<mtGC> {
void add_all_dirty_region(uint region) {
#ifdef ASSERT
- HeapRegion* hr = G1CollectedHeap::heap()->region_at(region);
+ G1HeapRegion* hr = G1CollectedHeap::heap()->region_at(region);
assert(hr->in_collection_set(),
"Only add collection set regions to all dirty regions directly but %u is %s",
hr->hrm_index(), hr->get_short_type_str());
@@ -461,13 +461,13 @@ class G1CardTableChunkClaimer {
_scan_state(scan_state),
_region_idx(region_idx),
_cur_claim(0) {
- guarantee(size() <= HeapRegion::CardsPerRegion, "Should not claim more space than possible.");
+ guarantee(size() <= G1HeapRegion::CardsPerRegion, "Should not claim more space than possible.");
}
bool has_next() {
while (true) {
_cur_claim = _scan_state->claim_cards_to_scan(_region_idx, size());
- if (_cur_claim >= HeapRegion::CardsPerRegion) {
+ if (_cur_claim >= G1HeapRegion::CardsPerRegion) {
return false;
}
if (_scan_state->chunk_needs_scan(_region_idx, _cur_claim)) {
@@ -509,7 +509,7 @@ class G1ScanHRForRegionClosure : public HeapRegionClosure {
CardValue _scanned_card_value;
HeapWord* scan_memregion(uint region_idx_for_card, MemRegion mr) {
- HeapRegion* const card_region = _g1h->region_at(region_idx_for_card);
+ G1HeapRegion* const card_region = _g1h->region_at(region_idx_for_card);
G1ScanCardClosure card_cl(_g1h, _pss, _heap_roots_found);
HeapWord* const scanned_to = card_region->oops_on_memregion_seq_iterate_careful(mr, &card_cl);
@@ -639,7 +639,7 @@ class G1ScanHRForRegionClosure : public HeapRegionClosure {
}
};
- void scan_heap_roots(HeapRegion* r) {
+ void scan_heap_roots(G1HeapRegion* r) {
uint const region_idx = r->hrm_index();
ResourceMark rm;
@@ -655,7 +655,7 @@ class G1ScanHRForRegionClosure : public HeapRegionClosure {
while (claim.has_next()) {
_chunks_claimed++;
- size_t const region_card_base_idx = ((size_t)region_idx << HeapRegion::LogCardsPerRegion) + claim.value();
+ size_t const region_card_base_idx = ((size_t)region_idx << G1HeapRegion::LogCardsPerRegion) + claim.value();
CardValue* const start_card = _ct->byte_for_index(region_card_base_idx);
CardValue* const end_card = start_card + claim.size();
@@ -690,7 +690,7 @@ class G1ScanHRForRegionClosure : public HeapRegionClosure {
: G1CardTable::clean_card_val()) {
}
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
assert(!r->in_collection_set() && r->is_old_or_humongous(),
"Should only be called on old gen non-collection set regions but region %u is not.",
r->hrm_index());
@@ -776,7 +776,7 @@ class G1ScanCollectionSetRegionClosure : public HeapRegionClosure {
Tickspan _rem_set_opt_root_scan_time;
Tickspan _rem_set_opt_trim_partially_time;
- void scan_opt_rem_set_roots(HeapRegion* r) {
+ void scan_opt_rem_set_roots(G1HeapRegion* r) {
G1OopStarChunkedList* opt_rem_set_list = _pss->oops_into_optional_region(r);
G1ScanCardClosure scan_cl(G1CollectedHeap::heap(), _pss, _opt_roots_scanned);
@@ -805,7 +805,7 @@ class G1ScanCollectionSetRegionClosure : public HeapRegionClosure {
_rem_set_opt_root_scan_time(),
_rem_set_opt_trim_partially_time() { }
- bool do_heap_region(HeapRegion* r) {
+ bool do_heap_region(G1HeapRegion* r) {
// The individual references for the optional remembered set are per-worker, so we
// always need to scan them.
if (r->has_index_in_opt_cset()) {
@@ -880,7 +880,7 @@ void G1RemSet::assert_scan_top_is_null(uint hrm_index) {
}
#endif
-void G1RemSet::prepare_region_for_scan(HeapRegion* r) {
+void G1RemSet::prepare_region_for_scan(G1HeapRegion* r) {
uint hrm_index = r->hrm_index();
r->prepare_remset_for_scan();
@@ -1044,7 +1044,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
bool start_iterate(uint const tag, uint const region_idx) {
assert(tag < G1GCPhaseTimes::MergeRSCards, "invalid tag %u", tag);
if (remember_if_interesting(region_idx)) {
- _region_base_idx = (size_t)region_idx << HeapRegion::LogCardsPerRegion;
+ _region_base_idx = (size_t)region_idx << G1HeapRegion::LogCardsPerRegion;
_stats.inc_card_set_merged(tag);
return true;
}
@@ -1071,7 +1071,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
// pass) or the "dirty" list will be merged with the "all" list later otherwise.
// (And there is no problem either way if the region does not contain dirty
// cards).
- void merge_card_set_for_region(HeapRegion* r) {
+ void merge_card_set_for_region(G1HeapRegion* r) {
assert(r->in_collection_set() || r->is_starts_humongous(), "must be");
HeapRegionRemSet* rem_set = r->rem_set();
@@ -1080,7 +1080,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
}
}
- virtual bool do_heap_region(HeapRegion* r) {
+ virtual bool do_heap_region(G1HeapRegion* r) {
assert(r->in_collection_set(), "must be");
_scan_state->add_all_dirty_region(r->hrm_index());
@@ -1101,12 +1101,12 @@ class G1MergeHeapRootsTask : public WorkerTask {
class G1ClearBitmapClosure : public HeapRegionClosure {
G1CollectedHeap* _g1h;
- void assert_bitmap_clear(HeapRegion* hr, const G1CMBitMap* bitmap) {
+ void assert_bitmap_clear(G1HeapRegion* hr, const G1CMBitMap* bitmap) {
assert(bitmap->get_next_marked_addr(hr->bottom(), hr->end()) == hr->end(),
"Bitmap should have no mark for region %u (%s)", hr->hrm_index(), hr->get_short_type_str());
}
- bool should_clear_region(HeapRegion* hr) const {
+ bool should_clear_region(G1HeapRegion* hr) const {
// The bitmap for young regions must obviously be clear as we never mark through them;
// old regions that are currently being marked through are only in the collection set
// after the concurrent cycle completed, so their bitmaps must also be clear except when
@@ -1126,7 +1126,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
public:
G1ClearBitmapClosure(G1CollectedHeap* g1h) : _g1h(g1h) { }
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
assert(_g1h->is_in_cset(hr), "Should only be used iterating the collection set");
// Evacuation failure uses the bitmap to record evacuation failed objects,
@@ -1152,7 +1152,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
_closure1(cl1),
_closure2(cl2) { }
- bool do_heap_region(HeapRegion* hr) {
+ bool do_heap_region(G1HeapRegion* hr) {
return _closure1->do_heap_region(hr) ||
_closure2->do_heap_region(hr);
}
@@ -1173,7 +1173,7 @@ class G1MergeHeapRootsTask : public WorkerTask {
return false;
}
- HeapRegion* r = g1h->region_at(region_index);
+ G1HeapRegion* r = g1h->region_at(region_index);
assert(r->rem_set()->is_complete(), "humongous candidates must have complete remset");
@@ -1413,7 +1413,7 @@ void G1RemSet::print_merge_heap_roots_stats() {
G1CollectedHeap* g1h = G1CollectedHeap::heap();
size_t total_old_region_cards =
- (g1h->num_regions() - (g1h->num_free_regions() - g1h->collection_set()->cur_length())) * HeapRegion::CardsPerRegion;
+ (g1h->num_regions() - (g1h->num_free_regions() - g1h->collection_set()->cur_length())) * G1HeapRegion::CardsPerRegion;
ls.print_cr("Visited cards " SIZE_FORMAT " Total dirty " SIZE_FORMAT " (%.2lf%%) Total old " SIZE_FORMAT " (%.2lf%%)",
num_visited_cards,
@@ -1496,7 +1496,7 @@ bool G1RemSet::clean_card_before_refine(CardValue** const card_ptr_addr) {
// Find the start address represented by the card.
HeapWord* start = _ct->addr_for(card_ptr);
// And find the region containing it.
- HeapRegion* r = _g1h->heap_region_containing_or_null(start);
+ G1HeapRegion* r = _g1h->heap_region_containing_or_null(start);
// If this is a (stale) card into an uncommitted region, exit.
if (r == nullptr) {
@@ -1564,13 +1564,13 @@ bool G1RemSet::clean_card_before_refine(CardValue** const card_ptr_addr) {
void G1RemSet::refine_card_concurrently(CardValue* const card_ptr,
const uint worker_id) {
- assert(!_g1h->is_gc_active(), "Only call concurrently");
+ assert(!_g1h->is_stw_gc_active(), "Only call concurrently");
check_card_ptr(card_ptr, _ct);
// Construct the MemRegion representing the card.
HeapWord* start = _ct->addr_for(card_ptr);
// And find the region containing it.
- HeapRegion* r = _g1h->heap_region_containing(start);
+ G1HeapRegion* r = _g1h->heap_region_containing(start);
// This reload of the top is safe even though it happens after the full
// fence, because top is stable for old and unfiltered humongous
// regions, so it must return the same value as the previous load when
diff --git a/src/hotspot/share/gc/g1/g1RemSet.hpp b/src/hotspot/share/gc/g1/g1RemSet.hpp
index 260fc728923f5..65eabc312a6c5 100644
--- a/src/hotspot/share/gc/g1/g1RemSet.hpp
+++ b/src/hotspot/share/gc/g1/g1RemSet.hpp
@@ -108,7 +108,7 @@ class G1RemSet: public CHeapObj<mtGC> {