diff --git a/AUTHORS b/AUTHORS
index b2a6b3fc45..6d2ae6378b 100644
--- a/AUTHORS
+++ b/AUTHORS
@@ -47,7 +47,7 @@ Ahmy Yulrizka
Aiden Scandella
Ainar Garipov
Aishraj Dahal
-Akhil Indurti
+Akhil Indurti
Akihiro Suda
Akshat Kumar
Alan Shreve
diff --git a/CONTRIBUTORS b/CONTRIBUTORS
index 12e3b16f67..2ba9916b8b 100644
--- a/CONTRIBUTORS
+++ b/CONTRIBUTORS
@@ -67,7 +67,7 @@ Ahsun Ahmed
Aiden Scandella
Ainar Garipov
Aishraj Dahal
-Akhil Indurti
+Akhil Indurti
Akihiro Suda
Akshat Kumar
Al Cutter
diff --git a/doc/gccgo_install.html b/doc/gccgo_install.html
index a974bb3680..5b026ba57e 100644
--- a/doc/gccgo_install.html
+++ b/doc/gccgo_install.html
@@ -5,8 +5,8 @@
This document explains how to use gccgo, a compiler for
-the Go language. The gccgo compiler is a new frontend
-for GCC, the widely used GNU compiler. Although the
+the Go language. The gccgo compiler is a new frontend
+for GCC, the widely used GNU compiler. Although the
frontend itself is under a BSD-style license, gccgo is
normally used as part of GCC and is then covered by
the GNU General Public
@@ -24,10 +24,10 @@ Releases
The simplest way to install gccgo is to install a GCC binary release
-built to include Go support. GCC binary releases are available from
+built to include Go support. GCC binary releases are available from
various
websites and are typically included as part of GNU/Linux
-distributions. We expect that most people who build these binaries
+distributions. We expect that most people who build these binaries
will include Go support.
@@ -38,7 +38,7 @@ Releases
Due to timing, the GCC 4.8.0 and 4.8.1 releases are close to but not
-identical to Go 1.1. The GCC 4.8.2 release includes a complete Go
+identical to Go 1.1. The GCC 4.8.2 release includes a complete Go
1.1.2 implementation.
@@ -48,28 +48,32 @@ Releases
The GCC 5 releases include a complete implementation of the Go 1.4
-user libraries. The Go 1.4 runtime is not fully merged, but that
+user libraries. The Go 1.4 runtime is not fully merged, but that
should not be visible to Go programs.
The GCC 6 releases include a complete implementation of the Go 1.6.1
-user libraries. The Go 1.6 runtime is not fully merged, but that
+user libraries. The Go 1.6 runtime is not fully merged, but that
should not be visible to Go programs.
The GCC 7 releases include a complete implementation of the Go 1.8.1
-user libraries. As with earlier releases, the Go 1.8 runtime is not
+user libraries. As with earlier releases, the Go 1.8 runtime is not
fully merged, but that should not be visible to Go programs.
-The GCC 8 releases are expected to include a complete implementation
-of the Go 1.10 release, depending on release timing. The Go 1.10
-runtime has now been fully merged into the GCC development sources,
-and concurrent garbage collection is expected to be fully supported in
-GCC 8.
+The GCC 8 releases include a complete implementation of the Go 1.10.1
+release. The Go 1.10 runtime has now been fully merged into the GCC
+development sources, and concurrent garbage collection is fully
+supported.
+
+
+
+The GCC 9 releases include a complete implementation of the Go 1.12.2
+release.
Source code
@@ -77,10 +81,10 @@ Source code
If you cannot use a release, or prefer to build gccgo for
yourself,
-the gccgo source code is accessible via Subversion. The
+the gccgo source code is accessible via Subversion. The
GCC web site
has instructions for getting the
-GCC source code. The gccgo source code is included. As a
+GCC source code
. The gccgo source code is included. As a
convenience, a stable version of the Go support is available in
a branch of the main GCC code
repository: svn://gcc.gnu.org/svn/gcc/branches/gccgo
.
@@ -90,7 +94,7 @@
Source code
Note that although gcc.gnu.org
is the most convenient way
to get the source code for the Go frontend, it is not where the master
-sources live. If you want to contribute changes to the Go frontend
+sources live. If you want to contribute changes to the Go frontend
compiler, see Contributing to
gccgo.
@@ -100,16 +104,16 @@ Building
Building gccgo is just like building GCC
-with one or two additional options. See
+with one or two additional options. See
the instructions on the gcc web
-site. When you run configure
, add the
+site. When you run configure
, add the
option --enable-languages=c,c++,go
(along with other
-languages you may want to build). If you are targeting a 32-bit x86,
+languages you may want to build). If you are targeting a 32-bit x86,
then you will want to build gccgo to default to
supporting locked compare and exchange instructions; do this by also
using the configure
option --with-arch=i586
(or a newer architecture, depending on where you need your programs to
-run). If you are targeting a 64-bit x86, but sometimes want to use
+run). If you are targeting a 64-bit x86, but sometimes want to use
the -m32
option, then use the configure
option --with-arch-32=i586
.
@@ -118,18 +122,18 @@ Gold
On x86 GNU/Linux systems the gccgo compiler is able to
-use a small discontiguous stack for goroutines. This permits programs
+use a small discontiguous stack for goroutines. This permits programs
to run many more goroutines, since each goroutine can use a relatively
-small stack. Doing this requires using the gold linker version 2.22
-or later. You can either install GNU binutils 2.22 or later, or you
+small stack. Doing this requires using the gold linker version 2.22
+or later. You can either install GNU binutils 2.22 or later, or you
can build gold yourself.
To build gold yourself, build the GNU binutils,
using --enable-gold=default
when you run
-the configure
script. Before building, you must install
-the flex and bison packages. A typical sequence would look like
+the configure
script. Before building, you must install
+the flex and bison packages. A typical sequence would look like
this (you can replace /opt/gold
with any directory to
which you have write access):
@@ -157,7 +161,7 @@ Prerequisites
A number of prerequisites are required to build GCC, as
described on
the gcc web
-site. It is important to install all the prerequisites before
+site. It is important to install all the prerequisites before
running the gcc configure
script.
The prerequisite libraries can be conveniently downloaded using the
script contrib/download_prerequisites
in the GCC sources.
@@ -183,7 +187,7 @@ Build commands
Using gccgo
-The gccgo compiler works like other gcc frontends. As of GCC 5 the gccgo
+The gccgo compiler works like other gcc frontends. As of GCC 5 the gccgo
installation also includes a version of the go
command,
which may be used to build Go programs as described at
https://golang.org/cmd/go.
@@ -208,7 +212,7 @@
Using gccgo
To run the resulting file, you will need to tell the program where to
-find the compiled Go packages. There are a few ways to do this:
+find the compiled Go packages. There are a few ways to do this:
@@ -226,11 +230,11 @@ Using gccgo
Here ${prefix}
is the --prefix
option used
-when building gccgo. For a binary install this is
-normally /usr
. Whether to use lib
+when building gccgo. For a binary install this is
+normally /usr
. Whether to use lib
or lib64
depends on the target.
Typically lib64
is correct for x86_64 systems,
-and lib
is correct for other systems. The idea is to
+and lib
is correct for other systems. The idea is to
name the directory where libgo.so
is found.
@@ -325,9 +329,9 @@ Imports
The gccgo compiler will look in the current
-directory for import files. In more complex scenarios you
+directory for import files. In more complex scenarios you
may pass the -I
or -L
option to
-gccgo. Both options take directories to search. The
+gccgo. Both options take directories to search. The
-L
option is also passed to the linker.
@@ -348,11 +352,11 @@ Debugging
If you use the -g
option when you compile, you can run
-gdb
on your executable. The debugger has only limited
-knowledge about Go. You can set breakpoints, single-step,
-etc. You can print variables, but they will be printed as though they
-had C/C++ types. For numeric types this doesn't matter. Go strings
-and interfaces will show up as two-element structures. Go
+gdb
on your executable. The debugger has only limited
+knowledge about Go. You can set breakpoints, single-step,
+etc. You can print variables, but they will be printed as though they
+had C/C++ types. For numeric types this doesn't matter. Go strings
+and interfaces will show up as two-element structures. Go
maps and channels are always represented as C pointers to run-time
structures.
@@ -399,7 +403,7 @@ Types
-A slice in Go is a structure. The current definition is
+A slice in Go is a structure. The current definition is
(this is subject to change):
@@ -413,15 +417,15 @@ Types
The type of a Go function is a pointer to a struct (this is
-subject to change). The first field in the
+subject to change). The first field in the
struct points to the code of the function, which will be equivalent to
a pointer to a C function whose parameter types are equivalent, with
-an additional trailing parameter. The trailing parameter is the
+an additional trailing parameter. The trailing parameter is the
closure, and the argument to pass is a pointer to the Go function
struct.
When a Go function returns more than one value, the C function returns
-a struct. For example, these functions are roughly equivalent:
+a struct. For example, these functions are roughly equivalent:
@@ -458,7 +462,7 @@ Function names
Go code can call C functions directly using a Go extension implemented
in gccgo: a function declaration may be preceded by
-//extern NAME
. For example, here is how the C function
+//extern NAME
. For example, here is how the C function
open
can be declared in Go:
@@ -518,11 +522,11 @@
Compile your C code as usual, and add the option
--fdump-go-spec=FILENAME
. This will create the
+-fdump-go-spec=FILENAME
. This will create the
file FILENAME
as a side effect of the
-compilation. This file will contain Go declarations for the types,
-variables and functions declared in the C code. C types that can not
-be represented in Go will be recorded as comments in the Go code. The
+compilation. This file will contain Go declarations for the types,
+variables and functions declared in the C code. C types that can not
+be represented in Go will be recorded as comments in the Go code. The
generated file will not have a package
declaration, but
can otherwise be compiled directly by gccgo.
diff --git a/doc/go1.14.html b/doc/go1.14.html
new file mode 100644
index 0000000000..525a1421f7
--- /dev/null
+++ b/doc/go1.14.html
@@ -0,0 +1,140 @@
+
+
+
+
+
+
+DRAFT RELEASE NOTES — Introduction to Go 1.14
+
+
+
+ Go 1.14 is not yet released. These are work-in-progress
+ release notes. Go 1.14 is expected to be released in February 2020.
+
+
+
+Changes to the language
+
+
+TODO
+
+
+Ports
+
+
+TODO
+
+
+Native Client (NaCl)
+
+
+ As announced in the Go 1.13 release notes,
+ Go 1.14 drops support for the Native Client platform (GOOS=nacl
).
+
+
+
+
+
+TODO
+
+
+Go command
+
+
+ The go
command now includes snippets of plain-text error messages
+ from module proxies and other HTTP servers.
+ An error message will only be shown if it is valid UTF-8 and consists of only
+ graphic characters and spaces.
+
+
+Runtime
+
+
+TODO
+
+
+
+Core library
+
+
+TODO
+
+
+
- bytes/hash
+ -
+
+ TODO: https://golang.org/cl/186877: add hashing package for bytes and strings
+
+
+
+
+- crypto/tls
+ -
+
+ TODO: https://golang.org/cl/191976: remove SSLv3 support
+
+
+
+ TODO: https://golang.org/cl/191999: remove TLS 1.3 opt-out
+
+
+
+
+- encoding/asn1
+ -
+
+ TODO: https://golang.org/cl/126624: handle ASN1's string type BMPString
+
+
+
+
+- mime
+ -
+
+ TODO: https://golang.org/cl/186927: update type of .js and .mjs files to text/javascript
+
+
+
+
+- plugin
+ -
+
+ TODO: https://golang.org/cl/191617: add freebsd/amd64 plugin support
+
+
+
+
+- runtime
+ -
+
+ TODO: https://golang.org/cl/187739: treat CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, CTRL_SHUTDOWN_EVENT as SIGTERM on Windows
+
+
+
+ TODO: https://golang.org/cl/188297: don't forward SIGPIPE on macOS
+
+
+
+
+Minor changes to the library
+
+
+ As always, there are various minor changes and updates to the library,
+ made with the Go 1 promise of compatibility
+ in mind.
+
+
+
+TODO
+
diff --git a/src/cmd/asm/internal/arch/arch.go b/src/cmd/asm/internal/arch/arch.go
index f6c6c88f64..5d1f9a5326 100644
--- a/src/cmd/asm/internal/arch/arch.go
+++ b/src/cmd/asm/internal/arch/arch.go
@@ -88,6 +88,14 @@ func jumpX86(word string) bool {
return word[0] == 'J' || word == "CALL" || strings.HasPrefix(word, "LOOP") || word == "XBEGIN"
}
+func jumpRISCV(word string) bool {
+ switch word {
+ case "BEQ", "BNE", "BLT", "BGE", "BLTU", "BGEU", "CALL", "JAL", "JALR", "JMP":
+ return true
+ }
+ return false
+}
+
func jumpWasm(word string) bool {
return word == "JMP" || word == "CALL" || word == "Call" || word == "Br" || word == "BrIf"
}
@@ -519,34 +527,114 @@ func archMips64(linkArch *obj.LinkArch) *Arch {
}
}
-var riscvJumps = map[string]bool{
- "BEQ": true,
- "BNE": true,
- "BLT": true,
- "BGE": true,
- "BLTU": true,
- "BGEU": true,
- "CALL": true,
- "JAL": true,
- "JALR": true,
- "JMP": true,
-}
-
func archRISCV64() *Arch {
+ register := make(map[string]int16)
+
+ // Standard register names.
+ for i := riscv.REG_X0; i <= riscv.REG_X31; i++ {
+ name := fmt.Sprintf("X%d", i-riscv.REG_X0)
+ register[name] = int16(i)
+ }
+ for i := riscv.REG_F0; i <= riscv.REG_F31; i++ {
+ name := fmt.Sprintf("F%d", i-riscv.REG_F0)
+ register[name] = int16(i)
+ }
+
+ // General registers with ABI names.
+ register["ZERO"] = riscv.REG_ZERO
+ register["RA"] = riscv.REG_RA
+ register["SP"] = riscv.REG_SP
+ register["GP"] = riscv.REG_GP
+ register["TP"] = riscv.REG_TP
+ register["T0"] = riscv.REG_T0
+ register["T1"] = riscv.REG_T1
+ register["T2"] = riscv.REG_T2
+ register["S0"] = riscv.REG_S0
+ register["S1"] = riscv.REG_S1
+ register["A0"] = riscv.REG_A0
+ register["A1"] = riscv.REG_A1
+ register["A2"] = riscv.REG_A2
+ register["A3"] = riscv.REG_A3
+ register["A4"] = riscv.REG_A4
+ register["A5"] = riscv.REG_A5
+ register["A6"] = riscv.REG_A6
+ register["A7"] = riscv.REG_A7
+ register["S2"] = riscv.REG_S2
+ register["S3"] = riscv.REG_S3
+ register["S4"] = riscv.REG_S4
+ register["S5"] = riscv.REG_S5
+ register["S6"] = riscv.REG_S6
+ register["S7"] = riscv.REG_S7
+ register["S8"] = riscv.REG_S8
+ register["S9"] = riscv.REG_S9
+ register["S10"] = riscv.REG_S10
+ register["S11"] = riscv.REG_S11
+ register["T3"] = riscv.REG_T3
+ register["T4"] = riscv.REG_T4
+ register["T5"] = riscv.REG_T5
+ register["T6"] = riscv.REG_T6
+
+ // Go runtime register names.
+ register["g"] = riscv.REG_G
+ register["CTXT"] = riscv.REG_CTXT
+ register["TMP"] = riscv.REG_TMP
+
+	// ABI names for floating point registers.
+ register["FT0"] = riscv.REG_FT0
+ register["FT1"] = riscv.REG_FT1
+ register["FT2"] = riscv.REG_FT2
+ register["FT3"] = riscv.REG_FT3
+ register["FT4"] = riscv.REG_FT4
+ register["FT5"] = riscv.REG_FT5
+ register["FT6"] = riscv.REG_FT6
+ register["FT7"] = riscv.REG_FT7
+ register["FS0"] = riscv.REG_FS0
+ register["FS1"] = riscv.REG_FS1
+ register["FA0"] = riscv.REG_FA0
+ register["FA1"] = riscv.REG_FA1
+ register["FA2"] = riscv.REG_FA2
+ register["FA3"] = riscv.REG_FA3
+ register["FA4"] = riscv.REG_FA4
+ register["FA5"] = riscv.REG_FA5
+ register["FA6"] = riscv.REG_FA6
+ register["FA7"] = riscv.REG_FA7
+ register["FS2"] = riscv.REG_FS2
+ register["FS3"] = riscv.REG_FS3
+ register["FS4"] = riscv.REG_FS4
+ register["FS5"] = riscv.REG_FS5
+ register["FS6"] = riscv.REG_FS6
+ register["FS7"] = riscv.REG_FS7
+ register["FS8"] = riscv.REG_FS8
+ register["FS9"] = riscv.REG_FS9
+ register["FS10"] = riscv.REG_FS10
+ register["FS11"] = riscv.REG_FS11
+ register["FT8"] = riscv.REG_FT8
+ register["FT9"] = riscv.REG_FT9
+ register["FT10"] = riscv.REG_FT10
+ register["FT11"] = riscv.REG_FT11
+
// Pseudo-registers.
- riscv.Registers["SB"] = RSB
- riscv.Registers["FP"] = RFP
- riscv.Registers["PC"] = RPC
+ register["SB"] = RSB
+ register["FP"] = RFP
+ register["PC"] = RPC
+
+ instructions := make(map[string]obj.As)
+ for i, s := range obj.Anames {
+ instructions[s] = obj.As(i)
+ }
+ for i, s := range riscv.Anames {
+ if obj.As(i) >= obj.A_ARCHSPECIFIC {
+ instructions[s] = obj.As(i) + obj.ABaseRISCV
+ }
+ }
return &Arch{
LinkArch: &riscv.LinkRISCV64,
- Instructions: riscv.Instructions,
- Register: riscv.Registers,
+ Instructions: instructions,
+ Register: register,
RegisterPrefix: nil,
RegisterNumber: nilRegisterNumber,
- IsJump: func(s string) bool {
- return riscvJumps[s]
- },
+ IsJump: jumpRISCV,
}
}
diff --git a/src/cmd/asm/internal/asm/testdata/riscvenc.s b/src/cmd/asm/internal/asm/testdata/riscvenc.s
index 629ca3c67c..76b3df8cfc 100644
--- a/src/cmd/asm/internal/asm/testdata/riscvenc.s
+++ b/src/cmd/asm/internal/asm/testdata/riscvenc.s
@@ -6,6 +6,10 @@
TEXT asmtest(SB),DUPOK|NOSPLIT,$0
start:
+ // Arbitrary bytes (entered in little-endian mode)
+ WORD $0x12345678 // WORD $305419896 // 78563412
+ WORD $0x9abcdef0 // WORD $2596069104 // f0debc9a
+
ADD T1, T0, T2 // b3836200
ADD T0, T1 // 33035300
ADD $2047, T0, T1 // 1383f27f
@@ -117,11 +121,6 @@ start:
SEQZ A5, A5 // 93b71700
SNEZ A5, A5 // b337f000
- // Arbitrary bytes (entered in little-endian mode)
- WORD $0x12345678 // WORD $305419896 // 78563412
- WORD $0x9abcdef0 // WORD $2596069104 // f0debc9a
-
-
// M extension
MUL T0, T1, T2 // b3035302
MULH T0, T1, T2 // b3135302
diff --git a/src/cmd/asm/internal/asm/testdata/s390x.s b/src/cmd/asm/internal/asm/testdata/s390x.s
index c9f4d69736..62563d885e 100644
--- a/src/cmd/asm/internal/asm/testdata/s390x.s
+++ b/src/cmd/asm/internal/asm/testdata/s390x.s
@@ -33,12 +33,12 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16-
MOVWBR (R15), R9 // e390f000001e
MOVD R1, n-8(SP) // e310f0100024
- MOVW R2, n-8(SP) // e320f0100050
- MOVH R3, n-8(SP) // e330f0100070
- MOVB R4, n-8(SP) // e340f0100072
- MOVWZ R5, n-8(SP) // e350f0100050
- MOVHZ R6, n-8(SP) // e360f0100070
- MOVBZ R7, n-8(SP) // e370f0100072
+ MOVW R2, n-8(SP) // 5020f010
+ MOVH R3, n-8(SP) // 4030f010
+ MOVB R4, n-8(SP) // 4240f010
+ MOVWZ R5, n-8(SP) // 5050f010
+ MOVHZ R6, n-8(SP) // 4060f010
+ MOVBZ R7, n-8(SP) // 4270f010
MOVDBR R8, n-8(SP) // e380f010002f
MOVWBR R9, n-8(SP) // e390f010003e
@@ -58,6 +58,20 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16-
MOVB $-128, -524288(R5) // eb8050008052
MOVB $1, -524289(R6) // c0a1fff7ffff41aa60009201a000
+	// RX (12-bit displacement) and RXY (20-bit displacement) instruction encoding (e.g. ST vs STY)
+ MOVW R1, 4095(R2)(R3) // 50132fff
+ MOVW R1, 4096(R2)(R3) // e31320000150
+ MOVWZ R1, 4095(R2)(R3) // 50132fff
+ MOVWZ R1, 4096(R2)(R3) // e31320000150
+ MOVH R1, 4095(R2)(R3) // 40132fff
+ MOVHZ R1, 4095(R2)(R3) // 40132fff
+ MOVH R1, 4096(R2)(R3) // e31320000170
+ MOVHZ R1, 4096(R2)(R3) // e31320000170
+ MOVB R1, 4095(R2)(R3) // 42132fff
+ MOVBZ R1, 4095(R2)(R3) // 42132fff
+ MOVB R1, 4096(R2)(R3) // e31320000172
+ MOVBZ R1, 4096(R2)(R3) // e31320000172
+
ADD R1, R2 // b9e81022
ADD R1, R2, R3 // b9e81032
ADD $8192, R1 // a71b2000
@@ -95,6 +109,7 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16-
MULHD R7, R2, R1 // b90400b2b98600a7ebb7003f000ab98000b2b90900abebb2003f000ab98000b7b9e9b01a
MULHDU R3, R4 // b90400b4b98600a3b904004a
MULHDU R5, R6, R7 // b90400b6b98600a5b904007a
+ MLGR R1, R2 // b9860021
DIVD R1, R2 // b90400b2b90d00a1b904002b
DIVD R1, R2, R3 // b90400b2b90d00a1b904003b
DIVW R4, R5 // b90400b5b91d00a4b904005b
@@ -300,10 +315,22 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16-
FMOVS $0, F11 // b37400b0
FMOVD $0, F12 // b37500c0
- FMOVS (R1)(R2*1), F0 // ed0210000064
- FMOVS n-8(SP), F15 // edf0f0100064
- FMOVD -9999999(R8)(R9*1), F8 // c0a1ff67698141aa9000ed8a80000065
+ FMOVS (R1)(R2*1), F0 // 78021000
+ FMOVS n-8(SP), F15 // 78f0f010
+ FMOVD -9999999(R8)(R9*1), F8 // c0a1ff67698141aa9000688a8000
FMOVD F4, F5 // 2854
+
+ // RX (12-bit displacement) and RXY (20-bit displacement) instruction encoding (e.g. LD vs LDY)
+ FMOVD (R1), F0 // 68001000
+ FMOVD 4095(R2), F13 // 68d02fff
+ FMOVD 4096(R2), F15 // edf020000165
+ FMOVS 4095(R2)(R3), F13 // 78d32fff
+ FMOVS 4096(R2)(R4), F15 // edf420000164
+ FMOVD F0, 4095(R1) // 60001fff
+ FMOVD F0, 4096(R1) // ed0010000167
+ FMOVS F13, 4095(R2)(R3) // 70d32fff
+ FMOVS F13, 4096(R2)(R3) // edd320000166
+
FADDS F0, F15 // b30a00f0
FADD F1, F14 // b31a00e1
FSUBS F2, F13 // b30b00d2
@@ -329,6 +356,9 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16-
TCEB F5, $8 // ed5000080010
TCDB F15, $4095 // edf00fff0011
+ UNDEF // 00000000
+ NOPH // 0700
+
VL (R15), V1 // e710f0000006
VST V1, (R15) // e710f000000e
VL (R15), V31 // e7f0f0000806
diff --git a/src/cmd/compile/internal/gc/bexport.go b/src/cmd/compile/internal/gc/bexport.go
index 7c09ab5a34..e67506f4e1 100644
--- a/src/cmd/compile/internal/gc/bexport.go
+++ b/src/cmd/compile/internal/gc/bexport.go
@@ -43,11 +43,14 @@ func (p *exporter) markType(t *types.Type) {
// the user already needs some way to construct values of
// those types.
switch t.Etype {
- case TPTR, TARRAY, TSLICE, TCHAN:
- // TODO(mdempsky): Skip marking element type for
- // send-only channels?
+ case TPTR, TARRAY, TSLICE:
p.markType(t.Elem())
+ case TCHAN:
+ if t.ChanDir().CanRecv() {
+ p.markType(t.Elem())
+ }
+
case TMAP:
p.markType(t.Key())
p.markType(t.Elem())
diff --git a/src/cmd/compile/internal/gc/bimport.go b/src/cmd/compile/internal/gc/bimport.go
index 7aabae764e..911ac4c0dc 100644
--- a/src/cmd/compile/internal/gc/bimport.go
+++ b/src/cmd/compile/internal/gc/bimport.go
@@ -5,7 +5,6 @@
package gc
import (
- "cmd/compile/internal/types"
"cmd/internal/src"
)
@@ -15,15 +14,6 @@ import (
// the same name appears in an error message.
var numImport = make(map[string]int)
-func idealType(typ *types.Type) *types.Type {
- switch typ {
- case types.Idealint, types.Idealrune, types.Idealfloat, types.Idealcomplex:
- // canonicalize ideal types
- typ = types.Types[TIDEAL]
- }
- return typ
-}
-
func npos(pos src.XPos, n *Node) *Node {
n.Pos = pos
return n
diff --git a/src/cmd/compile/internal/gc/closure.go b/src/cmd/compile/internal/gc/closure.go
index fb04924121..21b3461b47 100644
--- a/src/cmd/compile/internal/gc/closure.go
+++ b/src/cmd/compile/internal/gc/closure.go
@@ -73,6 +73,12 @@ func (p *noder) funcLit(expr *syntax.FuncLit) *Node {
func typecheckclosure(clo *Node, top int) {
xfunc := clo.Func.Closure
+	// Set the current associated iota value, so iota can be used
+	// inside a function in a ConstSpec. See issue #22344.
+ if x := getIotaValue(); x >= 0 {
+ xfunc.SetIota(x)
+ }
+
clo.Func.Ntype = typecheck(clo.Func.Ntype, ctxType)
clo.Type = clo.Func.Ntype.Type
clo.Func.Top = top
diff --git a/src/cmd/compile/internal/gc/const.go b/src/cmd/compile/internal/gc/const.go
index 47601ecf5f..e40c23b8ef 100644
--- a/src/cmd/compile/internal/gc/const.go
+++ b/src/cmd/compile/internal/gc/const.go
@@ -998,6 +998,11 @@ func setconst(n *Node, v Val) {
Xoffset: BADWIDTH,
}
n.SetVal(v)
+ if n.Type.IsUntyped() {
+ // TODO(mdempsky): Make typecheck responsible for setting
+ // the correct untyped type.
+ n.Type = idealType(v.Ctype())
+ }
// Check range.
lno := setlineno(n)
@@ -1030,24 +1035,29 @@ func setintconst(n *Node, v int64) {
func nodlit(v Val) *Node {
n := nod(OLITERAL, nil, nil)
n.SetVal(v)
- switch v.Ctype() {
- default:
- Fatalf("nodlit ctype %d", v.Ctype())
+ n.Type = idealType(v.Ctype())
+ return n
+}
+func idealType(ct Ctype) *types.Type {
+ switch ct {
case CTSTR:
- n.Type = types.Idealstring
-
+ return types.Idealstring
case CTBOOL:
- n.Type = types.Idealbool
-
- case CTINT, CTRUNE, CTFLT, CTCPLX:
- n.Type = types.Types[TIDEAL]
-
+ return types.Idealbool
+ case CTINT:
+ return types.Idealint
+ case CTRUNE:
+ return types.Idealrune
+ case CTFLT:
+ return types.Idealfloat
+ case CTCPLX:
+ return types.Idealcomplex
case CTNIL:
- n.Type = types.Types[TNIL]
+ return types.Types[TNIL]
}
-
- return n
+ Fatalf("unexpected Ctype: %v", ct)
+ return nil
}
// idealkind returns a constant kind like consttype
@@ -1294,7 +1304,7 @@ type constSetKey struct {
// equal value and identical type has already been added, then add
// reports an error about the duplicate value.
//
-// pos provides position information for where expression n occured
+// pos provides position information for where expression n occurred
// (in case n does not have its own position information). what and
// where are used in the error message.
//
diff --git a/src/cmd/compile/internal/gc/esc.go b/src/cmd/compile/internal/gc/esc.go
index 5ffc3c543a..c350d7c1bc 100644
--- a/src/cmd/compile/internal/gc/esc.go
+++ b/src/cmd/compile/internal/gc/esc.go
@@ -212,6 +212,8 @@ var tags [1 << (bitsPerOutputInTag + EscReturnBits)]string
// mktag returns the string representation for an escape analysis tag.
func mktag(mask int) string {
switch mask & EscMask {
+ case EscHeap:
+ return ""
case EscNone, EscReturn:
default:
Fatalf("escape mktag")
@@ -403,91 +405,93 @@ const unsafeUintptrTag = "unsafe-uintptr"
// marked go:uintptrescapes.
const uintptrEscapesTag = "uintptr-escapes"
-func esctag(fn *Node) {
- fn.Esc = EscFuncTagged
-
- name := func(s *types.Sym, narg int) string {
- if s != nil {
- return s.Name
+func (e *Escape) paramTag(fn *Node, narg int, f *types.Field) string {
+ name := func() string {
+ if f.Sym != nil {
+ return f.Sym.Name
}
return fmt.Sprintf("arg#%d", narg)
}
- // External functions are assumed unsafe,
- // unless //go:noescape is given before the declaration.
if fn.Nbody.Len() == 0 {
- if fn.Noescape() {
- for _, f := range fn.Type.Params().Fields().Slice() {
- if types.Haspointers(f.Type) {
- f.Note = mktag(EscNone)
- }
- }
- }
-
// Assume that uintptr arguments must be held live across the call.
// This is most important for syscall.Syscall.
// See golang.org/issue/13372.
// This really doesn't have much to do with escape analysis per se,
// but we are reusing the ability to annotate an individual function
// argument and pass those annotations along to importing code.
- narg := 0
- for _, f := range fn.Type.Params().Fields().Slice() {
- narg++
- if f.Type.Etype == TUINTPTR {
- if Debug['m'] != 0 {
- Warnl(fn.Pos, "%v assuming %v is unsafe uintptr", funcSym(fn), name(f.Sym, narg))
- }
- f.Note = unsafeUintptrTag
+ if f.Type.Etype == TUINTPTR {
+ if Debug['m'] != 0 {
+ Warnl(fn.Pos, "%v assuming %v is unsafe uintptr", funcSym(fn), name())
}
+ return unsafeUintptrTag
}
- return
- }
+ if !types.Haspointers(f.Type) { // don't bother tagging for scalars
+ return ""
+ }
- if fn.Func.Pragma&UintptrEscapes != 0 {
- narg := 0
- for _, f := range fn.Type.Params().Fields().Slice() {
- narg++
- if f.Type.Etype == TUINTPTR {
- if Debug['m'] != 0 {
- Warnl(fn.Pos, "%v marking %v as escaping uintptr", funcSym(fn), name(f.Sym, narg))
- }
- f.Note = uintptrEscapesTag
+ // External functions are assumed unsafe, unless
+ // //go:noescape is given before the declaration.
+ if fn.Noescape() {
+ if Debug['m'] != 0 && f.Sym != nil {
+ Warnl(fn.Pos, "%S %v does not escape", funcSym(fn), name())
}
+ return mktag(EscNone)
+ }
- if f.IsDDD() && f.Type.Elem().Etype == TUINTPTR {
- // final argument is ...uintptr.
- if Debug['m'] != 0 {
- Warnl(fn.Pos, "%v marking %v as escaping ...uintptr", funcSym(fn), name(f.Sym, narg))
- }
- f.Note = uintptrEscapesTag
- }
+ if Debug['m'] != 0 && f.Sym != nil {
+ Warnl(fn.Pos, "leaking param: %v", name())
}
+ return mktag(EscHeap)
}
- for _, fs := range types.RecvsParams {
- for _, f := range fs(fn.Type).Fields().Slice() {
- if !types.Haspointers(f.Type) { // don't bother tagging for scalars
- continue
+ if fn.Func.Pragma&UintptrEscapes != 0 {
+ if f.Type.Etype == TUINTPTR {
+ if Debug['m'] != 0 {
+ Warnl(fn.Pos, "%v marking %v as escaping uintptr", funcSym(fn), name())
}
- if f.Note == uintptrEscapesTag {
- // Note is already set in the loop above.
- continue
+ return uintptrEscapesTag
+ }
+ if f.IsDDD() && f.Type.Elem().Etype == TUINTPTR {
+ // final argument is ...uintptr.
+ if Debug['m'] != 0 {
+ Warnl(fn.Pos, "%v marking %v as escaping ...uintptr", funcSym(fn), name())
}
+ return uintptrEscapesTag
+ }
+ }
- // Unnamed parameters are unused and therefore do not escape.
- if f.Sym == nil || f.Sym.IsBlank() {
- f.Note = mktag(EscNone)
- continue
- }
+ if !types.Haspointers(f.Type) { // don't bother tagging for scalars
+ return ""
+ }
- switch esc := asNode(f.Nname).Esc; esc & EscMask {
- case EscNone, // not touched by escflood
- EscReturn:
- f.Note = mktag(int(esc))
+ // Unnamed parameters are unused and therefore do not escape.
+ if f.Sym == nil || f.Sym.IsBlank() {
+ return mktag(EscNone)
+ }
- case EscHeap: // touched by escflood, moved to heap
+ n := asNode(f.Nname)
+ loc := e.oldLoc(n)
+ esc := finalizeEsc(loc.paramEsc)
+
+ if Debug['m'] != 0 && !loc.escapes {
+ if esc == EscNone {
+ Warnl(n.Pos, "%S %S does not escape", funcSym(fn), n)
+ } else if esc == EscHeap {
+ Warnl(n.Pos, "leaking param: %S", n)
+ } else {
+ if esc&EscContentEscapes != 0 {
+ Warnl(n.Pos, "leaking param content: %S", n)
+ }
+ for i := 0; i < numEscReturns; i++ {
+ if x := getEscReturn(esc, i); x >= 0 {
+ res := n.Name.Curfn.Type.Results().Field(i).Sym
+ Warnl(n.Pos, "leaking param: %S to result %v level=%d", n, res, x)
+ }
}
}
}
+
+ return mktag(int(esc))
}
diff --git a/src/cmd/compile/internal/gc/escape.go b/src/cmd/compile/internal/gc/escape.go
index d2e096d0f0..ff958beef3 100644
--- a/src/cmd/compile/internal/gc/escape.go
+++ b/src/cmd/compile/internal/gc/escape.go
@@ -24,7 +24,7 @@ import (
// First, we construct a directed weighted graph where vertices
// (termed "locations") represent variables allocated by statements
// and expressions, and edges represent assignments between variables
-// (with weights reperesenting addressing/dereference counts).
+// (with weights representing addressing/dereference counts).
//
// Next we walk the graph looking for assignment paths that might
// violate the invariants stated above. If a variable v's address is
@@ -33,7 +33,7 @@ import (
//
// To support interprocedural analysis, we also record data-flow from
// each function's parameters to the heap and to its result
-// parameters. This information is summarized as "paremeter tags",
+// parameters. This information is summarized as "parameter tags",
// which are used at static call sites to improve escape analysis of
// function arguments.
@@ -44,7 +44,7 @@ import (
// "location."
//
// We also model every Go assignment as a directed edges between
-// locations. The number of derefence operations minus the number of
+// locations. The number of dereference operations minus the number of
// addressing operations is recorded as the edge's weight (termed
// "derefs"). For example:
//
@@ -147,12 +147,7 @@ func escapeFuncs(fns []*Node, recursive bool) {
e.curfn = nil
e.walkAll()
- e.finish()
-
- // Record parameter tags for package export data.
- for _, fn := range fns {
- esctag(fn)
- }
+ e.finish(fns)
}
func (e *Escape) initFunc(fn *Node) {
@@ -1258,7 +1253,20 @@ func (l *EscLocation) leakTo(sink *EscLocation, derefs int) {
}
}
-func (e *Escape) finish() {
+func (e *Escape) finish(fns []*Node) {
+ // Record parameter tags for package export data.
+ for _, fn := range fns {
+ fn.Esc = EscFuncTagged
+
+ narg := 0
+ for _, fs := range types.RecvsParams {
+ for _, f := range fs(fn.Type).Fields().Slice() {
+ narg++
+ f.Note = e.paramTag(fn, narg, f)
+ }
+ }
+ }
+
for _, loc := range e.allLocs {
n := loc.n
if n == nil {
@@ -1279,27 +1287,10 @@ func (e *Escape) finish() {
}
n.Esc = EscHeap
addrescapes(n)
- } else if loc.isName(PPARAM) {
- n.Esc = finalizeEsc(loc.paramEsc)
-
- if Debug['m'] != 0 && types.Haspointers(n.Type) {
- if n.Esc == EscNone {
- Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n)
- } else if n.Esc == EscHeap {
- Warnl(n.Pos, "leaking param: %S", n)
- } else {
- if n.Esc&EscContentEscapes != 0 {
- Warnl(n.Pos, "leaking param content: %S", n)
- }
- for i := 0; i < numEscReturns; i++ {
- if x := getEscReturn(n.Esc, i); x >= 0 {
- res := n.Name.Curfn.Type.Results().Field(i).Sym
- Warnl(n.Pos, "leaking param: %S to result %v level=%d", n, res, x)
- }
- }
- }
- }
} else {
+ if Debug['m'] != 0 && n.Op != ONAME && n.Op != OTYPESW && n.Op != ORANGE && n.Op != ODEFER {
+ Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n)
+ }
n.Esc = EscNone
if loc.transient {
switch n.Op {
@@ -1307,10 +1298,6 @@ func (e *Escape) finish() {
n.SetNoescape(true)
}
}
-
- if Debug['m'] != 0 && n.Op != ONAME && n.Op != OTYPESW && n.Op != ORANGE && n.Op != ODEFER {
- Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n)
- }
}
}
}
diff --git a/src/cmd/compile/internal/gc/fmt.go b/src/cmd/compile/internal/gc/fmt.go
index 782e4cb840..53d6b9d2cc 100644
--- a/src/cmd/compile/internal/gc/fmt.go
+++ b/src/cmd/compile/internal/gc/fmt.go
@@ -686,11 +686,21 @@ func typefmt(t *types.Type, flag FmtFlag, mode fmtMode, depth int) string {
}
if int(t.Etype) < len(basicnames) && basicnames[t.Etype] != "" {
- name := basicnames[t.Etype]
- if t == types.Idealbool || t == types.Idealstring {
- name = "untyped " + name
- }
- return name
+ switch t {
+ case types.Idealbool:
+ return "untyped bool"
+ case types.Idealstring:
+ return "untyped string"
+ case types.Idealint:
+ return "untyped int"
+ case types.Idealrune:
+ return "untyped rune"
+ case types.Idealfloat:
+ return "untyped float"
+ case types.Idealcomplex:
+ return "untyped complex"
+ }
+ return basicnames[t.Etype]
}
if mode == FDbg {
@@ -854,9 +864,6 @@ func typefmt(t *types.Type, flag FmtFlag, mode fmtMode, depth int) string {
case TUNSAFEPTR:
return "unsafe.Pointer"
- case TDDDFIELD:
- return mode.Sprintf("%v <%v> %v", t.Etype, t.Sym, t.DDDField())
-
case Txxx:
return "Txxx"
}
diff --git a/src/cmd/compile/internal/gc/iimport.go b/src/cmd/compile/internal/gc/iimport.go
index 1d4329b4b1..7d134f3a5f 100644
--- a/src/cmd/compile/internal/gc/iimport.go
+++ b/src/cmd/compile/internal/gc/iimport.go
@@ -374,8 +374,6 @@ func (p *importReader) value() (typ *types.Type, v Val) {
p.float(&x.Imag, typ)
v.U = x
}
-
- typ = idealType(typ)
return
}
diff --git a/src/cmd/compile/internal/gc/main.go b/src/cmd/compile/internal/gc/main.go
index 12ebfb871b..dff33ee530 100644
--- a/src/cmd/compile/internal/gc/main.go
+++ b/src/cmd/compile/internal/gc/main.go
@@ -570,7 +570,7 @@ func Main(archInit func(*Arch)) {
fcount++
}
}
- // With all types ckecked, it's now safe to verify map keys. One single
+ // With all types checked, it's now safe to verify map keys. One single
// check past phase 9 isn't sufficient, as we may exit with other errors
// before then, thus skipping map key errors.
checkMapKeys()
diff --git a/src/cmd/compile/internal/gc/plive.go b/src/cmd/compile/internal/gc/plive.go
index f436af8141..27c9674a7d 100644
--- a/src/cmd/compile/internal/gc/plive.go
+++ b/src/cmd/compile/internal/gc/plive.go
@@ -304,7 +304,7 @@ func (lv *Liveness) valueEffects(v *ssa.Value) (int32, liveEffect) {
var effect liveEffect
// Read is a read, obviously.
//
- // Addr is a read also, as any subseqent holder of the pointer must be able
+ // Addr is a read also, as any subsequent holder of the pointer must be able
// to see all the values (including initialization) written so far.
// This also prevents a variable from "coming back from the dead" and presenting
// stale pointers to the garbage collector. See issue 28445.
@@ -1453,12 +1453,25 @@ func liveness(e *ssafn, f *ssa.Func, pp *Progs) LivenessMap {
return lv.livenessMap
}
+// TODO(cuonglm,mdempsky): Revisit after #24416 is fixed.
func isfat(t *types.Type) bool {
if t != nil {
switch t.Etype {
- case TSTRUCT, TARRAY, TSLICE, TSTRING,
+ case TSLICE, TSTRING,
TINTER: // maybe remove later
return true
+ case TARRAY:
+			// An array of 1 element is only as fat as its element.
+ if t.NumElem() == 1 {
+ return isfat(t.Elem())
+ }
+ return true
+ case TSTRUCT:
+			// A struct with 1 field is only as fat as that field.
+ if t.NumFields() == 1 {
+ return isfat(t.Field(0).Type)
+ }
+ return true
}
}
diff --git a/src/cmd/compile/internal/gc/sinit.go b/src/cmd/compile/internal/gc/sinit.go
index a506bfe31f..ae8e79d854 100644
--- a/src/cmd/compile/internal/gc/sinit.go
+++ b/src/cmd/compile/internal/gc/sinit.go
@@ -495,7 +495,7 @@ func isStaticCompositeLiteral(n *Node) bool {
// literal components of composite literals.
// Dynamic initialization represents non-literals and
// non-literal components of composite literals.
-// LocalCode initializion represents initialization
+// LocalCode initialization represents initialization
// that occurs purely in generated code local to the function of use.
// Initialization code is sometimes generated in passes,
// first static then dynamic.
diff --git a/src/cmd/compile/internal/gc/ssa.go b/src/cmd/compile/internal/gc/ssa.go
index 2866625c83..01d7bb31d9 100644
--- a/src/cmd/compile/internal/gc/ssa.go
+++ b/src/cmd/compile/internal/gc/ssa.go
@@ -1645,7 +1645,7 @@ var fpConvOpToSSA32 = map[twoTypes]twoOpsAndType{
twoTypes{TFLOAT64, TUINT32}: twoOpsAndType{ssa.OpCvt64Fto32U, ssa.OpCopy, TUINT32},
}
-// uint64<->float conversions, only on machines that have intructions for that
+// uint64<->float conversions, only on machines that have instructions for that
var uint64fpConvOpToSSA = map[twoTypes]twoOpsAndType{
twoTypes{TUINT64, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64Uto32F, TUINT64},
twoTypes{TUINT64, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64Uto64F, TUINT64},
@@ -3600,8 +3600,8 @@ func init() {
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
return s.newValue2(ssa.OpMul64uhilo, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1])
},
- sys.AMD64, sys.ARM64, sys.PPC64)
- alias("math/bits", "Mul", "math/bits", "Mul64", sys.ArchAMD64, sys.ArchARM64, sys.ArchPPC64)
+ sys.AMD64, sys.ARM64, sys.PPC64, sys.S390X)
+ alias("math/bits", "Mul", "math/bits", "Mul64", sys.ArchAMD64, sys.ArchARM64, sys.ArchPPC64, sys.ArchS390X)
addF("math/bits", "Add64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
return s.newValue3(ssa.OpAdd64carry, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1], args[2])
diff --git a/src/cmd/compile/internal/gc/swt.go b/src/cmd/compile/internal/gc/swt.go
index 33bc71b862..5089efe08b 100644
--- a/src/cmd/compile/internal/gc/swt.go
+++ b/src/cmd/compile/internal/gc/swt.go
@@ -6,6 +6,7 @@ package gc
import (
"cmd/compile/internal/types"
+ "cmd/internal/src"
"sort"
)
@@ -56,164 +57,210 @@ type caseClauses struct {
// typecheckswitch typechecks a switch statement.
func typecheckswitch(n *Node) {
typecheckslice(n.Ninit.Slice(), ctxStmt)
-
- var nilonly string
- var top int
- var t *types.Type
-
if n.Left != nil && n.Left.Op == OTYPESW {
- // type switch
- top = ctxType
- n.Left.Right = typecheck(n.Left.Right, ctxExpr)
- t = n.Left.Right.Type
- if t != nil && !t.IsInterface() {
- yyerrorl(n.Pos, "cannot type switch on non-interface value %L", n.Left.Right)
- }
- if v := n.Left.Left; v != nil && !v.isBlank() && n.List.Len() == 0 {
- // We don't actually declare the type switch's guarded
- // declaration itself. So if there are no cases, we
- // won't notice that it went unused.
- yyerrorl(v.Pos, "%v declared and not used", v.Sym)
- }
+ typecheckTypeSwitch(n)
} else {
- // expression switch
- top = ctxExpr
- if n.Left != nil {
- n.Left = typecheck(n.Left, ctxExpr)
- n.Left = defaultlit(n.Left, nil)
- t = n.Left.Type
- } else {
- t = types.Types[TBOOL]
+ typecheckExprSwitch(n)
+ }
+}
+
+func typecheckTypeSwitch(n *Node) {
+ n.Left.Right = typecheck(n.Left.Right, ctxExpr)
+ t := n.Left.Right.Type
+ if t != nil && !t.IsInterface() {
+ yyerrorl(n.Pos, "cannot type switch on non-interface value %L", n.Left.Right)
+ t = nil
+ }
+ n.Type = t // TODO(mdempsky): Remove; statements aren't typed.
+
+ // We don't actually declare the type switch's guarded
+ // declaration itself. So if there are no cases, we won't
+ // notice that it went unused.
+ if v := n.Left.Left; v != nil && !v.isBlank() && n.List.Len() == 0 {
+ yyerrorl(v.Pos, "%v declared and not used", v.Sym)
+ }
+
+ var defCase, nilCase *Node
+ var ts typeSet
+ for _, ncase := range n.List.Slice() {
+ ls := ncase.List.Slice()
+ if len(ls) == 0 { // default:
+ if defCase != nil {
+ yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", defCase.Line())
+ } else {
+ defCase = ncase
+ }
}
- if t != nil {
+
+ for i := range ls {
+ ls[i] = typecheck(ls[i], ctxExpr|ctxType)
+ n1 := ls[i]
+ if t == nil || n1.Type == nil {
+ continue
+ }
+
+ var missing, have *types.Field
+ var ptr int
switch {
- case !okforeq[t.Etype]:
- yyerrorl(n.Pos, "cannot switch on %L", n.Left)
- case t.IsSlice():
- nilonly = "slice"
- case t.IsArray() && !IsComparable(t):
- yyerrorl(n.Pos, "cannot switch on %L", n.Left)
- case t.IsStruct():
- if f := IncomparableField(t); f != nil {
- yyerrorl(n.Pos, "cannot switch on %L (struct containing %v cannot be compared)", n.Left, f.Type)
+ case n1.isNil(): // case nil:
+ if nilCase != nil {
+ yyerrorl(ncase.Pos, "multiple nil cases in type switch (first at %v)", nilCase.Line())
+ } else {
+ nilCase = ncase
+ }
+ case n1.Op != OTYPE:
+ yyerrorl(ncase.Pos, "%L is not a type", n1)
+ case !n1.Type.IsInterface() && !implements(n1.Type, t, &missing, &have, &ptr) && !missing.Broke():
+ if have != nil && !have.Broke() {
+ yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
+ " (wrong type for %v method)\n\thave %v%S\n\twant %v%S", n.Left.Right, n1.Type, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
+ } else if ptr != 0 {
+ yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
+ " (%v method has pointer receiver)", n.Left.Right, n1.Type, missing.Sym)
+ } else {
+ yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
+ " (missing %v method)", n.Left.Right, n1.Type, missing.Sym)
+ }
+ }
+
+ if n1.Op == OTYPE {
+ ts.add(ncase.Pos, n1.Type)
+ }
+ }
+
+ if ncase.Rlist.Len() != 0 {
+ // Assign the clause variable's type.
+ vt := t
+ if len(ls) == 1 {
+ if ls[0].Op == OTYPE {
+ vt = ls[0].Type
+ } else if ls[0].Op != OLITERAL { // TODO(mdempsky): Should be !ls[0].isNil()
+ // Invalid single-type case;
+ // mark variable as broken.
+ vt = nil
}
- case t.Etype == TFUNC:
- nilonly = "func"
- case t.IsMap():
- nilonly = "map"
}
+
+ // TODO(mdempsky): It should be possible to
+ // still typecheck the case body.
+ if vt == nil {
+ continue
+ }
+
+ nvar := ncase.Rlist.First()
+ nvar.Type = vt
+ nvar = typecheck(nvar, ctxExpr|ctxAssign)
+ ncase.Rlist.SetFirst(nvar)
}
+
+ typecheckslice(ncase.Nbody.Slice(), ctxStmt)
}
+}
- n.Type = t
+type typeSet struct {
+ m map[string][]typeSetEntry
+}
- var def, niltype *Node
- for _, ncase := range n.List.Slice() {
- if ncase.List.Len() == 0 {
- // default
- if def != nil {
- setlineno(ncase)
- yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", def.Line())
+type typeSetEntry struct {
+ pos src.XPos
+ typ *types.Type
+}
+
+func (s *typeSet) add(pos src.XPos, typ *types.Type) {
+ if s.m == nil {
+ s.m = make(map[string][]typeSetEntry)
+ }
+
+ // LongString does not uniquely identify types, so we need to
+ // disambiguate collisions with types.Identical.
+ // TODO(mdempsky): Add a method that *is* unique.
+ ls := typ.LongString()
+ prevs := s.m[ls]
+ for _, prev := range prevs {
+ if types.Identical(typ, prev.typ) {
+ yyerrorl(pos, "duplicate case %v in type switch\n\tprevious case at %s", typ, linestr(prev.pos))
+ return
+ }
+ }
+ s.m[ls] = append(prevs, typeSetEntry{pos, typ})
+}
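+
The bucket-then-verify pattern used by `typeSet.add` generalizes: when the map key does not uniquely identify entries, each bucket is scanned with a precise equality check before reporting a duplicate. A minimal hypothetical sketch (names and types invented for illustration; `prev.id == e.id` stands in for `types.Identical`):

```go
package main

import "fmt"

type entry struct{ id, desc string }

// set deduplicates entries keyed by a possibly ambiguous string,
// resolving key collisions with a precise per-entry check.
type set struct{ m map[string][]entry }

// add reports whether e duplicates an earlier entry with the same id.
func (s *set) add(key string, e entry) bool {
	if s.m == nil {
		s.m = make(map[string][]entry)
	}
	for _, prev := range s.m[key] {
		if prev.id == e.id { // precise check; keys alone may collide
			return true
		}
	}
	s.m[key] = append(s.m[key], e)
	return false
}

func main() {
	var s set
	fmt.Println(s.add("T", entry{"pkg1.T", "first"}))     // false
	fmt.Println(s.add("T", entry{"pkg2.T", "collision"})) // false: same key, different id
	fmt.Println(s.add("T", entry{"pkg1.T", "dup"}))       // true
}
```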
+
+func typecheckExprSwitch(n *Node) {
+ t := types.Types[TBOOL]
+ if n.Left != nil {
+ n.Left = typecheck(n.Left, ctxExpr)
+ n.Left = defaultlit(n.Left, nil)
+ t = n.Left.Type
+ }
+
+ var nilonly string
+ if t != nil {
+ switch {
+ case t.IsMap():
+ nilonly = "map"
+ case t.Etype == TFUNC:
+ nilonly = "func"
+ case t.IsSlice():
+ nilonly = "slice"
+
+ case !IsComparable(t):
+ if t.IsStruct() {
+ yyerrorl(n.Pos, "cannot switch on %L (struct containing %v cannot be compared)", n.Left, IncomparableField(t).Type)
} else {
- def = ncase
+ yyerrorl(n.Pos, "cannot switch on %L", n.Left)
}
- } else {
- ls := ncase.List.Slice()
- for i1, n1 := range ls {
- setlineno(n1)
- ls[i1] = typecheck(ls[i1], ctxExpr|ctxType)
- n1 = ls[i1]
- if n1.Type == nil || t == nil {
- continue
- }
-
- setlineno(ncase)
- switch top {
- // expression switch
- case ctxExpr:
- ls[i1] = defaultlit(ls[i1], t)
- n1 = ls[i1]
- switch {
- case n1.Op == OTYPE:
- yyerrorl(ncase.Pos, "type %v is not an expression", n1.Type)
- case n1.Type != nil && assignop(n1.Type, t, nil) == 0 && assignop(t, n1.Type, nil) == 0:
- if n.Left != nil {
- yyerrorl(ncase.Pos, "invalid case %v in switch on %v (mismatched types %v and %v)", n1, n.Left, n1.Type, t)
- } else {
- yyerrorl(ncase.Pos, "invalid case %v in switch (mismatched types %v and bool)", n1, n1.Type)
- }
- case nilonly != "" && !n1.isNil():
- yyerrorl(ncase.Pos, "invalid case %v in switch (can only compare %s %v to nil)", n1, nilonly, n.Left)
- case t.IsInterface() && !n1.Type.IsInterface() && !IsComparable(n1.Type):
- yyerrorl(ncase.Pos, "invalid case %L in switch (incomparable type)", n1)
- }
+ t = nil
+ }
+ }
+ n.Type = t // TODO(mdempsky): Remove; statements aren't typed.
- // type switch
- case ctxType:
- var missing, have *types.Field
- var ptr int
- switch {
- case n1.Op == OLITERAL && n1.Type.IsKind(TNIL):
- // case nil:
- if niltype != nil {
- yyerrorl(ncase.Pos, "multiple nil cases in type switch (first at %v)", niltype.Line())
- } else {
- niltype = ncase
- }
- case n1.Op != OTYPE && n1.Type != nil: // should this be ||?
- yyerrorl(ncase.Pos, "%L is not a type", n1)
- // reset to original type
- n1 = n.Left.Right
- ls[i1] = n1
- case !n1.Type.IsInterface() && t.IsInterface() && !implements(n1.Type, t, &missing, &have, &ptr):
- if have != nil && !missing.Broke() && !have.Broke() {
- yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
- " (wrong type for %v method)\n\thave %v%S\n\twant %v%S", n.Left.Right, n1.Type, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
- } else if !missing.Broke() {
- if ptr != 0 {
- yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
- " (%v method has pointer receiver)", n.Left.Right, n1.Type, missing.Sym)
- } else {
- yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+
- " (missing %v method)", n.Left.Right, n1.Type, missing.Sym)
- }
- }
- }
- }
+ var defCase *Node
+ var cs constSet
+ for _, ncase := range n.List.Slice() {
+ ls := ncase.List.Slice()
+ if len(ls) == 0 { // default:
+ if defCase != nil {
+ yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", defCase.Line())
+ } else {
+ defCase = ncase
}
}
- if top == ctxType {
- ll := ncase.List
- if ncase.Rlist.Len() != 0 {
- nvar := ncase.Rlist.First()
- if ll.Len() == 1 && (ll.First().Type == nil || !ll.First().Type.IsKind(TNIL)) {
- // single entry type switch
- nvar.Type = ll.First().Type
- } else {
- // multiple entry type switch or default
- nvar.Type = n.Type
- }
+ for i := range ls {
+ setlineno(ncase)
+ ls[i] = typecheck(ls[i], ctxExpr)
+ ls[i] = defaultlit(ls[i], t)
+ n1 := ls[i]
+ if t == nil || n1.Type == nil {
+ continue
+ }
- if nvar.Type == nil || nvar.Type.IsUntyped() {
- // if the value we're switching on has no type or is untyped,
- // we've already printed an error and don't need to continue
- // typechecking the body
- continue
+ switch {
+ case nilonly != "" && !n1.isNil():
+ yyerrorl(ncase.Pos, "invalid case %v in switch (can only compare %s %v to nil)", n1, nilonly, n.Left)
+ case t.IsInterface() && !n1.Type.IsInterface() && !IsComparable(n1.Type):
+ yyerrorl(ncase.Pos, "invalid case %L in switch (incomparable type)", n1)
+ case assignop(n1.Type, t, nil) == 0 && assignop(t, n1.Type, nil) == 0:
+ if n.Left != nil {
+ yyerrorl(ncase.Pos, "invalid case %v in switch on %v (mismatched types %v and %v)", n1, n.Left, n1.Type, t)
+ } else {
+ yyerrorl(ncase.Pos, "invalid case %v in switch (mismatched types %v and bool)", n1, n1.Type)
}
+ }
- nvar = typecheck(nvar, ctxExpr|ctxAssign)
- ncase.Rlist.SetFirst(nvar)
+ // Don't check for duplicate bools. Although the spec allows it,
+ // (1) the compiler hasn't checked it in the past, so compatibility mandates it, and
+ // (2) it would disallow useful things like
+ // case GOARCH == "arm" && GOARM == "5":
+ // case GOARCH == "arm":
+ // which would both evaluate to false for non-ARM compiles.
+ if !n1.Type.IsBoolean() {
+ cs.add(ncase.Pos, n1, "case", "switch")
}
}
typecheckslice(ncase.Nbody.Slice(), ctxStmt)
}
- switch top {
- // expression switch
- case ctxExpr:
- checkDupExprCases(n.Left, n.List.Slice())
- }
}
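
The boolean exemption in the loop above matters in practice. In the sketch below (a hypothetical standalone example with the build constants faked as local consts), both case expressions are constant `false` on a non-ARM configuration, so strict duplicate checking would reject a perfectly useful switch:

```go
package main

import "fmt"

// Stand-ins for runtime.GOARCH / GOARM, fixed here for illustration.
const (
	GOARCH = "amd64"
	GOARM  = "7"
)

// describe mirrors the pattern cited in the comment: two boolean cases
// that both evaluate to false when not compiling for ARM.
func describe() string {
	switch {
	case GOARCH == "arm" && GOARM == "5":
		return "arm5"
	case GOARCH == "arm":
		return "arm-other"
	default:
		return "not-arm"
	}
}

func main() {
	fmt.Println(describe())
}
```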
// walkswitch walks a switch statement.
@@ -586,65 +633,9 @@ func (s *typeSwitch) genCaseClauses(clauses []*Node) caseClauses {
cc.defjmp = nod(OBREAK, nil, nil)
}
- // diagnose duplicate cases
- s.checkDupCases(cc.list)
return cc
}
-func (s *typeSwitch) checkDupCases(cc []caseClause) {
- if len(cc) < 2 {
- return
- }
- // We store seen types in a map keyed by type hash.
- // It is possible, but very unlikely, for multiple distinct types to have the same hash.
- seen := make(map[uint32][]*Node)
- // To avoid many small allocations of length 1 slices,
- // also set up a single large slice to slice into.
- nn := make([]*Node, 0, len(cc))
-Outer:
- for _, c := range cc {
- prev, ok := seen[c.hash]
- if !ok {
- // First entry for this hash.
- nn = append(nn, c.node)
- seen[c.hash] = nn[len(nn)-1 : len(nn) : len(nn)]
- continue
- }
- for _, n := range prev {
- if types.Identical(n.Left.Type, c.node.Left.Type) {
- yyerrorl(c.node.Pos, "duplicate case %v in type switch\n\tprevious case at %v", c.node.Left.Type, n.Line())
- // avoid double-reporting errors
- continue Outer
- }
- }
- seen[c.hash] = append(seen[c.hash], c.node)
- }
-}
-
-func checkDupExprCases(exprname *Node, clauses []*Node) {
- // boolean (naked) switch, nothing to do.
- if exprname == nil {
- return
- }
-
- var cs constSet
- for _, ncase := range clauses {
- for _, n := range ncase.List.Slice() {
- // Don't check for duplicate bools. Although the spec allows it,
- // (1) the compiler hasn't checked it in the past, so compatibility mandates it, and
- // (2) it would disallow useful things like
- // case GOARCH == "arm" && GOARM == "5":
- // case GOARCH == "arm":
- // which would both evaluate to false for non-ARM compiles.
- if n.Type.IsBoolean() {
- continue
- }
-
- cs.add(ncase.Pos, n, "case", "switch")
- }
- }
-}
-
// walk generates an AST that implements sw,
// where sw is a type switch.
// The AST is generally of the form of a linear
diff --git a/src/cmd/compile/internal/gc/syntax.go b/src/cmd/compile/internal/gc/syntax.go
index dec72690bf..e8a527a8fc 100644
--- a/src/cmd/compile/internal/gc/syntax.go
+++ b/src/cmd/compile/internal/gc/syntax.go
@@ -48,6 +48,7 @@ type Node struct {
// - OSTRUCTKEY uses it to store the named field's offset.
// - Named OLITERALs use it to store their ambient iota value.
// - OINLMARK stores an index into the inlTree data structure.
+	// - OCLOSURE uses it to store the ambient iota value, if any.
// Possibly still more uses. If you find any, document them.
Xoffset int64
@@ -691,7 +692,7 @@ const (
ORECV // <-Left
ORUNESTR // Type(Left) (Type is string, Left is rune)
OSELRECV // Left = <-Right.Left: (appears as .Left of OCASE; Right.Op == ORECV)
- OSELRECV2 // List = <-Right.Left: (apperas as .Left of OCASE; count(List) == 2, Right.Op == ORECV)
+ OSELRECV2 // List = <-Right.Left: (appears as .Left of OCASE; count(List) == 2, Right.Op == ORECV)
OIOTA // iota
OREAL // real(Left)
OIMAG // imag(Left)
diff --git a/src/cmd/compile/internal/gc/typecheck.go b/src/cmd/compile/internal/gc/typecheck.go
index e4d1cedd74..e725c6f363 100644
--- a/src/cmd/compile/internal/gc/typecheck.go
+++ b/src/cmd/compile/internal/gc/typecheck.go
@@ -100,10 +100,8 @@ func resolve(n *Node) (res *Node) {
}
if r.Op == OIOTA {
- if i := len(typecheckdefstack); i > 0 {
- if x := typecheckdefstack[i-1]; x.Op == OLITERAL {
- return nodintconst(x.Iota())
- }
+ if x := getIotaValue(); x >= 0 {
+ return nodintconst(x)
}
return n
}
@@ -1407,20 +1405,18 @@ func typecheck1(n *Node, top int) (res *Node) {
}
// Determine result type.
- et := t.Etype
- switch et {
+ switch t.Etype {
case TIDEAL:
- // result is ideal
+ n.Type = types.Idealfloat
case TCOMPLEX64:
- et = TFLOAT32
+ n.Type = types.Types[TFLOAT32]
case TCOMPLEX128:
- et = TFLOAT64
+ n.Type = types.Types[TFLOAT64]
default:
yyerror("invalid argument %L for %v", l, n.Op)
n.Type = nil
return n
}
- n.Type = types.Types[et]
case OCOMPLEX:
ok |= ctxExpr
@@ -1457,7 +1453,7 @@ func typecheck1(n *Node, top int) (res *Node) {
return n
case TIDEAL:
- t = types.Types[TIDEAL]
+ t = types.Idealcomplex
case TFLOAT32:
t = types.Types[TCOMPLEX64]
@@ -2683,20 +2679,20 @@ func errorDetails(nl Nodes, tstruct *types.Type, isddd bool) string {
// e.g in error messages about wrong arguments to return.
func sigrepr(t *types.Type) string {
switch t {
- default:
- return t.String()
-
- case types.Types[TIDEAL]:
- // "untyped number" is not commonly used
- // outside of the compiler, so let's use "number".
- return "number"
-
case types.Idealstring:
return "string"
-
case types.Idealbool:
return "bool"
}
+
+ if t.Etype == TIDEAL {
+ // "untyped number" is not commonly used
+ // outside of the compiler, so let's use "number".
+ // TODO(mdempsky): Revisit this.
+ return "number"
+ }
+
+ return t.String()
}
// retsigerr returns the signature of the types
@@ -3937,3 +3933,19 @@ func setTypeNode(n *Node, t *types.Type) {
n.Type = t
n.Type.Nod = asTypesNode(n)
}
+
+// getIotaValue returns the current value for "iota",
+// or -1 if not within a ConstSpec.
+func getIotaValue() int64 {
+ if i := len(typecheckdefstack); i > 0 {
+ if x := typecheckdefstack[i-1]; x.Op == OLITERAL {
+ return x.Iota()
+ }
+ }
+
+ if Curfn != nil && Curfn.Iota() >= 0 {
+ return Curfn.Iota()
+ }
+
+ return -1
+}
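+
`getIotaValue` resolves `iota` to the index of the current ConstSpec line (or -1 outside one). In source terms that is just the standard behavior, shown here as a trivial refresher:

```go
package main

import "fmt"

// iota restarts at 0 in each const block and increments per ConstSpec line.
const (
	a = iota // 0
	b        // 1
	c        // 2
)

func main() {
	fmt.Println(a, b, c) // 0 1 2
}
```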
diff --git a/src/cmd/compile/internal/gc/types.go b/src/cmd/compile/internal/gc/types.go
index ce82c3a52e..748f8458bd 100644
--- a/src/cmd/compile/internal/gc/types.go
+++ b/src/cmd/compile/internal/gc/types.go
@@ -54,8 +54,5 @@ const (
TFUNCARGS = types.TFUNCARGS
TCHANARGS = types.TCHANARGS
- // pseudo-types for import/export
- TDDDFIELD = types.TDDDFIELD // wrapper: contained type is a ... field
-
NTYPE = types.NTYPE
)
diff --git a/src/cmd/compile/internal/gc/universe.go b/src/cmd/compile/internal/gc/universe.go
index b8260d6525..2077c5639e 100644
--- a/src/cmd/compile/internal/gc/universe.go
+++ b/src/cmd/compile/internal/gc/universe.go
@@ -334,14 +334,7 @@ func typeinit() {
maxfltval[TCOMPLEX128] = maxfltval[TFLOAT64]
minfltval[TCOMPLEX128] = minfltval[TFLOAT64]
- // for walk to use in error messages
- types.Types[TFUNC] = functype(nil, nil, nil)
-
- // types used in front end
- // types.Types[TNIL] got set early in lexinit
- types.Types[TIDEAL] = types.New(TIDEAL)
-
- types.Types[TINTER] = types.New(TINTER)
+ types.Types[TINTER] = types.New(TINTER) // empty interface
// simple aliases
simtype[TMAP] = TPTR
diff --git a/src/cmd/compile/internal/gc/walk.go b/src/cmd/compile/internal/gc/walk.go
index 0062dddc6c..cb49e0f7ce 100644
--- a/src/cmd/compile/internal/gc/walk.go
+++ b/src/cmd/compile/internal/gc/walk.go
@@ -1265,7 +1265,7 @@ opswitch:
}
// Map initialization with a variable or large hint is
// more complicated. We therefore generate a call to
- // runtime.makemap to intialize hmap and allocate the
+ // runtime.makemap to initialize hmap and allocate the
// map buckets.
// When hint fits into int, use makemap instead of
diff --git a/src/cmd/compile/internal/s390x/ggen.go b/src/cmd/compile/internal/s390x/ggen.go
index 6a72b27ac5..ae9965c378 100644
--- a/src/cmd/compile/internal/s390x/ggen.go
+++ b/src/cmd/compile/internal/s390x/ggen.go
@@ -105,8 +105,5 @@ func zeroAuto(pp *gc.Progs, n *gc.Node) {
}
func ginsnop(pp *gc.Progs) *obj.Prog {
- p := pp.Prog(s390x.AWORD)
- p.From.Type = obj.TYPE_CONST
- p.From.Offset = 0x47000000 // nop 0
- return p
+ return pp.Prog(s390x.ANOPH)
}
diff --git a/src/cmd/compile/internal/s390x/ssa.go b/src/cmd/compile/internal/s390x/ssa.go
index 7ddebe7b64..5acb391dcd 100644
--- a/src/cmd/compile/internal/s390x/ssa.go
+++ b/src/cmd/compile/internal/s390x/ssa.go
@@ -225,6 +225,19 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
v.Fatalf("input[0] and output not in same register %s", v.LongString())
}
opregreg(s, v.Op.Asm(), r, v.Args[1].Reg())
+ case ssa.OpS390XMLGR:
+ // MLGR Rx R3 -> R2:R3
+ r0 := v.Args[0].Reg()
+ r1 := v.Args[1].Reg()
+ if r1 != s390x.REG_R3 {
+			v.Fatalf("We require the multiplicand to be stored in R3 for MLGR %s", v.LongString())
+ }
+ p := s.Prog(s390x.AMLGR)
+ p.From.Type = obj.TYPE_REG
+ p.From.Reg = r0
+ p.To.Reg = s390x.REG_R2
+ p.To.Type = obj.TYPE_REG
+
case ssa.OpS390XFMADD, ssa.OpS390XFMADDS,
ssa.OpS390XFMSUB, ssa.OpS390XFMSUBS:
r := v.Reg()
@@ -482,7 +495,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
p.To.Type = obj.TYPE_MEM
p.To.Reg = v.Args[0].Reg()
gc.AddAux2(&p.To, v, sc.Off())
- case ssa.OpCopy, ssa.OpS390XMOVDreg:
+ case ssa.OpCopy:
if v.Type.IsMemory() {
return
}
@@ -491,11 +504,6 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
if x != y {
opregreg(s, moveByType(v.Type), y, x)
}
- case ssa.OpS390XMOVDnop:
- if v.Reg() != v.Args[0].Reg() {
- v.Fatalf("input[0] and output not in same register %s", v.LongString())
- }
- // nothing to do
case ssa.OpLoadReg:
if v.Type.IsFlags() {
v.Fatalf("load flags not implemented: %v", v.LongString())
diff --git a/src/cmd/compile/internal/ssa/branchelim.go b/src/cmd/compile/internal/ssa/branchelim.go
index 71c947d0d5..fda0cbb9b3 100644
--- a/src/cmd/compile/internal/ssa/branchelim.go
+++ b/src/cmd/compile/internal/ssa/branchelim.go
@@ -319,7 +319,7 @@ func shouldElimIfElse(no, yes, post *Block, arch string) bool {
cost := phi * 1
if phi > 1 {
// If we have more than 1 phi and some values in post have args
- // in yes or no blocks, we may have to recalucalte condition, because
+ // in yes or no blocks, we may have to recalculate condition, because
// those args may clobber flags. For now assume that all operations clobber flags.
cost += other * 1
}
diff --git a/src/cmd/compile/internal/ssa/deadstore.go b/src/cmd/compile/internal/ssa/deadstore.go
index 69616b3a88..ebcb571e66 100644
--- a/src/cmd/compile/internal/ssa/deadstore.go
+++ b/src/cmd/compile/internal/ssa/deadstore.go
@@ -180,7 +180,7 @@ func elimDeadAutosGeneric(f *Func) {
}
return
case OpStore, OpMove, OpZero:
- // v should be elimated if we eliminate the auto.
+ // v should be eliminated if we eliminate the auto.
n, ok := addr[args[0]]
if ok && elim[v] == nil {
elim[v] = n
diff --git a/src/cmd/compile/internal/ssa/debug_test.go b/src/cmd/compile/internal/ssa/debug_test.go
index 73a0afb82c..28bb88a0c3 100644
--- a/src/cmd/compile/internal/ssa/debug_test.go
+++ b/src/cmd/compile/internal/ssa/debug_test.go
@@ -959,7 +959,7 @@ func replaceEnv(env []string, ev string, evv string) []string {
}
// asCommandLine renders cmd as something that could be copy-and-pasted into a command line
-// If cwd is not empty and different from the command's directory, prepend an approprirate "cd"
+// If cwd is not empty and different from the command's directory, prepend an appropriate "cd"
func asCommandLine(cwd string, cmd *exec.Cmd) string {
s := "("
if cmd.Dir != "" && cmd.Dir != cwd {
diff --git a/src/cmd/compile/internal/ssa/gen/PPC64Ops.go b/src/cmd/compile/internal/ssa/gen/PPC64Ops.go
index 65a183dba6..af72774765 100644
--- a/src/cmd/compile/internal/ssa/gen/PPC64Ops.go
+++ b/src/cmd/compile/internal/ssa/gen/PPC64Ops.go
@@ -222,7 +222,7 @@ func init() {
{name: "POPCNTD", argLength: 1, reg: gp11, asm: "POPCNTD"}, // number of set bits in arg0
{name: "POPCNTW", argLength: 1, reg: gp11, asm: "POPCNTW"}, // number of set bits in each word of arg0 placed in corresponding word
- {name: "POPCNTB", argLength: 1, reg: gp11, asm: "POPCNTB"}, // number of set bits in each byte of arg0 placed in corresonding byte
+ {name: "POPCNTB", argLength: 1, reg: gp11, asm: "POPCNTB"}, // number of set bits in each byte of arg0 placed in corresponding byte
{name: "FDIV", argLength: 2, reg: fp21, asm: "FDIV"}, // arg0/arg1
{name: "FDIVS", argLength: 2, reg: fp21, asm: "FDIVS"}, // arg0/arg1
diff --git a/src/cmd/compile/internal/ssa/gen/S390X.rules b/src/cmd/compile/internal/ssa/gen/S390X.rules
index ee670a908d..98bf875f80 100644
--- a/src/cmd/compile/internal/ssa/gen/S390X.rules
+++ b/src/cmd/compile/internal/ssa/gen/S390X.rules
@@ -17,6 +17,7 @@
(Mul(32|16|8) x y) -> (MULLW x y)
(Mul32F x y) -> (FMULS x y)
(Mul64F x y) -> (FMUL x y)
+(Mul64uhilo x y) -> (MLGR x y)
(Div32F x y) -> (FDIVS x y)
(Div64F x y) -> (FDIV x y)
@@ -443,61 +444,121 @@
// ***************************
// TODO: Should the optimizations be a separate pass?
-// Fold unnecessary type conversions.
-(MOVDreg x) && t.Compare(x.Type) == types.CMPeq -> x
-(MOVDnop x) && t.Compare(x.Type) == types.CMPeq -> x
-
-// Propagate constants through type conversions.
-(MOVDreg (MOVDconst [c])) -> (MOVDconst [c])
-(MOVDnop (MOVDconst [c])) -> (MOVDconst [c])
-
-// If a register move has only 1 use, just use the same register without emitting instruction.
-// MOVDnop doesn't emit instruction, only for ensuring the type.
-(MOVDreg x) && x.Uses == 1 -> (MOVDnop x)
-
-// Fold type changes into loads.
-(MOVDreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem)
-(MOVDreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem)
-(MOVDreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem)
-(MOVDreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem)
-(MOVDreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem)
-(MOVDreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem)
-(MOVDreg x:(MOVDload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDload [off] {sym} ptr mem)
-
-(MOVDnop x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem)
-(MOVDnop x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem)
-(MOVDnop x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem)
-(MOVDnop x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem)
-(MOVDnop x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem)
-(MOVDnop x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem)
-(MOVDnop x:(MOVDload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDload [off] {sym} ptr mem)
-
-(MOVDreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
-(MOVDreg x:(MOVDloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDloadidx [off] {sym} ptr idx mem)
-
-(MOVDnop x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
-(MOVDnop x:(MOVDloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDloadidx [off] {sym} ptr idx mem)
-
-// Fold sign extensions into conditional moves of constants.
-// Designed to remove the MOVBZreg inserted by the If lowering.
-(MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
-(MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x)
+// Notes on removing unnecessary sign/zero extensions:
+//
+// After a value is spilled it is restored using a sign- or zero-extension
+// to register-width as appropriate for its type. For example, a uint8 will
+// be restored using a MOVBZ (llgc) instruction which will zero-extend the
+// 8-bit value to 64 bits.
+//
+// This is a hazard when folding sign- and zero-extensions since we need to
+// ensure not only that the value in the argument register is correctly
+// extended but also that it will still be correctly extended if it is
+// spilled and restored.
+//
+// In general this means we need type checks when the RHS of a rule is an
+// OpCopy (i.e. "(... x:(...) ...) -> x").
+
+// Merge double extensions.
+(MOV(H|HZ)reg e:(MOV(B|BZ)reg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(W|WZ)reg e:(MOV(B|BZ)reg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(W|WZ)reg e:(MOV(H|HZ)reg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x)
+
+// Bypass redundant sign extensions.
+(MOV(B|BZ)reg e:(MOVBreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(B|BZ)reg e:(MOVHreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(B|BZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(H|HZ)reg e:(MOVHreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x)
+(MOV(H|HZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x)
+(MOV(W|WZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(W|WZ)reg x)
+
+// Bypass redundant zero extensions.
+(MOV(B|BZ)reg e:(MOVBZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(B|BZ)reg e:(MOVHZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(B|BZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x)
+(MOV(H|HZ)reg e:(MOVHZreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x)
+(MOV(H|HZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x)
+(MOV(W|WZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(W|WZ)reg x)
+
+// Remove zero extensions after zero extending load.
+// Note: take care that if x is spilled it is restored correctly.
+(MOV(B|H|W)Zreg x:(MOVBZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x
+(MOV(B|H|W)Zreg x:(MOVBZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x
+(MOV(H|W)Zreg x:(MOVHZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x
+(MOV(H|W)Zreg x:(MOVHZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x
+(MOVWZreg x:(MOVWZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 4) -> x
+(MOVWZreg x:(MOVWZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 4) -> x
+
+// Remove sign extensions after sign extending load.
+// Note: take care that if x is spilled it is restored correctly.
+(MOV(B|H|W)reg x:(MOVBload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+(MOV(B|H|W)reg x:(MOVBloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+(MOV(H|W)reg x:(MOVHload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+(MOV(H|W)reg x:(MOVHloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+(MOVWreg x:(MOVWload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+(MOVWreg x:(MOVWloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x
+
+// Remove sign extensions after zero extending load.
+// These type checks are probably unnecessary but do them anyway just in case.
+(MOV(H|W)reg x:(MOVBZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x
+(MOV(H|W)reg x:(MOVBZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x
+(MOVWreg x:(MOVHZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x
+(MOVWreg x:(MOVHZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x
+
+// Fold sign and zero extensions into loads.
+//
+// Note: The combined instruction must end up in the same block
+// as the original load. If not, we end up making a value with
+// memory type live in two different blocks, which can lead to
+// multiple memory values alive simultaneously.
+//
+// Make sure we don't combine these ops if the load has another use.
+// This prevents a single load from being split into multiple loads
+// which then might return different values. See test/atomicload.go.
+(MOV(B|H|W)Zreg x:(MOV(B|H|W)load [o] {s} p mem))
+ && x.Uses == 1
+ && clobber(x)
+ -> @x.Block (MOV(B|H|W)Zload [o] {s} p mem)
+(MOV(B|H|W)reg x:(MOV(B|H|W)Zload [o] {s} p mem))
+ && x.Uses == 1
+ && clobber(x)
+ -> @x.Block (MOV(B|H|W)load [o] {s} p mem)
+(MOV(B|H|W)Zreg x:(MOV(B|H|W)loadidx [o] {s} p i mem))
+ && x.Uses == 1
+ && clobber(x)
+ -> @x.Block (MOV(B|H|W)Zloadidx [o] {s} p i mem)
+(MOV(B|H|W)reg x:(MOV(B|H|W)Zloadidx [o] {s} p i mem))
+ && x.Uses == 1
+ && clobber(x)
+ -> @x.Block (MOV(B|H|W)loadidx [o] {s} p i mem)
+
+// Remove zero extensions after argument load.
+(MOVBZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() == 1 -> x
+(MOVHZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() <= 2 -> x
+(MOVWZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() <= 4 -> x
+
+// Remove sign extensions after argument load.
+(MOVBreg x:(Arg <t>)) && t.IsSigned() && t.Size() == 1 -> x
+(MOVHreg x:(Arg <t>)) && t.IsSigned() && t.Size() <= 2 -> x
+(MOVWreg x:(Arg <t>)) && t.IsSigned() && t.Size() <= 4 -> x
+
+// Fold zero extensions into constants.
+(MOVBZreg (MOVDconst [c])) -> (MOVDconst [int64( uint8(c))])
+(MOVHZreg (MOVDconst [c])) -> (MOVDconst [int64(uint16(c))])
+(MOVWZreg (MOVDconst [c])) -> (MOVDconst [int64(uint32(c))])
+
+// Fold sign extensions into constants.
+(MOVBreg (MOVDconst [c])) -> (MOVDconst [int64( int8(c))])
+(MOVHreg (MOVDconst [c])) -> (MOVDconst [int64(int16(c))])
+(MOVWreg (MOVDconst [c])) -> (MOVDconst [int64(int32(c))])
+
+// Remove zero extension of conditional move.
+// Note: only for MOVBZreg for now since it is added as part of 'if' statement lowering.
+(MOVBZreg x:(MOVD(LT|LE|GT|GE|EQ|NE|GTnoinv|GEnoinv) (MOVDconst [c]) (MOVDconst [d]) _))
+ && int64(uint8(c)) == c
+ && int64(uint8(d)) == d
+ && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ -> x
// Fold boolean tests into blocks.
(NE (CMPWconst [0] (MOVDLT (MOVDconst [0]) (MOVDconst [1]) cmp)) yes no) -> (LT cmp yes no)
@@ -547,12 +608,12 @@
-> (S(LD|RD|RAD|LW|RW|RAW) x (ANDWconst [c&63] y))
(S(LD|RD|RAD|LW|RW|RAW) x (ANDWconst [c] y)) && c&63 == 63
-> (S(LD|RD|RAD|LW|RW|RAW) x y)
-(SLD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SLD x y)
-(SRD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRD x y)
-(SRAD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRAD x y)
-(SLW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SLW x y)
-(SRW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRW x y)
-(SRAW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRAW x y)
+(SLD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SLD x y)
+(SRD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRD x y)
+(SRAD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRAD x y)
+(SLW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SLW x y)
+(SRW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRW x y)
+(SRAW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRAW x y)
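The `ANDWconst [c] y` rules above rely on the hardware masking the shift amount: S390X 64-bit shifts use only the low 6 bits of the count, so an explicit `y & 63` is redundant when `c&63 == 63`. A minimal Go sketch of the identity (assuming the masking behavior stated in the comment, since Go itself does not mask shift counts):

```go
package main

import "fmt"

func main() {
	x := uint64(1)
	y := uint32(67)

	// 67 & 63 == 3, so a hardware shift that only reads the low
	// 6 bits of the count computes x << 3 either way.
	fmt.Println(x << (y & 63)) // 8
}
```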
// Constant rotate generation
(RLL x (MOVDconst [c])) -> (RLLconst x [c&31])
@@ -615,99 +676,8 @@
(MOVDEQ x y (InvertFlags cmp)) -> (MOVDEQ x y cmp)
(MOVDNE x y (InvertFlags cmp)) -> (MOVDNE x y cmp)
-// don't extend after proper load
-(MOVBreg x:(MOVBload _ _)) -> (MOVDreg x)
-(MOVBZreg x:(MOVBZload _ _)) -> (MOVDreg x)
-(MOVHreg x:(MOVBload _ _)) -> (MOVDreg x)
-(MOVHreg x:(MOVBZload _ _)) -> (MOVDreg x)
-(MOVHreg x:(MOVHload _ _)) -> (MOVDreg x)
-(MOVHZreg x:(MOVBZload _ _)) -> (MOVDreg x)
-(MOVHZreg x:(MOVHZload _ _)) -> (MOVDreg x)
-(MOVWreg x:(MOVBload _ _)) -> (MOVDreg x)
-(MOVWreg x:(MOVBZload _ _)) -> (MOVDreg x)
-(MOVWreg x:(MOVHload _ _)) -> (MOVDreg x)
-(MOVWreg x:(MOVHZload _ _)) -> (MOVDreg x)
-(MOVWreg x:(MOVWload _ _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVBZload _ _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVHZload _ _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVWZload _ _)) -> (MOVDreg x)
-
-// don't extend if argument is already extended
-(MOVBreg x:(Arg <t>)) && is8BitInt(t) && isSigned(t) -> (MOVDreg x)
-(MOVBZreg x:(Arg <t>)) && is8BitInt(t) && !isSigned(t) -> (MOVDreg x)
-(MOVHreg x:(Arg <t>)) && (is8BitInt(t) || is16BitInt(t)) && isSigned(t) -> (MOVDreg x)
-(MOVHZreg x:(Arg <t>)) && (is8BitInt(t) || is16BitInt(t)) && !isSigned(t) -> (MOVDreg x)
-(MOVWreg x:(Arg <t>)) && (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t) -> (MOVDreg x)
-(MOVWZreg x:(Arg <t>)) && (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t) -> (MOVDreg x)
-
-// fold double extensions
-(MOVBreg x:(MOVBreg _)) -> (MOVDreg x)
-(MOVBZreg x:(MOVBZreg _)) -> (MOVDreg x)
-(MOVHreg x:(MOVBreg _)) -> (MOVDreg x)
-(MOVHreg x:(MOVBZreg _)) -> (MOVDreg x)
-(MOVHreg x:(MOVHreg _)) -> (MOVDreg x)
-(MOVHZreg x:(MOVBZreg _)) -> (MOVDreg x)
-(MOVHZreg x:(MOVHZreg _)) -> (MOVDreg x)
-(MOVWreg x:(MOVBreg _)) -> (MOVDreg x)
-(MOVWreg x:(MOVBZreg _)) -> (MOVDreg x)
-(MOVWreg x:(MOVHreg _)) -> (MOVDreg x)
-(MOVWreg x:(MOVHZreg _)) -> (MOVDreg x)
-(MOVWreg x:(MOVWreg _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVBZreg _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVHZreg _)) -> (MOVDreg x)
-(MOVWZreg x:(MOVWZreg _)) -> (MOVDreg x)
-
-(MOVBreg (MOVBZreg x)) -> (MOVBreg x)
-(MOVBZreg (MOVBreg x)) -> (MOVBZreg x)
-(MOVHreg (MOVHZreg x)) -> (MOVHreg x)
-(MOVHZreg (MOVHreg x)) -> (MOVHZreg x)
-(MOVWreg (MOVWZreg x)) -> (MOVWreg x)
-(MOVWZreg (MOVWreg x)) -> (MOVWZreg x)
-
-// fold extensions into constants
-(MOVBreg (MOVDconst [c])) -> (MOVDconst [int64(int8(c))])
-(MOVBZreg (MOVDconst [c])) -> (MOVDconst [int64(uint8(c))])
-(MOVHreg (MOVDconst [c])) -> (MOVDconst [int64(int16(c))])
-(MOVHZreg (MOVDconst [c])) -> (MOVDconst [int64(uint16(c))])
-(MOVWreg (MOVDconst [c])) -> (MOVDconst [int64(int32(c))])
-(MOVWZreg (MOVDconst [c])) -> (MOVDconst [int64(uint32(c))])
-
-// sign extended loads
-// Note: The combined instruction must end up in the same block
-// as the original load. If not, we end up making a value with
-// memory type live in two different blocks, which can lead to
-// multiple memory values alive simultaneously.
-// Make sure we don't combine these ops if the load has another use.
-// This prevents a single load from being split into multiple loads
-// which then might return different values. See test/atomicload.go.
-(MOVBreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem)
-(MOVBreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem)
-(MOVBZreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem)
-(MOVBZreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem)
-(MOVHreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem)
-(MOVHreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem)
-(MOVHZreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem)
-(MOVHZreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem)
-(MOVWreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem)
-(MOVWreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem)
-(MOVWZreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem)
-(MOVWZreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem)
-
-(MOVBreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
-(MOVBreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
-(MOVBZreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
-(MOVBZreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
-(MOVHreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
-(MOVHreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
-(MOVHZreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
-(MOVHZreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
-(MOVWreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
-(MOVWreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
-(MOVWZreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
-(MOVWZreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
-
// replace load from same location as preceding store with copy
-(MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVDreg x)
+(MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> x
(MOVWload [off] {sym} ptr1 (MOVWstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVWreg x)
(MOVHload [off] {sym} ptr1 (MOVHstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVHreg x)
(MOVBload [off] {sym} ptr1 (MOVBstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVBreg x)
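The store-to-load forwarding rules above replace a load from an address with the value just stored there. The effect, sketched in plain Go (hypothetical illustration, not compiler code):

```go
package main

import "fmt"

func main() {
	var slot int64
	p := &slot

	// MOVDstore followed by MOVDload at the same address: the rewrite
	// replaces the load with the stored value directly, so no memory
	// round trip is needed.
	*p = 42
	fmt.Println(*p) // 42
}
```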
@@ -755,7 +725,7 @@
// remove unnecessary FPR <-> GPR moves
(LDGR (LGDR x)) -> x
-(LGDR (LDGR x)) -> (MOVDreg x)
+(LGDR (LDGR x)) -> x
// Don't extend before storing
(MOVWstore [off] {sym} ptr (MOVWreg x) mem) -> (MOVWstore [off] {sym} ptr x mem)
diff --git a/src/cmd/compile/internal/ssa/gen/S390XOps.go b/src/cmd/compile/internal/ssa/gen/S390XOps.go
index 03c8b3de06..b064e46377 100644
--- a/src/cmd/compile/internal/ssa/gen/S390XOps.go
+++ b/src/cmd/compile/internal/ssa/gen/S390XOps.go
@@ -373,9 +373,6 @@ func init() {
{name: "MOVHZreg", argLength: 1, reg: gp11sp, asm: "MOVHZ", typ: "UInt64"}, // zero extend arg0 from int16 to int64
{name: "MOVWreg", argLength: 1, reg: gp11sp, asm: "MOVW", typ: "Int64"}, // sign extend arg0 from int32 to int64
{name: "MOVWZreg", argLength: 1, reg: gp11sp, asm: "MOVWZ", typ: "UInt64"}, // zero extend arg0 from int32 to int64
- {name: "MOVDreg", argLength: 1, reg: gp11sp, asm: "MOVD"}, // move from arg0
-
- {name: "MOVDnop", argLength: 1, reg: gp11, resultInArg0: true}, // nop, return arg0 in same register
{name: "MOVDconst", reg: gp01, asm: "MOVD", typ: "UInt64", aux: "Int64", rematerializeable: true}, // auxint
@@ -489,7 +486,7 @@ func init() {
{name: "LoweredPanicBoundsB", argLength: 3, aux: "Int64", reg: regInfo{inputs: []regMask{r1, r2}}, typ: "Mem"}, // arg0=idx, arg1=len, arg2=mem, returns memory. AuxInt contains report code (see PanicBounds in generic.go).
{name: "LoweredPanicBoundsC", argLength: 3, aux: "Int64", reg: regInfo{inputs: []regMask{r0, r1}}, typ: "Mem"}, // arg0=idx, arg1=len, arg2=mem, returns memory. AuxInt contains report code (see PanicBounds in generic.go).
- // Constant conditon code values. The condition code can be 0, 1, 2 or 3.
+ // Constant condition code values. The condition code can be 0, 1, 2 or 3.
{name: "FlagEQ"}, // CC=0 (equal)
{name: "FlagLT"}, // CC=1 (less than)
{name: "FlagGT"}, // CC=2 (greater than)
@@ -571,6 +568,19 @@ func init() {
clobberFlags: true,
},
+ // unsigned multiplication (64x64 → 128)
+ //
+ // Multiply the two 64-bit input operands together and place the 128-bit result into
+ // an even-odd register pair. The second register in the target pair also contains
+ // one of the input operands. Since we don't currently have a way to specify an
+ // even-odd register pair we hardcode this register pair as R2:R3.
+ {
+ name: "MLGR",
+ argLength: 2,
+ reg: regInfo{inputs: []regMask{gp, r3}, outputs: []regMask{r2, r3}},
+ asm: "MLGR",
+ },
+
// pseudo operations to sum the output of the POPCNT instruction
{name: "SumBytes2", argLength: 1, typ: "UInt8"}, // sum the rightmost 2 bytes in arg0 ignoring overflow
{name: "SumBytes4", argLength: 1, typ: "UInt8"}, // sum the rightmost 4 bytes in arg0 ignoring overflow
diff --git a/src/cmd/compile/internal/ssa/gen/Wasm.rules b/src/cmd/compile/internal/ssa/gen/Wasm.rules
index 72080703f5..c9dd6e8084 100644
--- a/src/cmd/compile/internal/ssa/gen/Wasm.rules
+++ b/src/cmd/compile/internal/ssa/gen/Wasm.rules
@@ -103,6 +103,8 @@
// Unsigned shifts need to return 0 if shift amount is >= width of shifted value.
(Lsh64x64 x y) && shiftIsBounded(v) -> (I64Shl x y)
+(Lsh64x64 x (I64Const [c])) && uint64(c) < 64 -> (I64Shl x (I64Const [c]))
+(Lsh64x64 x (I64Const [c])) && uint64(c) >= 64 -> (I64Const [0])
(Lsh64x64 x y) -> (Select (I64Shl x y) (I64Const [0]) (I64LtU y (I64Const [64])))
(Lsh64x(32|16|8) x y) -> (Lsh64x64 x (ZeroExt(32|16|8)to64 y))
@@ -116,6 +118,8 @@
(Lsh8x(32|16|8) x y) -> (Lsh64x64 x (ZeroExt(32|16|8)to64 y))
(Rsh64Ux64 x y) && shiftIsBounded(v) -> (I64ShrU x y)
+(Rsh64Ux64 x (I64Const [c])) && uint64(c) < 64 -> (I64ShrU x (I64Const [c]))
+(Rsh64Ux64 x (I64Const [c])) && uint64(c) >= 64 -> (I64Const [0])
(Rsh64Ux64 x y) -> (Select (I64ShrU x y) (I64Const [0]) (I64LtU y (I64Const [64])))
(Rsh64Ux(32|16|8) x y) -> (Rsh64Ux64 x (ZeroExt(32|16|8)to64 y))
@@ -132,6 +136,8 @@
// We implement this by setting the shift value to (width - 1) if the shift value is >= width.
(Rsh64x64 x y) && shiftIsBounded(v) -> (I64ShrS x y)
+(Rsh64x64 x (I64Const [c])) && uint64(c) < 64 -> (I64ShrS x (I64Const [c]))
+(Rsh64x64 x (I64Const [c])) && uint64(c) >= 64 -> (I64ShrS x (I64Const [63]))
(Rsh64x64 x y) -> (I64ShrS x (Select y (I64Const [63]) (I64LtU y (I64Const [64]))))
(Rsh64x(32|16|8) x y) -> (Rsh64x64 x (ZeroExt(32|16|8)to64 y))
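The guards exist because wasm shift instructions take the count modulo 64, while Go defines shifts by counts >= the operand width. When the count is a constant the result can be computed at compile time, which is what the new `I64Const` rules do. A Go sketch of the semantics the rules must preserve:

```go
package main

import "fmt"

func main() {
	u := uint64(1 << 40)
	x := int64(-8)
	c := uint(100)

	// Go semantics for counts >= 64: unsigned shifts produce 0 and
	// signed right shifts produce the sign bit, matching the
	// constant-shift rules above.
	fmt.Println(u>>c, x>>c) // 0 -1
}
```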
diff --git a/src/cmd/compile/internal/ssa/gen/genericOps.go b/src/cmd/compile/internal/ssa/gen/genericOps.go
index 8933aa51ef..3ca773b595 100644
--- a/src/cmd/compile/internal/ssa/gen/genericOps.go
+++ b/src/cmd/compile/internal/ssa/gen/genericOps.go
@@ -484,7 +484,7 @@ var genericOps = []opData{
{name: "VarDef", argLength: 1, aux: "Sym", typ: "Mem", symEffect: "None", zeroWidth: true}, // aux is a *gc.Node of a variable that is about to be initialized. arg0=mem, returns mem
{name: "VarKill", argLength: 1, aux: "Sym", symEffect: "None"}, // aux is a *gc.Node of a variable that is known to be dead. arg0=mem, returns mem
- // TODO: what's the difference betweeen VarLive and KeepAlive?
+ // TODO: what's the difference between VarLive and KeepAlive?
{name: "VarLive", argLength: 1, aux: "Sym", symEffect: "Read", zeroWidth: true}, // aux is a *gc.Node of a variable that must be kept live. arg0=mem, returns mem
{name: "KeepAlive", argLength: 2, typ: "Mem", zeroWidth: true}, // arg[0] is a value that must be kept alive until this mark. arg[1]=mem, returns mem
diff --git a/src/cmd/compile/internal/ssa/gen/main.go b/src/cmd/compile/internal/ssa/gen/main.go
index 9c0e0904b2..ba17148fc9 100644
--- a/src/cmd/compile/internal/ssa/gen/main.go
+++ b/src/cmd/compile/internal/ssa/gen/main.go
@@ -74,7 +74,7 @@ type regInfo struct {
// clobbers encodes the set of registers that are overwritten by
// the instruction (other than the output registers).
clobbers regMask
- // outpus[i] encodes the set of registers allowed for the i'th output.
+ // outputs[i] encodes the set of registers allowed for the i'th output.
outputs []regMask
}
@@ -430,7 +430,7 @@ func (a arch) Name() string {
//
// Note that there is no limit on the concurrency at the moment. On a four-core
// laptop at the time of writing, peak RSS usually reached ~230MiB, which seems
-// doable by practially any machine nowadays. If that stops being the case, we
+// doable by practically any machine nowadays. If that stops being the case, we
// can cap this func to a fixed number of architectures being generated at once.
func genLower() {
var wg sync.WaitGroup
diff --git a/src/cmd/compile/internal/ssa/gen/rulegen.go b/src/cmd/compile/internal/ssa/gen/rulegen.go
index ad09975a6d..3a18ca252c 100644
--- a/src/cmd/compile/internal/ssa/gen/rulegen.go
+++ b/src/cmd/compile/internal/ssa/gen/rulegen.go
@@ -21,12 +21,13 @@ import (
"go/parser"
"go/printer"
"go/token"
- "go/types"
"io"
"log"
"os"
+ "path"
"regexp"
"sort"
+ "strconv"
"strings"
"golang.org/x/tools/go/ast/astutil"
@@ -266,38 +267,38 @@ func genRulesSuffix(arch arch, suff string) {
}
tfile := fset.File(file.Pos())
- for n := 0; n < 3; n++ {
- unused := make(map[token.Pos]bool)
- conf := types.Config{Error: func(err error) {
- if terr, ok := err.(types.Error); ok && strings.Contains(terr.Msg, "not used") {
- unused[terr.Pos] = true
- }
- }}
- _, _ = conf.Check("ssa", fset, []*ast.File{file}, nil)
- if len(unused) == 0 {
- break
- }
- pre := func(c *astutil.Cursor) bool {
- if node := c.Node(); node != nil && unused[node.Pos()] {
- c.Delete()
- // Unused imports and declarations use exactly
- // one line. Prevent leaving an empty line.
- tfile.MergeLine(tfile.Position(node.Pos()).Line)
- return false
- }
+ // First, use unusedInspector to find the unused declarations by their
+ // start position.
+ u := unusedInspector{unused: make(map[token.Pos]bool)}
+ u.node(file)
+
+ // Then, delete said nodes via astutil.Apply.
+ pre := func(c *astutil.Cursor) bool {
+ node := c.Node()
+ if node == nil {
return true
}
- post := func(c *astutil.Cursor) bool {
- switch node := c.Node().(type) {
- case *ast.GenDecl:
- if len(node.Specs) == 0 {
- c.Delete()
- }
+ if u.unused[node.Pos()] {
+ c.Delete()
+ // Unused imports and declarations use exactly
+ // one line. Prevent leaving an empty line.
+ tfile.MergeLine(tfile.Position(node.Pos()).Line)
+ return false
+ }
+ return true
+ }
+ post := func(c *astutil.Cursor) bool {
+ switch node := c.Node().(type) {
+ case *ast.GenDecl:
+ if len(node.Specs) == 0 {
+ // Don't leave a broken or empty GenDecl behind,
+ // such as "import ()".
+ c.Delete()
}
- return true
}
- file = astutil.Apply(file, pre, post).(*ast.File)
+ return true
}
+ file = astutil.Apply(file, pre, post).(*ast.File)
// Write the well-formatted source to file
f, err := os.Create("../rewrite" + arch.name + suff + ".go")
@@ -319,6 +320,234 @@ func genRulesSuffix(arch arch, suff string) {
}
}
+// unusedInspector can be used to detect unused variables and imports in an
+// ast.Node via its node method. The result is available in the "unused" map.
+//
+// Note that unusedInspector is lazy and best-effort; it only supports the node
+// types and patterns used by the rulegen program.
+type unusedInspector struct {
+ // scope is the current scope, which can never be nil when a declaration
+ // is encountered. That is, the unusedInspector.node entrypoint should
+ // generally be an entire file or block.
+ scope *scope
+
+ // unused is the resulting set of unused declared names, indexed by the
+ // starting position of the node that declared the name.
+ unused map[token.Pos]bool
+
+ // defining is the object currently being defined; this is useful so
+ // that if "foo := bar" is unused and removed, we can then detect if
+ // "bar" becomes unused as well.
+ defining *object
+}
+
+// scoped opens a new scope when called, and returns a function which closes
+// that same scope. When a scope is closed, unused variables are recorded.
+func (u *unusedInspector) scoped() func() {
+ outer := u.scope
+ u.scope = &scope{outer: outer, objects: map[string]*object{}}
+ return func() {
+ for anyUnused := true; anyUnused; {
+ anyUnused = false
+ for _, obj := range u.scope.objects {
+ if obj.numUses > 0 {
+ continue
+ }
+ u.unused[obj.pos] = true
+ for _, used := range obj.used {
+ if used.numUses--; used.numUses == 0 {
+ anyUnused = true
+ }
+ }
+ // We've decremented numUses for each of the
+ // objects in used. Zero this slice too, to keep
+ // everything consistent.
+ obj.used = nil
+ }
+ }
+ u.scope = outer
+ }
+}
+
+func (u *unusedInspector) exprs(list []ast.Expr) {
+ for _, x := range list {
+ u.node(x)
+ }
+}
+
+func (u *unusedInspector) stmts(list []ast.Stmt) {
+ for _, x := range list {
+ u.node(x)
+ }
+}
+
+func (u *unusedInspector) decls(list []ast.Decl) {
+ for _, x := range list {
+ u.node(x)
+ }
+}
+
+func (u *unusedInspector) node(node ast.Node) {
+ switch node := node.(type) {
+ case *ast.File:
+ defer u.scoped()()
+ u.decls(node.Decls)
+ case *ast.GenDecl:
+ for _, spec := range node.Specs {
+ u.node(spec)
+ }
+ case *ast.ImportSpec:
+ impPath, _ := strconv.Unquote(node.Path.Value)
+ name := path.Base(impPath)
+ u.scope.objects[name] = &object{
+ name: name,
+ pos: node.Pos(),
+ }
+ case *ast.FuncDecl:
+ u.node(node.Type)
+ if node.Body != nil {
+ u.node(node.Body)
+ }
+ case *ast.FuncType:
+ if node.Params != nil {
+ u.node(node.Params)
+ }
+ if node.Results != nil {
+ u.node(node.Results)
+ }
+ case *ast.FieldList:
+ for _, field := range node.List {
+ u.node(field)
+ }
+ case *ast.Field:
+ u.node(node.Type)
+
+ // statements
+
+ case *ast.BlockStmt:
+ defer u.scoped()()
+ u.stmts(node.List)
+ case *ast.IfStmt:
+ if node.Init != nil {
+ u.node(node.Init)
+ }
+ u.node(node.Cond)
+ u.node(node.Body)
+ if node.Else != nil {
+ u.node(node.Else)
+ }
+ case *ast.ForStmt:
+ if node.Init != nil {
+ u.node(node.Init)
+ }
+ if node.Cond != nil {
+ u.node(node.Cond)
+ }
+ if node.Post != nil {
+ u.node(node.Post)
+ }
+ u.node(node.Body)
+ case *ast.SwitchStmt:
+ if node.Init != nil {
+ u.node(node.Init)
+ }
+ if node.Tag != nil {
+ u.node(node.Tag)
+ }
+ u.node(node.Body)
+ case *ast.CaseClause:
+ u.exprs(node.List)
+ defer u.scoped()()
+ u.stmts(node.Body)
+ case *ast.BranchStmt:
+ case *ast.ExprStmt:
+ u.node(node.X)
+ case *ast.AssignStmt:
+ if node.Tok != token.DEFINE {
+ u.exprs(node.Rhs)
+ u.exprs(node.Lhs)
+ break
+ }
+ if len(node.Lhs) != 1 {
+ panic("no support for := with multiple names")
+ }
+
+ name := node.Lhs[0].(*ast.Ident)
+ obj := &object{
+ name: name.Name,
+ pos: name.NamePos,
+ }
+
+ old := u.defining
+ u.defining = obj
+ u.exprs(node.Rhs)
+ u.defining = old
+
+ u.scope.objects[name.Name] = obj
+ case *ast.ReturnStmt:
+ u.exprs(node.Results)
+
+ // expressions
+
+ case *ast.CallExpr:
+ u.node(node.Fun)
+ u.exprs(node.Args)
+ case *ast.SelectorExpr:
+ u.node(node.X)
+ case *ast.UnaryExpr:
+ u.node(node.X)
+ case *ast.BinaryExpr:
+ u.node(node.X)
+ u.node(node.Y)
+ case *ast.StarExpr:
+ u.node(node.X)
+ case *ast.ParenExpr:
+ u.node(node.X)
+ case *ast.IndexExpr:
+ u.node(node.X)
+ u.node(node.Index)
+ case *ast.TypeAssertExpr:
+ u.node(node.X)
+ u.node(node.Type)
+ case *ast.Ident:
+ if obj := u.scope.Lookup(node.Name); obj != nil {
+ obj.numUses++
+ if u.defining != nil {
+ u.defining.used = append(u.defining.used, obj)
+ }
+ }
+ case *ast.BasicLit:
+ default:
+ panic(fmt.Sprintf("unhandled node: %T", node))
+ }
+}
+
+// scope keeps track of a certain scope and its declared names, as well as the
+// outer (parent) scope.
+type scope struct {
+ outer *scope // can be nil, if this is the top-level scope
+ objects map[string]*object // indexed by each declared name
+}
+
+func (s *scope) Lookup(name string) *object {
+ if obj := s.objects[name]; obj != nil {
+ return obj
+ }
+ if s.outer == nil {
+ return nil
+ }
+ return s.outer.Lookup(name)
+}
+
+// object keeps track of a declared name, such as a variable or import.
+type object struct {
+ name string
+ pos token.Pos // start position of the node declaring the object
+
+ numUses int // number of times this object is used
+ used []*object // objects that its declaration makes use of
+}
+
func fprint(w io.Writer, n Node) {
switch n := n.(type) {
case *File:
@@ -396,7 +625,7 @@ type Node interface{}
// ast.Stmt under some limited circumstances.
type Statement interface{}
-// bodyBase is shared by all of our statement psuedo-node types which can
+// bodyBase is shared by all of our statement pseudo-node types which can
// contain other statements.
type bodyBase struct {
list []Statement
diff --git a/src/cmd/compile/internal/ssa/loopbce.go b/src/cmd/compile/internal/ssa/loopbce.go
index 5f02643ccd..092e7aa35b 100644
--- a/src/cmd/compile/internal/ssa/loopbce.go
+++ b/src/cmd/compile/internal/ssa/loopbce.go
@@ -186,7 +186,7 @@ func findIndVar(f *Func) []indVar {
// Handle induction variables of these forms.
// KNN is known-not-negative.
// SIGNED ARITHMETIC ONLY. (see switch on b.Control.Op above)
- // Possibilitis for KNN are len and cap; perhaps we can infer others.
+ // Possibilities for KNN are len and cap; perhaps we can infer others.
// for i := 0; i <= KNN-k ; i += k
// for i := 0; i < KNN-(k-1); i += k
// Also handle decreasing.
diff --git a/src/cmd/compile/internal/ssa/opGen.go b/src/cmd/compile/internal/ssa/opGen.go
index 57ca6b01cc..de43cd14e8 100644
--- a/src/cmd/compile/internal/ssa/opGen.go
+++ b/src/cmd/compile/internal/ssa/opGen.go
@@ -2085,8 +2085,6 @@ const (
OpS390XMOVHZreg
OpS390XMOVWreg
OpS390XMOVWZreg
- OpS390XMOVDreg
- OpS390XMOVDnop
OpS390XMOVDconst
OpS390XLDGR
OpS390XLGDR
@@ -2179,6 +2177,7 @@ const (
OpS390XLoweredAtomicExchange64
OpS390XFLOGR
OpS390XPOPCNT
+ OpS390XMLGR
OpS390XSumBytes2
OpS390XSumBytes4
OpS390XSumBytes8
@@ -28087,32 +28086,6 @@ var opcodeTable = [...]opInfo{
},
},
},
- {
- name: "MOVDreg",
- argLen: 1,
- asm: s390x.AMOVD,
- reg: regInfo{
- inputs: []inputInfo{
- {0, 56319}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 SP
- },
- outputs: []outputInfo{
- {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14
- },
- },
- },
- {
- name: "MOVDnop",
- argLen: 1,
- resultInArg0: true,
- reg: regInfo{
- inputs: []inputInfo{
- {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14
- },
- outputs: []outputInfo{
- {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14
- },
- },
- },
{
name: "MOVDconst",
auxType: auxInt64,
@@ -29388,6 +29361,21 @@ var opcodeTable = [...]opInfo{
},
},
},
+ {
+ name: "MLGR",
+ argLen: 2,
+ asm: s390x.AMLGR,
+ reg: regInfo{
+ inputs: []inputInfo{
+ {1, 8}, // R3
+ {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14
+ },
+ outputs: []outputInfo{
+ {0, 4}, // R2
+ {1, 8}, // R3
+ },
+ },
+ },
{
name: "SumBytes2",
argLen: 1,
diff --git a/src/cmd/compile/internal/ssa/regalloc.go b/src/cmd/compile/internal/ssa/regalloc.go
index 8abbf61507..3b7049b823 100644
--- a/src/cmd/compile/internal/ssa/regalloc.go
+++ b/src/cmd/compile/internal/ssa/regalloc.go
@@ -411,7 +411,7 @@ func (s *regAllocState) allocReg(mask regMask, v *Value) register {
if s.f.Config.ctxt.Arch.Arch == sys.ArchWasm {
// TODO(neelance): In theory this should never happen, because all wasm registers are equal.
- // So if there is still a free register, the allocation should have picked that one in the first place insead of
+ // So if there is still a free register, the allocation should have picked that one in the first place instead of
// trying to kick some other value out. In practice, this case does happen and it breaks the stack optimization.
s.freeReg(r)
return r
@@ -489,7 +489,7 @@ func (s *regAllocState) allocValToReg(v *Value, mask regMask, nospill bool, pos
}
var r register
- // If nospill is set, the value is used immedately, so it can live on the WebAssembly stack.
+ // If nospill is set, the value is used immediately, so it can live on the WebAssembly stack.
onWasmStack := nospill && s.f.Config.ctxt.Arch.Arch == sys.ArchWasm
if !onWasmStack {
// Allocate a register.
diff --git a/src/cmd/compile/internal/ssa/rewriteS390X.go b/src/cmd/compile/internal/ssa/rewriteS390X.go
index 2de5e1b83f..264bf255ce 100644
--- a/src/cmd/compile/internal/ssa/rewriteS390X.go
+++ b/src/cmd/compile/internal/ssa/rewriteS390X.go
@@ -335,6 +335,8 @@ func rewriteValueS390X(v *Value) bool {
return rewriteValueS390X_OpMul64_0(v)
case OpMul64F:
return rewriteValueS390X_OpMul64F_0(v)
+ case OpMul64uhilo:
+ return rewriteValueS390X_OpMul64uhilo_0(v)
case OpMul8:
return rewriteValueS390X_OpMul8_0(v)
case OpNeg16:
@@ -560,13 +562,13 @@ func rewriteValueS390X(v *Value) bool {
case OpS390XMOVBZloadidx:
return rewriteValueS390X_OpS390XMOVBZloadidx_0(v)
case OpS390XMOVBZreg:
- return rewriteValueS390X_OpS390XMOVBZreg_0(v) || rewriteValueS390X_OpS390XMOVBZreg_10(v)
+ return rewriteValueS390X_OpS390XMOVBZreg_0(v) || rewriteValueS390X_OpS390XMOVBZreg_10(v) || rewriteValueS390X_OpS390XMOVBZreg_20(v)
case OpS390XMOVBload:
return rewriteValueS390X_OpS390XMOVBload_0(v)
case OpS390XMOVBloadidx:
return rewriteValueS390X_OpS390XMOVBloadidx_0(v)
case OpS390XMOVBreg:
- return rewriteValueS390X_OpS390XMOVBreg_0(v)
+ return rewriteValueS390X_OpS390XMOVBreg_0(v) || rewriteValueS390X_OpS390XMOVBreg_10(v)
case OpS390XMOVBstore:
return rewriteValueS390X_OpS390XMOVBstore_0(v) || rewriteValueS390X_OpS390XMOVBstore_10(v)
case OpS390XMOVBstoreconst:
@@ -591,10 +593,6 @@ func rewriteValueS390X(v *Value) bool {
return rewriteValueS390X_OpS390XMOVDload_0(v)
case OpS390XMOVDloadidx:
return rewriteValueS390X_OpS390XMOVDloadidx_0(v)
- case OpS390XMOVDnop:
- return rewriteValueS390X_OpS390XMOVDnop_0(v) || rewriteValueS390X_OpS390XMOVDnop_10(v)
- case OpS390XMOVDreg:
- return rewriteValueS390X_OpS390XMOVDreg_0(v) || rewriteValueS390X_OpS390XMOVDreg_10(v)
case OpS390XMOVDstore:
return rewriteValueS390X_OpS390XMOVDstore_0(v)
case OpS390XMOVDstoreconst:
@@ -638,7 +636,7 @@ func rewriteValueS390X(v *Value) bool {
case OpS390XMOVWloadidx:
return rewriteValueS390X_OpS390XMOVWloadidx_0(v)
case OpS390XMOVWreg:
- return rewriteValueS390X_OpS390XMOVWreg_0(v) || rewriteValueS390X_OpS390XMOVWreg_10(v)
+ return rewriteValueS390X_OpS390XMOVWreg_0(v) || rewriteValueS390X_OpS390XMOVWreg_10(v) || rewriteValueS390X_OpS390XMOVWreg_20(v)
case OpS390XMOVWstore:
return rewriteValueS390X_OpS390XMOVWstore_0(v) || rewriteValueS390X_OpS390XMOVWstore_10(v)
case OpS390XMOVWstoreconst:
@@ -682,23 +680,23 @@ func rewriteValueS390X(v *Value) bool {
case OpS390XRLLG:
return rewriteValueS390X_OpS390XRLLG_0(v)
case OpS390XSLD:
- return rewriteValueS390X_OpS390XSLD_0(v) || rewriteValueS390X_OpS390XSLD_10(v)
+ return rewriteValueS390X_OpS390XSLD_0(v)
case OpS390XSLW:
- return rewriteValueS390X_OpS390XSLW_0(v) || rewriteValueS390X_OpS390XSLW_10(v)
+ return rewriteValueS390X_OpS390XSLW_0(v)
case OpS390XSRAD:
- return rewriteValueS390X_OpS390XSRAD_0(v) || rewriteValueS390X_OpS390XSRAD_10(v)
+ return rewriteValueS390X_OpS390XSRAD_0(v)
case OpS390XSRADconst:
return rewriteValueS390X_OpS390XSRADconst_0(v)
case OpS390XSRAW:
- return rewriteValueS390X_OpS390XSRAW_0(v) || rewriteValueS390X_OpS390XSRAW_10(v)
+ return rewriteValueS390X_OpS390XSRAW_0(v)
case OpS390XSRAWconst:
return rewriteValueS390X_OpS390XSRAWconst_0(v)
case OpS390XSRD:
- return rewriteValueS390X_OpS390XSRD_0(v) || rewriteValueS390X_OpS390XSRD_10(v)
+ return rewriteValueS390X_OpS390XSRD_0(v)
case OpS390XSRDconst:
return rewriteValueS390X_OpS390XSRDconst_0(v)
case OpS390XSRW:
- return rewriteValueS390X_OpS390XSRW_0(v) || rewriteValueS390X_OpS390XSRW_10(v)
+ return rewriteValueS390X_OpS390XSRW_0(v)
case OpS390XSTM2:
return rewriteValueS390X_OpS390XSTM2_0(v)
case OpS390XSTMG2:
@@ -4613,6 +4611,19 @@ func rewriteValueS390X_OpMul64F_0(v *Value) bool {
return true
}
}
+func rewriteValueS390X_OpMul64uhilo_0(v *Value) bool {
+ // match: (Mul64uhilo x y)
+ // cond:
+ // result: (MLGR x y)
+ for {
+ y := v.Args[1]
+ x := v.Args[0]
+ v.reset(OpS390XMLGR)
+ v.AddArg(x)
+ v.AddArg(y)
+ return true
+ }
+}
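The new `(Mul64uhilo x y) -> (MLGR x y)` rule above is what allows a full 64×64→128-bit unsigned multiply to lower to a single instruction on s390x. Its semantics can be checked in portable Go via `math/bits.Mul64`, which is the usual source of `Mul64uhilo` ops (a sketch; nothing here is s390x-specific):

```go
package main

import (
	"fmt"
	"math/bits"
)

// mul64uhilo mirrors what Mul64uhilo computes: the 128-bit product of
// two unsigned 64-bit values, split into high and low 64-bit halves.
func mul64uhilo(x, y uint64) (hi, lo uint64) {
	return bits.Mul64(x, y)
}

func main() {
	hi, lo := mul64uhilo(1<<63, 4) // 2^63 * 4 = 2^65
	fmt.Println(hi, lo)            // 2 0
	hi, lo = mul64uhilo(^uint64(0), ^uint64(0))
	fmt.Println(hi, lo) // 18446744073709551614 1
}
```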
func rewriteValueS390X_OpMul8_0(v *Value) bool {
// match: (Mul8 x y)
// cond:
@@ -10676,14 +10687,15 @@ func rewriteValueS390X_OpS390XLEDBR_0(v *Value) bool {
func rewriteValueS390X_OpS390XLGDR_0(v *Value) bool {
// match: (LGDR (LDGR x))
// cond:
- // result: (MOVDreg x)
+ // result: x
for {
v_0 := v.Args[0]
if v_0.Op != OpS390XLDGR {
break
}
x := v_0.Args[0]
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
@@ -10953,116 +10965,248 @@ func rewriteValueS390X_OpS390XMOVBZloadidx_0(v *Value) bool {
return false
}
func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool {
- // match: (MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ b := v.Block
+ // match: (MOVBZreg e:(MOVBreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVDLT {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBreg {
break
}
- _ = x.Args[2]
- x_0 := x.Args[0]
- if x_0.Op != OpS390XMOVDconst {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- c := x_0.AuxInt
- x_1 := x.Args[1]
- if x_1.Op != OpS390XMOVDconst {
+ v.reset(OpS390XMOVBZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg e:(MOVHreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHreg {
break
}
- d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVBZreg)
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVDLE {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
break
}
- _ = x.Args[2]
- x_0 := x.Args[0]
- if x_0.Op != OpS390XMOVDconst {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- c := x_0.AuxInt
- x_1 := x.Args[1]
- if x_1.Op != OpS390XMOVDconst {
+ v.reset(OpS390XMOVBZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg e:(MOVBZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBZreg {
break
}
- d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVBZreg)
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg e:(MOVHZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg x:(MOVBZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDGT {
+ if x.Op != OpS390XMOVBZload {
break
}
- _ = x.Args[2]
- x_0 := x.Args[0]
- if x_0.Op != OpS390XMOVDconst {
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- c := x_0.AuxInt
- x_1 := x.Args[1]
- if x_1.Op != OpS390XMOVDconst {
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBZloadidx {
break
}
- d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDGE {
+ if x.Op != OpS390XMOVBZloadidx {
break
}
_ = x.Args[2]
- x_0 := x.Args[0]
- if x_0.Op != OpS390XMOVDconst {
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- c := x_0.AuxInt
- x_1 := x.Args[1]
- if x_1.Op != OpS390XMOVDconst {
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBZreg <t> x:(MOVBload [o] {s} p mem))
+ // cond: x.Uses == 1 && clobber(x)
+ // result: @x.Block (MOVBZload <t> [o] {s} p mem)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBload {
break
}
- d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ o := x.AuxInt
+ s := x.Aux
+ mem := x.Args[1]
+ p := x.Args[0]
+ if !(x.Uses == 1 && clobber(x)) {
+ break
+ }
+ b = x.Block
+ v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(mem)
+ return true
+ }
+ return false
+}
+func rewriteValueS390X_OpS390XMOVBZreg_10(v *Value) bool {
+ b := v.Block
+ // match: (MOVBZreg <t> x:(MOVBloadidx [o] {s} p i mem))
+ // cond: x.Uses == 1 && clobber(x)
+ // result: @x.Block (MOVBZloadidx <t> [o] {s} p i mem)
+ for {
+ t := v.Type
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBloadidx {
+ break
+ }
+ o := x.AuxInt
+ s := x.Aux
+ mem := x.Args[2]
+ p := x.Args[0]
+ i := x.Args[1]
+ if !(x.Uses == 1 && clobber(x)) {
+ break
+ }
+ b = x.Block
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t)
+ v.reset(OpCopy)
+ v.AddArg(v0)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
+ v0.AddArg(mem)
+ return true
+ }
+ // match: (MOVBZreg x:(Arg <t>))
+ // cond: !t.IsSigned() && t.Size() == 1
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpArg {
+ break
+ }
+ t := x.Type
+ if !(!t.IsSigned() && t.Size() == 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64( uint8(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(uint8(c))
+ return true
+ }
+ // match: (MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDEQ {
+ if x.Op != OpS390XMOVDLT {
break
}
_ = x.Args[2]
@@ -11076,19 +11220,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool {
break
}
d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDNE {
+ if x.Op != OpS390XMOVDLE {
break
}
_ = x.Args[2]
@@ -11102,19 +11247,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool {
break
}
d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDGTnoinv {
+ if x.Op != OpS390XMOVDGT {
break
}
_ = x.Args[2]
@@ -11128,19 +11274,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool {
break
}
d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _))
- // cond: int64(uint8(c)) == c && int64(uint8(d)) == d
- // result: (MOVDreg x)
+ // match: (MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVDGEnoinv {
+ if x.Op != OpS390XMOVDGE {
break
}
_ = x.Args[2]
@@ -11154,187 +11301,125 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool {
break
}
d := x_1.AuxInt
- if !(int64(uint8(c)) == c && int64(uint8(d)) == d) {
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVBZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ if x.Op != OpS390XMOVDEQ {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
- v.AddArg(x)
- return true
- }
- // match: (MOVBZreg x:(Arg <t>))
- // cond: is8BitInt(t) && !isSigned(t)
- // result: (MOVDreg x)
- for {
- x := v.Args[0]
- if x.Op != OpArg {
+ _ = x.Args[2]
+ x_0 := x.Args[0]
+ if x_0.Op != OpS390XMOVDconst {
break
}
- t := x.Type
- if !(is8BitInt(t) && !isSigned(t)) {
+ c := x_0.AuxInt
+ x_1 := x.Args[1]
+ if x_1.Op != OpS390XMOVDconst {
break
}
- v.reset(OpS390XMOVDreg)
- v.AddArg(x)
- return true
- }
- return false
-}
-func rewriteValueS390X_OpS390XMOVBZreg_10(v *Value) bool {
- b := v.Block
- typ := &b.Func.Config.Types
- // match: (MOVBZreg x:(MOVBZreg _))
- // cond:
- // result: (MOVDreg x)
- for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZreg {
+ d := x_1.AuxInt
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVBZreg (MOVBreg x))
- // cond:
- // result: (MOVBZreg x)
+ // match: (MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVBreg {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVDNE {
break
}
- x := v_0.Args[0]
- v.reset(OpS390XMOVBZreg)
- v.AddArg(x)
- return true
- }
- // match: (MOVBZreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(uint8(c))])
- for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ _ = x.Args[2]
+ x_0 := x.Args[0]
+ if x_0.Op != OpS390XMOVDconst {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(uint8(c))
- return true
- }
- // match: (MOVBZreg x:(MOVBZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZload <v.Type> [off] {sym} ptr mem)
- for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ c := x_0.AuxInt
+ x_1 := x.Args[1]
+ if x_1.Op != OpS390XMOVDconst {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ d := x_1.AuxInt
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVBload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZload <v.Type> [off] {sym} ptr mem)
+ // match: (MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBload {
+ if x.Op != OpS390XMOVDGTnoinv {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ x_0 := x.Args[0]
+ if x_0.Op != OpS390XMOVDconst {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, v.Type)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVBZreg x:(MOVBZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZloadidx <v.Type> [off] {sym} ptr idx mem)
- for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZloadidx {
+ c := x_0.AuxInt
+ x_1 := x.Args[1]
+ if x_1.Op != OpS390XMOVDconst {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ d := x_1.AuxInt
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVBZreg x:(MOVBloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZloadidx <v.Type> [off] {sym} ptr idx mem)
+ return false
+}
+func rewriteValueS390X_OpS390XMOVBZreg_20(v *Value) bool {
+ b := v.Block
+ typ := &b.Func.Config.Types
+ // match: (MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _))
+ // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBloadidx {
+ if x.Op != OpS390XMOVDGEnoinv {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ x_0 := x.Args[0]
+ if x_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := x_0.AuxInt
+ x_1 := x.Args[1]
+ if x_1.Op != OpS390XMOVDconst {
+ break
+ }
+ d := x_1.AuxInt
+ if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
// match: (MOVBZreg (ANDWconst [m] x))
@@ -11589,176 +11674,240 @@ func rewriteValueS390X_OpS390XMOVBloadidx_0(v *Value) bool {
}
func rewriteValueS390X_OpS390XMOVBreg_0(v *Value) bool {
b := v.Block
- typ := &b.Func.Config.Types
- // match: (MOVBreg x:(MOVBload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVBreg e:(MOVBreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBreg)
v.AddArg(x)
return true
}
- // match: (MOVBreg x:(Arg <t>))
- // cond: is8BitInt(t) && isSigned(t)
- // result: (MOVDreg x)
+ // match: (MOVBreg e:(MOVHreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
for {
- x := v.Args[0]
- if x.Op != OpArg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHreg {
break
}
- t := x.Type
- if !(is8BitInt(t) && isSigned(t)) {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVBreg)
v.AddArg(x)
return true
}
- // match: (MOVBreg x:(MOVBreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVBreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBreg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVBreg)
v.AddArg(x)
return true
}
- // match: (MOVBreg (MOVBZreg x))
- // cond:
+ // match: (MOVBreg e:(MOVBZreg x))
+ // cond: clobberIfDead(e)
// result: (MOVBreg x)
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVBZreg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- x := v_0.Args[0]
v.reset(OpS390XMOVBreg)
v.AddArg(x)
return true
}
- // match: (MOVBreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(int8(c))])
+ // match: (MOVBreg e:(MOVHZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHZreg {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(int8(c))
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBreg)
+ v.AddArg(x)
return true
}
- // match: (MOVBreg x:(MOVBZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBload <v.Type> [off] {sym} ptr mem)
+ // match: (MOVBreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBreg x:(MOVBload _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ if x.Op != OpS390XMOVBload {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[1]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVBreg x:(MOVBload [off] {sym} ptr mem))
+ // match: (MOVBreg <t> x:(MOVBZload [o] {s} p mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBload <v.Type> [off] {sym} ptr mem)
+ // result: @x.Block (MOVBload <t> [o] {s} p mem)
for {
+ t := v.Type
x := v.Args[0]
- if x.Op != OpS390XMOVBload {
+ if x.Op != OpS390XMOVBZload {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[1]
- ptr := x.Args[0]
+ p := x.Args[0]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBload, v.Type)
+ v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
v0.AddArg(mem)
return true
}
- // match: (MOVBreg x:(MOVBZloadidx [off] {sym} ptr idx mem))
+ return false
+}
+func rewriteValueS390X_OpS390XMOVBreg_10(v *Value) bool {
+ b := v.Block
+ typ := &b.Func.Config.Types
+ // match: (MOVBreg <t> x:(MOVBZloadidx [o] {s} p i mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBloadidx <v.Type> [off] {sym} ptr idx mem)
+ // result: @x.Block (MOVBloadidx <t> [o] {s} p i mem)
for {
+ t := v.Type
x := v.Args[0]
if x.Op != OpS390XMOVBZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
+ p := x.Args[0]
+ i := x.Args[1]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, v.Type)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
v0.AddArg(mem)
return true
}
- // match: (MOVBreg x:(MOVBloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBloadidx <v.Type> [off] {sym} ptr idx mem)
+ // match: (MOVBreg x:(Arg <t>))
+ // cond: t.IsSigned() && t.Size() == 1
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBloadidx {
+ if x.Op != OpArg {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ t := x.Type
+ if !(t.IsSigned() && t.Size() == 1) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVBreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64( int8(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(int8(c))
return true
}
// match: (MOVBreg (ANDWconst [m] x))
@@ -14677,7 +14826,7 @@ func rewriteValueS390X_OpS390XMOVDaddridx_0(v *Value) bool {
func rewriteValueS390X_OpS390XMOVDload_0(v *Value) bool {
// match: (MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _))
// cond: isSamePtr(ptr1, ptr2)
- // result: (MOVDreg x)
+ // result: x
for {
off := v.AuxInt
sym := v.Aux
@@ -14699,7 +14848,8 @@ func rewriteValueS390X_OpS390XMOVDload_0(v *Value) bool {
if !(isSamePtr(ptr1, ptr2)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
@@ -14934,844 +15084,6 @@ func rewriteValueS390X_OpS390XMOVDloadidx_0(v *Value) bool {
}
return false
}
-func rewriteValueS390X_OpS390XMOVDnop_0(v *Value) bool {
- b := v.Block
- // match: (MOVDnop <t> x)
- // cond: t.Compare(x.Type) == types.CMPeq
- // result: x
- for {
- t := v.Type
- x := v.Args[0]
- if !(t.Compare(x.Type) == types.CMPeq) {
- break
- }
- v.reset(OpCopy)
- v.Type = x.Type
- v.AddArg(x)
- return true
- }
- // match: (MOVDnop (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [c])
- for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
- break
- }
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = c
- return true
- }
- // match: (MOVDnop <t> x:(MOVBZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVBload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVHZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVHload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVWZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVWload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop <t> x:(MOVDload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVDload <t> [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVDload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVDload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVBZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- return false
-}
-func rewriteValueS390X_OpS390XMOVDnop_10(v *Value) bool {
- b := v.Block
- // match: (MOVDnop x:(MOVBloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVHZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVHloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVWZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVWloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDnop x:(MOVDloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVDloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVDloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- return false
-}
-func rewriteValueS390X_OpS390XMOVDreg_0(v *Value) bool {
- b := v.Block
- // match: (MOVDreg x)
- // cond: t.Compare(x.Type) == types.CMPeq
- // result: x
- for {
- t := v.Type
- x := v.Args[0]
- if !(t.Compare(x.Type) == types.CMPeq) {
- break
- }
- v.reset(OpCopy)
- v.Type = x.Type
- v.AddArg(x)
- return true
- }
- // match: (MOVDreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [c])
- for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
- break
- }
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = c
- return true
- }
- // match: (MOVDreg x)
- // cond: x.Uses == 1
- // result: (MOVDnop x)
- for {
- x := v.Args[0]
- if !(x.Uses == 1) {
- break
- }
- v.reset(OpS390XMOVDnop)
- v.AddArg(x)
- return true
- }
- // match: (MOVDreg x:(MOVBZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVBload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVHZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVHload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVWZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWZload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVWload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVDload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVDload [off] {sym} ptr mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVDload {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVDload, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
- return true
- }
- return false
-}
-func rewriteValueS390X_OpS390XMOVDreg_10(v *Value) bool {
- b := v.Block
- // match: (MOVDreg x:(MOVBZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVBloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVBloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVBloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVHZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVHloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVHloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVWZloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWZloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVWloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVWloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- // match: (MOVDreg x:(MOVDloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVDloadidx [off] {sym} ptr idx mem)
- for {
- t := v.Type
- x := v.Args[0]
- if x.Op != OpS390XMOVDloadidx {
- break
- }
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
- break
- }
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, t)
- v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
- return true
- }
- return false
-}
func rewriteValueS390X_OpS390XMOVDstore_0(v *Value) bool {
// match: (MOVDstore [off1] {sym} (ADDconst [off2] ptr) val mem)
// cond: is20Bit(off1+off2)
@@ -17431,206 +16743,275 @@ func rewriteValueS390X_OpS390XMOVHZloadidx_0(v *Value) bool {
return false
}
func rewriteValueS390X_OpS390XMOVHZreg_0(v *Value) bool {
- b := v.Block
- // match: (MOVHZreg x:(MOVBZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHZreg e:(MOVBZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBZreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBZreg)
v.AddArg(x)
return true
}
- // match: (MOVHZreg x:(MOVHZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHZreg e:(MOVHreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHZreg)
v.AddArg(x)
return true
}
-	// match: (MOVHZreg x:(Arg <t>))
- // cond: (is8BitInt(t) || is16BitInt(t)) && !isSigned(t)
- // result: (MOVDreg x)
+ // match: (MOVHZreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHZreg e:(MOVHZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHZreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHZreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHZreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHZreg x:(MOVBZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpArg {
+ if x.Op != OpS390XMOVBZload {
break
}
- t := x.Type
- if !((is8BitInt(t) || is16BitInt(t)) && !isSigned(t)) {
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHZreg x:(MOVBZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZreg {
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHZreg x:(MOVHZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZreg {
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHZreg (MOVHreg x))
- // cond:
- // result: (MOVHZreg x)
+ // match: (MOVHZreg x:(MOVHZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVHreg {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZload {
break
}
- x := v_0.Args[0]
- v.reset(OpS390XMOVHZreg)
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHZreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(uint16(c))])
+ // match: (MOVHZreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(uint16(c))
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVHZreg x:(MOVHZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZload [off] {sym} ptr mem)
+ return false
+}
+func rewriteValueS390X_OpS390XMOVHZreg_10(v *Value) bool {
+ b := v.Block
+ typ := &b.Func.Config.Types
+ // match: (MOVHZreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVHZreg x:(MOVHload [off] {sym} ptr mem))
+ // match: (MOVHZreg x:(MOVHload [o] {s} p mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZload [off] {sym} ptr mem)
+ // result: @x.Block (MOVHZload [o] {s} p mem)
for {
+ t := v.Type
x := v.Args[0]
if x.Op != OpS390XMOVHload {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[1]
- ptr := x.Args[0]
+ p := x.Args[0]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, v.Type)
+ v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
v0.AddArg(mem)
return true
}
- // match: (MOVHZreg x:(MOVHZloadidx [off] {sym} ptr idx mem))
+ // match: (MOVHZreg x:(MOVHloadidx [o] {s} p i mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
+ // result: @x.Block (MOVHZloadidx [o] {s} p i mem)
for {
+ t := v.Type
x := v.Args[0]
- if x.Op != OpS390XMOVHZloadidx {
+ if x.Op != OpS390XMOVHloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
+ p := x.Args[0]
+ i := x.Args[1]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, v.Type)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
v0.AddArg(mem)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XMOVHZreg_10(v *Value) bool {
- b := v.Block
- typ := &b.Func.Config.Types
- // match: (MOVHZreg x:(MOVHloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem)
+	// match: (MOVHZreg x:(Arg <t>))
+ // cond: !t.IsSigned() && t.Size() <= 2
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHloadidx {
+ if x.Op != OpArg {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ t := x.Type
+ if !(!t.IsSigned() && t.Size() <= 2) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHZreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64(uint16(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(uint16(c))
return true
}
// match: (MOVHZreg (ANDWconst [m] x))
@@ -17885,147 +17266,169 @@ func rewriteValueS390X_OpS390XMOVHloadidx_0(v *Value) bool {
return false
}
func rewriteValueS390X_OpS390XMOVHreg_0(v *Value) bool {
- b := v.Block
- // match: (MOVHreg x:(MOVBload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg e:(MOVBreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBreg)
v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVBZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg e:(MOVHreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHreg)
v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVHload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVHload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHreg)
v.AddArg(x)
return true
}
-	// match: (MOVHreg x:(Arg <t>))
- // cond: (is8BitInt(t) || is16BitInt(t)) && isSigned(t)
- // result: (MOVDreg x)
+ // match: (MOVHreg e:(MOVHZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHreg x)
for {
- x := v.Args[0]
- if x.Op != OpArg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHZreg {
break
}
- t := x.Type
- if !((is8BitInt(t) || is16BitInt(t)) && isSigned(t)) {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVHreg)
v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVBreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBreg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVHreg)
v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVBZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg x:(MOVBload _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZreg {
+ if x.Op != OpS390XMOVBload {
break
}
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[1]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVHreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVHreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHreg {
+ if x.Op != OpS390XMOVBloadidx {
break
}
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHreg (MOVHZreg x))
- // cond:
- // result: (MOVHreg x)
+ // match: (MOVHreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVHZreg {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBloadidx {
break
}
- x := v_0.Args[0]
- v.reset(OpS390XMOVHreg)
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVHreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(int16(c))])
+ // match: (MOVHreg x:(MOVHload _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHload {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(int16(c))
+ _ = x.Args[1]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVHreg x:(MOVHZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHload [off] {sym} ptr mem)
+ // match: (MOVHreg x:(MOVHloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
+ if x.Op != OpS390XMOVHloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
return false
@@ -18033,83 +17436,156 @@ func rewriteValueS390X_OpS390XMOVHreg_0(v *Value) bool {
func rewriteValueS390X_OpS390XMOVHreg_10(v *Value) bool {
b := v.Block
typ := &b.Func.Config.Types
- // match: (MOVHreg x:(MOVHload [off] {sym} ptr mem))
+ // match: (MOVHreg x:(MOVHloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHreg x:(MOVBZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBZload {
+ break
+ }
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHreg x:(MOVHZload [o] {s} p mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHload [off] {sym} ptr mem)
+ // result: @x.Block (MOVHload [o] {s} p mem)
for {
+ t := v.Type
x := v.Args[0]
- if x.Op != OpS390XMOVHload {
+ if x.Op != OpS390XMOVHZload {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[1]
- ptr := x.Args[0]
+ p := x.Args[0]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVHload, v.Type)
+ v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
v0.AddArg(mem)
return true
}
- // match: (MOVHreg x:(MOVHZloadidx [off] {sym} ptr idx mem))
+ // match: (MOVHreg x:(MOVHZloadidx [o] {s} p i mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
+ // result: @x.Block (MOVHloadidx [o] {s} p i mem)
for {
+ t := v.Type
x := v.Args[0]
if x.Op != OpS390XMOVHZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
+ p := x.Args[0]
+ i := x.Args[1]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, v.Type)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
v0.AddArg(mem)
return true
}
- // match: (MOVHreg x:(MOVHloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem)
+	// match: (MOVHreg x:(Arg <t>))
+ // cond: t.IsSigned() && t.Size() <= 2
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHloadidx {
+ if x.Op != OpArg {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ t := x.Type
+ if !(t.IsSigned() && t.Size() <= 2) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVHreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64(int16(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(int16(c))
return true
}
// match: (MOVHreg (ANDWconst [m] x))
@@ -20264,230 +19740,309 @@ func rewriteValueS390X_OpS390XMOVWZloadidx_0(v *Value) bool {
return false
}
func rewriteValueS390X_OpS390XMOVWZreg_0(v *Value) bool {
- b := v.Block
- // match: (MOVWZreg x:(MOVBZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg e:(MOVBZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBZreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBZreg)
v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVHZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg e:(MOVHZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHZreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHZreg)
v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVWZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVWZreg x)
for {
- x := v.Args[0]
- if x.Op != OpS390XMOVWZload {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVWZreg)
v.AddArg(x)
return true
}
-	// match: (MOVWZreg x:(Arg <t>))
- // cond: (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t)
- // result: (MOVDreg x)
+ // match: (MOVWZreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVWZreg x)
for {
- x := v.Args[0]
- if x.Op != OpArg {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
break
}
- t := x.Type
- if !((is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t)) {
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpS390XMOVWZreg)
v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVBZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg x:(MOVBZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZreg {
+ if x.Op != OpS390XMOVBZload {
break
}
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVHZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZreg {
+ if x.Op != OpS390XMOVBZloadidx {
break
}
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVWZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWZreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWZreg {
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWZreg (MOVWreg x))
- // cond:
- // result: (MOVWZreg x)
+ // match: (MOVWZreg x:(MOVHZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVWreg {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZload {
break
}
- x := v_0.Args[0]
- v.reset(OpS390XMOVWZreg)
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWZreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(uint32(c))])
+ // match: (MOVWZreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(uint32(c))
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVWZreg x:(MOVWZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZload [off] {sym} ptr mem)
+ // match: (MOVWZreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWZload {
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
return false
}
func rewriteValueS390X_OpS390XMOVWZreg_10(v *Value) bool {
b := v.Block
- // match: (MOVWZreg x:(MOVWload [off] {sym} ptr mem))
+ // match: (MOVWZreg x:(MOVWZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 4)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVWZload {
+ break
+ }
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 4) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWZreg x:(MOVWZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 4)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVWZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 4) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWZreg x:(MOVWZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 4)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVWZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 4) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWZreg x:(MOVWload [o] {s} p mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZload [off] {sym} ptr mem)
+ // result: @x.Block (MOVWZload [o] {s} p mem)
for {
+ t := v.Type
x := v.Args[0]
if x.Op != OpS390XMOVWload {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[1]
- ptr := x.Args[0]
+ p := x.Args[0]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, v.Type)
+ v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
v0.AddArg(mem)
return true
}
- // match: (MOVWZreg x:(MOVWZloadidx [off] {sym} ptr idx mem))
+ // match: (MOVWZreg x:(MOVWloadidx [o] {s} p i mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
+ // result: @x.Block (MOVWZloadidx [o] {s} p i mem)
for {
+ t := v.Type
x := v.Args[0]
- if x.Op != OpS390XMOVWZloadidx {
+ if x.Op != OpS390XMOVWloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
+ p := x.Args[0]
+ i := x.Args[1]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, v.Type)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
v0.AddArg(mem)
return true
}
- // match: (MOVWZreg x:(MOVWloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem)
+ 	// match: (MOVWZreg x:(Arg <t>))
+ // cond: !t.IsSigned() && t.Size() <= 4
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWloadidx {
+ if x.Op != OpArg {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ t := x.Type
+ if !(!t.IsSigned() && t.Size() <= 4) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWZreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64(uint32(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(uint32(c))
return true
}
return false
@@ -20725,279 +20280,415 @@ func rewriteValueS390X_OpS390XMOVWloadidx_0(v *Value) bool {
return false
}
func rewriteValueS390X_OpS390XMOVWreg_0(v *Value) bool {
+ // match: (MOVWreg e:(MOVBreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVBreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVBreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVBreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWreg e:(MOVHreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVHreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVHreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVHreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWreg e:(MOVWreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVWreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVWreg)
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWreg e:(MOVWZreg x))
+ // cond: clobberIfDead(e)
+ // result: (MOVWreg x)
+ for {
+ e := v.Args[0]
+ if e.Op != OpS390XMOVWZreg {
+ break
+ }
+ x := e.Args[0]
+ if !(clobberIfDead(e)) {
+ break
+ }
+ v.reset(OpS390XMOVWreg)
+ v.AddArg(x)
+ return true
+ }
// match: (MOVWreg x:(MOVBload _ _))
- // cond:
- // result: (MOVDreg x)
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
if x.Op != OpS390XMOVBload {
break
}
_ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVBZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZload {
+ if x.Op != OpS390XMOVBloadidx {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWreg x:(MOVBloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
+ for {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVBloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
// match: (MOVWreg x:(MOVHload _ _))
- // cond:
- // result: (MOVDreg x)
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
if x.Op != OpS390XMOVHload {
break
}
_ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVHZload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVHloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZload {
+ if x.Op != OpS390XMOVHloadidx {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVWload _ _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVHloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWload {
+ if x.Op != OpS390XMOVHloadidx {
break
}
- _ = x.Args[1]
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- 	// match: (MOVWreg x:(Arg <t>))
- // cond: (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t)
- // result: (MOVDreg x)
+ return false
+}
+func rewriteValueS390X_OpS390XMOVWreg_10(v *Value) bool {
+ b := v.Block
+ // match: (MOVWreg x:(MOVWload _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpArg {
+ if x.Op != OpS390XMOVWload {
break
}
- t := x.Type
- if !((is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t)) {
+ _ = x.Args[1]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVBreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVWloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBreg {
+ if x.Op != OpS390XMOVWloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVBZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVWloadidx _ _ _))
+ // cond: (x.Type.IsSigned() || x.Type.Size() == 8)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVBZreg {
+ if x.Op != OpS390XMOVWloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(x.Type.IsSigned() || x.Type.Size() == 8) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVHreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVBZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHreg {
+ if x.Op != OpS390XMOVBZload {
break
}
- v.reset(OpS390XMOVDreg)
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVHZreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVHZreg {
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XMOVWreg_10(v *Value) bool {
- b := v.Block
- // match: (MOVWreg x:(MOVWreg _))
- // cond:
- // result: (MOVDreg x)
+ // match: (MOVWreg x:(MOVBZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 1)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWreg {
+ if x.Op != OpS390XMOVBZloadidx {
+ break
+ }
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 1) {
break
}
- v.reset(OpS390XMOVDreg)
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg (MOVWZreg x))
- // cond:
- // result: (MOVWreg x)
+ // match: (MOVWreg x:(MOVHZload _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVWZreg {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZload {
break
}
- x := v_0.Args[0]
- v.reset(OpS390XMOVWreg)
+ _ = x.Args[1]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
v.AddArg(x)
return true
}
- // match: (MOVWreg (MOVDconst [c]))
- // cond:
- // result: (MOVDconst [int64(int32(c))])
+ // match: (MOVWreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
- v_0 := v.Args[0]
- if v_0.Op != OpS390XMOVDconst {
+ x := v.Args[0]
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- c := v_0.AuxInt
- v.reset(OpS390XMOVDconst)
- v.AuxInt = int64(int32(c))
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
+ break
+ }
+ v.reset(OpCopy)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVWZload [off] {sym} ptr mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWload [off] {sym} ptr mem)
+ // match: (MOVWreg x:(MOVHZloadidx _ _ _))
+ // cond: (!x.Type.IsSigned() || x.Type.Size() > 2)
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWZload {
+ if x.Op != OpS390XMOVHZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[1]
- ptr := x.Args[0]
- if !(x.Uses == 1 && clobber(x)) {
+ _ = x.Args[2]
+ if !(!x.Type.IsSigned() || x.Type.Size() > 2) {
break
}
- b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWload, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
return true
}
- // match: (MOVWreg x:(MOVWload [off] {sym} ptr mem))
+ // match: (MOVWreg x:(MOVWZload [o] {s} p mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWload [off] {sym} ptr mem)
+ // result: @x.Block (MOVWload [o] {s} p mem)
for {
+ t := v.Type
x := v.Args[0]
- if x.Op != OpS390XMOVWload {
+ if x.Op != OpS390XMOVWZload {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[1]
- ptr := x.Args[0]
+ p := x.Args[0]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(x.Pos, OpS390XMOVWload, v.Type)
+ v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
v0.AddArg(mem)
return true
}
- // match: (MOVWreg x:(MOVWZloadidx [off] {sym} ptr idx mem))
+ return false
+}
+func rewriteValueS390X_OpS390XMOVWreg_20(v *Value) bool {
+ b := v.Block
+ // match: (MOVWreg x:(MOVWZloadidx [o] {s} p i mem))
// cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
+ // result: @x.Block (MOVWloadidx [o] {s} p i mem)
for {
+ t := v.Type
x := v.Args[0]
if x.Op != OpS390XMOVWZloadidx {
break
}
- off := x.AuxInt
- sym := x.Aux
+ o := x.AuxInt
+ s := x.Aux
mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
+ p := x.Args[0]
+ i := x.Args[1]
if !(x.Uses == 1 && clobber(x)) {
break
}
b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, v.Type)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t)
v.reset(OpCopy)
v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
+ v0.AuxInt = o
+ v0.Aux = s
+ v0.AddArg(p)
+ v0.AddArg(i)
v0.AddArg(mem)
return true
}
- // match: (MOVWreg x:(MOVWloadidx [off] {sym} ptr idx mem))
- // cond: x.Uses == 1 && clobber(x)
- // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem)
+ 	// match: (MOVWreg x:(Arg <t>))
+ // cond: t.IsSigned() && t.Size() <= 4
+ // result: x
for {
x := v.Args[0]
- if x.Op != OpS390XMOVWloadidx {
+ if x.Op != OpArg {
break
}
- off := x.AuxInt
- sym := x.Aux
- mem := x.Args[2]
- ptr := x.Args[0]
- idx := x.Args[1]
- if !(x.Uses == 1 && clobber(x)) {
+ t := x.Type
+ if !(t.IsSigned() && t.Size() <= 4) {
break
}
- b = x.Block
- v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, v.Type)
v.reset(OpCopy)
- v.AddArg(v0)
- v0.AuxInt = off
- v0.Aux = sym
- v0.AddArg(ptr)
- v0.AddArg(idx)
- v0.AddArg(mem)
+ v.Type = x.Type
+ v.AddArg(x)
+ return true
+ }
+ // match: (MOVWreg (MOVDconst [c]))
+ // cond:
+ // result: (MOVDconst [int64(int32(c))])
+ for {
+ v_0 := v.Args[0]
+ if v_0.Op != OpS390XMOVDconst {
+ break
+ }
+ c := v_0.AuxInt
+ v.reset(OpS390XMOVDconst)
+ v.AuxInt = int64(int32(c))
return true
}
return false
@@ -37836,22 +37527,6 @@ func rewriteValueS390X_OpS390XSLD_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SLD x (MOVDreg y))
- // cond:
- // result: (SLD x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSLD)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SLD x (MOVWreg y))
// cond:
// result: (SLD x y)
@@ -37932,9 +37607,6 @@ func rewriteValueS390X_OpS390XSLD_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSLD_10(v *Value) bool {
// match: (SLD x (MOVBZreg y))
// cond:
// result: (SLD x y)
@@ -38041,22 +37713,6 @@ func rewriteValueS390X_OpS390XSLW_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SLW x (MOVDreg y))
- // cond:
- // result: (SLW x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSLW)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SLW x (MOVWreg y))
// cond:
// result: (SLW x y)
@@ -38137,9 +37793,6 @@ func rewriteValueS390X_OpS390XSLW_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSLW_10(v *Value) bool {
// match: (SLW x (MOVBZreg y))
// cond:
// result: (SLW x y)
@@ -38246,22 +37899,6 @@ func rewriteValueS390X_OpS390XSRAD_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SRAD x (MOVDreg y))
- // cond:
- // result: (SRAD x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSRAD)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SRAD x (MOVWreg y))
// cond:
// result: (SRAD x y)
@@ -38342,9 +37979,6 @@ func rewriteValueS390X_OpS390XSRAD_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSRAD_10(v *Value) bool {
// match: (SRAD x (MOVBZreg y))
// cond:
// result: (SRAD x y)
@@ -38468,22 +38102,6 @@ func rewriteValueS390X_OpS390XSRAW_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SRAW x (MOVDreg y))
- // cond:
- // result: (SRAW x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSRAW)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SRAW x (MOVWreg y))
// cond:
// result: (SRAW x y)
@@ -38564,9 +38182,6 @@ func rewriteValueS390X_OpS390XSRAW_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSRAW_10(v *Value) bool {
// match: (SRAW x (MOVBZreg y))
// cond:
// result: (SRAW x y)
@@ -38690,22 +38305,6 @@ func rewriteValueS390X_OpS390XSRD_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SRD x (MOVDreg y))
- // cond:
- // result: (SRD x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSRD)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SRD x (MOVWreg y))
// cond:
// result: (SRD x y)
@@ -38786,9 +38385,6 @@ func rewriteValueS390X_OpS390XSRD_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSRD_10(v *Value) bool {
// match: (SRD x (MOVBZreg y))
// cond:
// result: (SRD x y)
@@ -38926,22 +38522,6 @@ func rewriteValueS390X_OpS390XSRW_0(v *Value) bool {
v.AddArg(y)
return true
}
- // match: (SRW x (MOVDreg y))
- // cond:
- // result: (SRW x y)
- for {
- _ = v.Args[1]
- x := v.Args[0]
- v_1 := v.Args[1]
- if v_1.Op != OpS390XMOVDreg {
- break
- }
- y := v_1.Args[0]
- v.reset(OpS390XSRW)
- v.AddArg(x)
- v.AddArg(y)
- return true
- }
// match: (SRW x (MOVWreg y))
// cond:
// result: (SRW x y)
@@ -39022,9 +38602,6 @@ func rewriteValueS390X_OpS390XSRW_0(v *Value) bool {
v.AddArg(y)
return true
}
- return false
-}
-func rewriteValueS390X_OpS390XSRW_10(v *Value) bool {
// match: (SRW x (MOVBZreg y))
// cond:
// result: (SRW x y)
diff --git a/src/cmd/compile/internal/ssa/rewriteWasm.go b/src/cmd/compile/internal/ssa/rewriteWasm.go
index 45b855027d..c9384af16a 100644
--- a/src/cmd/compile/internal/ssa/rewriteWasm.go
+++ b/src/cmd/compile/internal/ssa/rewriteWasm.go
@@ -2742,6 +2742,44 @@ func rewriteValueWasm_OpLsh64x64_0(v *Value) bool {
v.AddArg(y)
return true
}
+ // match: (Lsh64x64 x (I64Const [c]))
+ // cond: uint64(c) < 64
+ // result: (I64Shl x (I64Const [c]))
+ for {
+ _ = v.Args[1]
+ x := v.Args[0]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) < 64) {
+ break
+ }
+ v.reset(OpWasmI64Shl)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Lsh64x64 x (I64Const [c]))
+ // cond: uint64(c) >= 64
+ // result: (I64Const [0])
+ for {
+ _ = v.Args[1]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) >= 64) {
+ break
+ }
+ v.reset(OpWasmI64Const)
+ v.AuxInt = 0
+ return true
+ }
// match: (Lsh64x64 x y)
// cond:
// result: (Select (I64Shl x y) (I64Const [0]) (I64LtU y (I64Const [64])))
@@ -4264,6 +4302,44 @@ func rewriteValueWasm_OpRsh64Ux64_0(v *Value) bool {
v.AddArg(y)
return true
}
+ // match: (Rsh64Ux64 x (I64Const [c]))
+ // cond: uint64(c) < 64
+ // result: (I64ShrU x (I64Const [c]))
+ for {
+ _ = v.Args[1]
+ x := v.Args[0]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) < 64) {
+ break
+ }
+ v.reset(OpWasmI64ShrU)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64Ux64 x (I64Const [c]))
+ // cond: uint64(c) >= 64
+ // result: (I64Const [0])
+ for {
+ _ = v.Args[1]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) >= 64) {
+ break
+ }
+ v.reset(OpWasmI64Const)
+ v.AuxInt = 0
+ return true
+ }
// match: (Rsh64Ux64 x y)
// cond:
// result: (Select (I64ShrU x y) (I64Const [0]) (I64LtU y (I64Const [64])))
@@ -4355,6 +4431,48 @@ func rewriteValueWasm_OpRsh64x64_0(v *Value) bool {
v.AddArg(y)
return true
}
+ // match: (Rsh64x64 x (I64Const [c]))
+ // cond: uint64(c) < 64
+ // result: (I64ShrS x (I64Const [c]))
+ for {
+ _ = v.Args[1]
+ x := v.Args[0]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) < 64) {
+ break
+ }
+ v.reset(OpWasmI64ShrS)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64)
+ v0.AuxInt = c
+ v.AddArg(v0)
+ return true
+ }
+ // match: (Rsh64x64 x (I64Const [c]))
+ // cond: uint64(c) >= 64
+ // result: (I64ShrS x (I64Const [63]))
+ for {
+ _ = v.Args[1]
+ x := v.Args[0]
+ v_1 := v.Args[1]
+ if v_1.Op != OpWasmI64Const {
+ break
+ }
+ c := v_1.AuxInt
+ if !(uint64(c) >= 64) {
+ break
+ }
+ v.reset(OpWasmI64ShrS)
+ v.AddArg(x)
+ v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64)
+ v0.AuxInt = 63
+ v.AddArg(v0)
+ return true
+ }
// match: (Rsh64x64 x y)
// cond:
// result: (I64ShrS x (Select y (I64Const [63]) (I64LtU y (I64Const [64]))))
diff --git a/src/cmd/compile/internal/ssa/sparsetree.go b/src/cmd/compile/internal/ssa/sparsetree.go
index 546da8348d..fe96912c00 100644
--- a/src/cmd/compile/internal/ssa/sparsetree.go
+++ b/src/cmd/compile/internal/ssa/sparsetree.go
@@ -223,7 +223,7 @@ func (t SparseTree) domorder(x *Block) int32 {
// entry(x) < entry(y) allows cases x-dom-y and x-then-y.
// But by supposition, x does not dominate y. So we have x-then-y.
//
- // For contractidion, assume x dominates z.
+ // For contradiction, assume x dominates z.
// Then entry(x) < entry(z) < exit(z) < exit(x).
// But we know x-then-y, so entry(x) < exit(x) < entry(y) < exit(y).
// Combining those, entry(x) < entry(z) < exit(z) < exit(x) < entry(y) < exit(y).
diff --git a/src/cmd/compile/internal/syntax/scanner.go b/src/cmd/compile/internal/syntax/scanner.go
index 30ee6c0e5f..fef87171bc 100644
--- a/src/cmd/compile/internal/syntax/scanner.go
+++ b/src/cmd/compile/internal/syntax/scanner.go
@@ -35,8 +35,8 @@ type scanner struct {
// current token, valid after calling next()
line, col uint
tok token
- lit string // valid if tok is _Name, _Literal, or _Semi ("semicolon", "newline", or "EOF")
- bad bool // valid if tok is _Literal, true if a syntax error occurred, lit may be incorrect
+ lit string // valid if tok is _Name, _Literal, or _Semi ("semicolon", "newline", or "EOF"); may be malformed if bad is true
+ bad bool // valid if tok is _Literal, true if a syntax error occurred, lit may be malformed
kind LitKind // valid if tok is _Literal
op Operator // valid if tok is _Operator, _AssignOp, or _IncOp
prec int // valid if tok is _Operator, _AssignOp, or _IncOp
@@ -50,8 +50,6 @@ func (s *scanner) init(src io.Reader, errh func(line, col uint, msg string), mod
// errorf reports an error at the most recently read character position.
func (s *scanner) errorf(format string, args ...interface{}) {
- // TODO(gri) Consider using s.bad to consistently suppress multiple errors
- // per token, here and below.
s.bad = true
s.error(fmt.Sprintf(format, args...))
}
@@ -495,17 +493,19 @@ func (s *scanner) number(c rune) {
digsep |= ds
}
- if digsep&1 == 0 {
+ if digsep&1 == 0 && !s.bad {
s.errorf("%s has no digits", litname(prefix))
}
// exponent
if e := lower(c); e == 'e' || e == 'p' {
- switch {
- case e == 'e' && prefix != 0 && prefix != '0':
- s.errorf("%q exponent requires decimal mantissa", c)
- case e == 'p' && prefix != 'x':
- s.errorf("%q exponent requires hexadecimal mantissa", c)
+ if !s.bad {
+ switch {
+ case e == 'e' && prefix != 0 && prefix != '0':
+ s.errorf("%q exponent requires decimal mantissa", c)
+ case e == 'p' && prefix != 'x':
+ s.errorf("%q exponent requires hexadecimal mantissa", c)
+ }
}
c = s.getr()
s.kind = FloatLit
@@ -514,10 +514,10 @@ func (s *scanner) number(c rune) {
}
c, ds = s.digits(c, 10, nil)
digsep |= ds
- if ds&1 == 0 {
+ if ds&1 == 0 && !s.bad {
s.errorf("exponent has no digits")
}
- } else if prefix == 'x' && s.kind == FloatLit {
+ } else if prefix == 'x' && s.kind == FloatLit && !s.bad {
s.errorf("hexadecimal mantissa requires a 'p' exponent")
}
@@ -532,11 +532,11 @@ func (s *scanner) number(c rune) {
s.lit = string(s.stopLit())
s.tok = _Literal
- if s.kind == IntLit && invalid >= 0 {
+ if s.kind == IntLit && invalid >= 0 && !s.bad {
s.errorAtf(invalid, "invalid digit %q in %s", s.lit[invalid], litname(prefix))
}
- if digsep&2 != 0 {
+ if digsep&2 != 0 && !s.bad {
if i := invalidSep(s.lit); i >= 0 {
s.errorAtf(i, "'_' must separate successive digits")
}
diff --git a/src/cmd/compile/internal/syntax/scanner_test.go b/src/cmd/compile/internal/syntax/scanner_test.go
index 3030bfd4c0..717deb9073 100644
--- a/src/cmd/compile/internal/syntax/scanner_test.go
+++ b/src/cmd/compile/internal/syntax/scanner_test.go
@@ -652,3 +652,25 @@ func TestIssue21938(t *testing.T) {
t.Errorf("got %s %q; want %s %q", got.tok, got.lit, _Literal, ".5")
}
}
+
+func TestIssue33961(t *testing.T) {
+ literals := `08__ 0b.p 0b_._p 0x.e 0x.p`
+ for _, lit := range strings.Split(literals, " ") {
+ n := 0
+ var got scanner
+ got.init(strings.NewReader(lit), func(_, _ uint, msg string) {
+ // fmt.Printf("%s: %s\n", lit, msg) // uncomment for debugging
+ n++
+ }, 0)
+ got.next()
+
+ if n != 1 {
+ t.Errorf("%q: got %d errors; want 1", lit, n)
+ continue
+ }
+
+ if !got.bad {
+ t.Errorf("%q: got error but bad not set", lit)
+ }
+ }
+}
diff --git a/src/cmd/compile/internal/types/etype_string.go b/src/cmd/compile/internal/types/etype_string.go
index f234a31fd0..0ff05a8c2a 100644
--- a/src/cmd/compile/internal/types/etype_string.go
+++ b/src/cmd/compile/internal/types/etype_string.go
@@ -4,9 +4,52 @@ package types
import "strconv"
-const _EType_name = "xxxINT8UINT8INT16UINT16INT32UINT32INT64UINT64INTUINTUINTPTRCOMPLEX64COMPLEX128FLOAT32FLOAT64BOOLPTRFUNCSLICEARRAYSTRUCTCHANMAPINTERFORWANYSTRINGUNSAFEPTRIDEALNILBLANKFUNCARGSCHANARGSDDDFIELDSSATUPLENTYPE"
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
+ _ = x[Txxx-0]
+ _ = x[TINT8-1]
+ _ = x[TUINT8-2]
+ _ = x[TINT16-3]
+ _ = x[TUINT16-4]
+ _ = x[TINT32-5]
+ _ = x[TUINT32-6]
+ _ = x[TINT64-7]
+ _ = x[TUINT64-8]
+ _ = x[TINT-9]
+ _ = x[TUINT-10]
+ _ = x[TUINTPTR-11]
+ _ = x[TCOMPLEX64-12]
+ _ = x[TCOMPLEX128-13]
+ _ = x[TFLOAT32-14]
+ _ = x[TFLOAT64-15]
+ _ = x[TBOOL-16]
+ _ = x[TPTR-17]
+ _ = x[TFUNC-18]
+ _ = x[TSLICE-19]
+ _ = x[TARRAY-20]
+ _ = x[TSTRUCT-21]
+ _ = x[TCHAN-22]
+ _ = x[TMAP-23]
+ _ = x[TINTER-24]
+ _ = x[TFORW-25]
+ _ = x[TANY-26]
+ _ = x[TSTRING-27]
+ _ = x[TUNSAFEPTR-28]
+ _ = x[TIDEAL-29]
+ _ = x[TNIL-30]
+ _ = x[TBLANK-31]
+ _ = x[TFUNCARGS-32]
+ _ = x[TCHANARGS-33]
+ _ = x[TSSA-34]
+ _ = x[TTUPLE-35]
+ _ = x[NTYPE-36]
+}
+
+const _EType_name = "xxxINT8UINT8INT16UINT16INT32UINT32INT64UINT64INTUINTUINTPTRCOMPLEX64COMPLEX128FLOAT32FLOAT64BOOLPTRFUNCSLICEARRAYSTRUCTCHANMAPINTERFORWANYSTRINGUNSAFEPTRIDEALNILBLANKFUNCARGSCHANARGSSSATUPLENTYPE"
-var _EType_index = [...]uint8{0, 3, 7, 12, 17, 23, 28, 34, 39, 45, 48, 52, 59, 68, 78, 85, 92, 96, 99, 103, 108, 113, 119, 123, 126, 131, 135, 138, 144, 153, 158, 161, 166, 174, 182, 190, 193, 198, 203}
+var _EType_index = [...]uint8{0, 3, 7, 12, 17, 23, 28, 34, 39, 45, 48, 52, 59, 68, 78, 85, 92, 96, 99, 103, 108, 113, 119, 123, 126, 131, 135, 138, 144, 153, 158, 161, 166, 174, 182, 185, 190, 195}
func (i EType) String() string {
if i >= EType(len(_EType_index)-1) {
diff --git a/src/cmd/compile/internal/types/identity.go b/src/cmd/compile/internal/types/identity.go
index 7c14a03ba1..a77f514df9 100644
--- a/src/cmd/compile/internal/types/identity.go
+++ b/src/cmd/compile/internal/types/identity.go
@@ -53,6 +53,13 @@ func identical(t1, t2 *Type, cmpTags bool, assumedEqual map[typePair]struct{}) b
assumedEqual[typePair{t1, t2}] = struct{}{}
switch t1.Etype {
+ case TIDEAL:
+ // Historically, cmd/compile used a single "untyped
+ // number" type, so all untyped number types were
+ // identical. Match this behavior.
+ // TODO(mdempsky): Revisit this.
+ return true
+
case TINTER:
if t1.NumFields() != t2.NumFields() {
return false
diff --git a/src/cmd/compile/internal/types/sizeof_test.go b/src/cmd/compile/internal/types/sizeof_test.go
index 2633ef2ddd..09b852f343 100644
--- a/src/cmd/compile/internal/types/sizeof_test.go
+++ b/src/cmd/compile/internal/types/sizeof_test.go
@@ -31,7 +31,6 @@ func TestSizeof(t *testing.T) {
{Interface{}, 8, 16},
{Chan{}, 8, 16},
{Array{}, 12, 16},
- {DDDField{}, 4, 8},
{FuncArgs{}, 4, 8},
{ChanArgs{}, 4, 8},
{Ptr{}, 4, 8},
diff --git a/src/cmd/compile/internal/types/type.go b/src/cmd/compile/internal/types/type.go
index 2fcd6057f3..e61a5573dd 100644
--- a/src/cmd/compile/internal/types/type.go
+++ b/src/cmd/compile/internal/types/type.go
@@ -57,7 +57,7 @@ const (
TUNSAFEPTR
// pseudo-types for literals
- TIDEAL
+ TIDEAL // untyped numeric constants
TNIL
TBLANK
@@ -65,9 +65,6 @@ const (
TFUNCARGS
TCHANARGS
- // pseudo-types for import/export
- TDDDFIELD // wrapper: contained type is a ... field
-
// SSA backend types
TSSA // internal types used by SSA backend (flags, memory, etc.)
TTUPLE // a pair of types, used by SSA backend
@@ -94,7 +91,6 @@ const (
// It also stores pointers to several special types:
// - Types[TANY] is the placeholder "any" type recognized by substArgTypes.
// - Types[TBLANK] represents the blank variable's type.
-// - Types[TIDEAL] represents untyped numeric constants.
// - Types[TNIL] represents the predeclared "nil" value's type.
// - Types[TUNSAFEPTR] is package unsafe's Pointer type.
var Types [NTYPE]*Type
@@ -112,8 +108,6 @@ var (
Idealbool *Type
// Types to represent untyped numeric constants.
- // Note: Currently these are only used within the binary export
- // data format. The rest of the compiler only uses Types[TIDEAL].
Idealint = New(TIDEAL)
Idealrune = New(TIDEAL)
Idealfloat = New(TIDEAL)
@@ -131,7 +125,6 @@ type Type struct {
// TFUNC: *Func
// TSTRUCT: *Struct
// TINTER: *Interface
- // TDDDFIELD: DDDField
// TFUNCARGS: FuncArgs
// TCHANARGS: ChanArgs
// TCHAN: *Chan
@@ -308,11 +301,6 @@ type Ptr struct {
Elem *Type // element type
}
-// DDDField contains Type fields specific to TDDDFIELD types.
-type DDDField struct {
- T *Type // reference to a slice type for ... args
-}
-
// ChanArgs contains Type fields specific to TCHANARGS types.
type ChanArgs struct {
T *Type // reference to a chan type whose elements need a width check
@@ -473,8 +461,6 @@ func New(et EType) *Type {
t.Extra = ChanArgs{}
case TFUNCARGS:
t.Extra = FuncArgs{}
- case TDDDFIELD:
- t.Extra = DDDField{}
case TCHAN:
t.Extra = new(Chan)
case TTUPLE:
@@ -576,13 +562,6 @@ func NewPtr(elem *Type) *Type {
return t
}
-// NewDDDField returns a new TDDDFIELD type for slice type s.
-func NewDDDField(s *Type) *Type {
- t := New(TDDDFIELD)
- t.Extra = DDDField{T: s}
- return t
-}
-
// NewChanArgs returns a new TCHANARGS type for channel type c.
func NewChanArgs(c *Type) *Type {
t := New(TCHANARGS)
@@ -802,12 +781,6 @@ func (t *Type) Elem() *Type {
return nil
}
-// DDDField returns the slice ... type for TDDDFIELD type t.
-func (t *Type) DDDField() *Type {
- t.wantEtype(TDDDFIELD)
- return t.Extra.(DDDField).T
-}
-
// ChanArgs returns the channel type for TCHANARGS type t.
func (t *Type) ChanArgs() *Type {
t.wantEtype(TCHANARGS)
diff --git a/src/cmd/go/alldocs.go b/src/cmd/go/alldocs.go
index ebbead5d31..36fa528a90 100644
--- a/src/cmd/go/alldocs.go
+++ b/src/cmd/go/alldocs.go
@@ -110,11 +110,13 @@
// The default is the number of CPUs available.
// -race
// enable data race detection.
-// Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64.
+// Supported only on linux/amd64, freebsd/amd64, darwin/amd64, windows/amd64,
+// linux/ppc64le and linux/arm64 (only for 48-bit VMA).
// -msan
// enable interoperation with memory sanitizer.
// Supported only on linux/amd64, linux/arm64
// and only with Clang/LLVM as the host C compiler.
+// On linux/arm64, pie build mode will be used.
// -v
// print the names of packages as they are compiled.
// -work
@@ -2060,8 +2062,8 @@
//
// The GET requests sent to a Go module proxy are:
//
-// GET $GOPROXY/<module>/@v/list returns a list of all known versions of the
-// given module, one per line.
+// GET $GOPROXY/<module>/@v/list returns a list of known versions of the given
+// module, one per line.
//
// GET $GOPROXY/<module>/@v/<version>.info returns JSON-formatted metadata
// about that version of the given module.
@@ -2072,6 +2074,21 @@
// GET $GOPROXY/<module>/@v/<version>.zip returns the zip archive
// for that version of the given module.
//
+// GET $GOPROXY/<module>/@latest returns JSON-formatted metadata about the
+// latest known version of the given module in the same format as
+// <module>/@v/<version>.info. The latest version should be the version of
+// the module the go command may use if <module>/@v/list is empty or no
+// listed version is suitable. <module>/@latest is optional and may not
+// be implemented by a module proxy.
+//
+// When resolving the latest version of a module, the go command will request
+// <module>/@v/list, then, if no suitable versions are found, <module>/@latest.
+// The go command prefers, in order: the semantically highest release version,
+// the semantically highest pre-release version, and the chronologically
+// most recent pseudo-version. In Go 1.12 and earlier, the go command considered
+// pseudo-versions in <module>/@v/list to be pre-release versions, but this is
+// no longer true since Go 1.13.
+//
// To avoid problems when serving from case-sensitive file systems,
// the <module> and <version> elements are case-encoded, replacing every
// uppercase letter with an exclamation mark followed by the corresponding
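The case-encoding rule above (each uppercase ASCII letter becomes `!` followed by its lowercase form) can be sketched in a few lines. This is an illustrative version only; the real implementation lives in `module.EscapePath` in golang.org/x/mod, which additionally validates the path:

```go
package main

import (
	"fmt"
	"strings"
)

// caseEncode applies the module-path case-encoding described above:
// every uppercase ASCII letter is replaced by '!' plus its lowercase
// equivalent, so the encoded path is safe on case-insensitive file systems.
func caseEncode(path string) string {
	var b strings.Builder
	for _, r := range path {
		if 'A' <= r && r <= 'Z' {
			b.WriteByte('!')
			b.WriteRune(r - 'A' + 'a')
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(caseEncode("github.com/Azure/azure-sdk-for-go"))
	// github.com/!azure/azure-sdk-for-go
}
```

A path with no uppercase letters encodes to itself, so the transformation is a no-op for the common all-lowercase case.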
diff --git a/src/cmd/go/go_test.go b/src/cmd/go/go_test.go
index f6caa01fd2..71a2e01fa3 100644
--- a/src/cmd/go/go_test.go
+++ b/src/cmd/go/go_test.go
@@ -206,7 +206,7 @@ func TestMain(m *testing.M) {
// (installed in GOROOT/pkg/tool/GOOS_GOARCH).
// If these are not the same toolchain, then the entire standard library
// will look out of date (the compilers in those two different tool directories
- // are built for different architectures and have different buid IDs),
+ // are built for different architectures and have different build IDs),
// which will cause many tests to do unnecessary rebuilds and some
// tests to attempt to overwrite the installed standard library.
// Bail out entirely in this case.
diff --git a/src/cmd/go/internal/cache/cache.go b/src/cmd/go/internal/cache/cache.go
index a05a08f75f..8797398765 100644
--- a/src/cmd/go/internal/cache/cache.go
+++ b/src/cmd/go/internal/cache/cache.go
@@ -74,7 +74,22 @@ func (c *Cache) fileName(id [HashSize]byte, key string) string {
return filepath.Join(c.dir, fmt.Sprintf("%02x", id[0]), fmt.Sprintf("%x", id)+"-"+key)
}
-var errMissing = errors.New("cache entry not found")
+// An entryNotFoundError indicates that a cache entry was not found, with an
+// optional underlying reason.
+type entryNotFoundError struct {
+ Err error
+}
+
+func (e *entryNotFoundError) Error() string {
+ if e.Err == nil {
+ return "cache entry not found"
+ }
+ return fmt.Sprintf("cache entry not found: %v", e.Err)
+}
+
+func (e *entryNotFoundError) Unwrap() error {
+ return e.Err
+}
const (
// action entry file is "v1 <hex id> <hex out> <decimal size space-padded to 20 bytes> <unixnano space-padded to 20 bytes>\n"
@@ -93,6 +108,8 @@ const (
// GODEBUG=gocacheverify=1.
var verify = false
+var errVerifyMode = errors.New("gocacheverify=1")
+
// DebugTest is set when GODEBUG=gocachetest=1 is in the environment.
var DebugTest = false
@@ -121,7 +138,7 @@ func initEnv() {
// saved file for that output ID is still available.
func (c *Cache) Get(id ActionID) (Entry, error) {
if verify {
- return Entry{}, errMissing
+ return Entry{}, &entryNotFoundError{Err: errVerifyMode}
}
return c.get(id)
}
@@ -134,47 +151,60 @@ type Entry struct {
// get is Get but does not respect verify mode, so that Put can use it.
func (c *Cache) get(id ActionID) (Entry, error) {
- missing := func() (Entry, error) {
- return Entry{}, errMissing
+ missing := func(reason error) (Entry, error) {
+ return Entry{}, &entryNotFoundError{Err: reason}
}
f, err := os.Open(c.fileName(id, "a"))
if err != nil {
- return missing()
+ return missing(err)
}
defer f.Close()
entry := make([]byte, entrySize+1) // +1 to detect whether f is too long
- if n, err := io.ReadFull(f, entry); n != entrySize || err != io.ErrUnexpectedEOF {
- return missing()
+ if n, err := io.ReadFull(f, entry); n > entrySize {
+ return missing(errors.New("too long"))
+ } else if err != io.ErrUnexpectedEOF {
+ if err == io.EOF {
+ return missing(errors.New("file is empty"))
+ }
+ return missing(err)
+ } else if n < entrySize {
+ return missing(errors.New("entry file incomplete"))
}
if entry[0] != 'v' || entry[1] != '1' || entry[2] != ' ' || entry[3+hexSize] != ' ' || entry[3+hexSize+1+hexSize] != ' ' || entry[3+hexSize+1+hexSize+1+20] != ' ' || entry[entrySize-1] != '\n' {
- return missing()
+ return missing(errors.New("invalid header"))
}
eid, entry := entry[3:3+hexSize], entry[3+hexSize:]
eout, entry := entry[1:1+hexSize], entry[1+hexSize:]
esize, entry := entry[1:1+20], entry[1+20:]
etime, entry := entry[1:1+20], entry[1+20:]
var buf [HashSize]byte
- if _, err := hex.Decode(buf[:], eid); err != nil || buf != id {
- return missing()
+ if _, err := hex.Decode(buf[:], eid); err != nil {
+ return missing(fmt.Errorf("decoding ID: %v", err))
+ } else if buf != id {
+ return missing(errors.New("mismatched ID"))
}
if _, err := hex.Decode(buf[:], eout); err != nil {
- return missing()
+ return missing(fmt.Errorf("decoding output ID: %v", err))
}
i := 0
for i < len(esize) && esize[i] == ' ' {
i++
}
size, err := strconv.ParseInt(string(esize[i:]), 10, 64)
- if err != nil || size < 0 {
- return missing()
+ if err != nil {
+ return missing(fmt.Errorf("parsing size: %v", err))
+ } else if size < 0 {
+ return missing(errors.New("negative size"))
}
i = 0
for i < len(etime) && etime[i] == ' ' {
i++
}
tm, err := strconv.ParseInt(string(etime[i:]), 10, 64)
- if err != nil || tm < 0 {
- return missing()
+ if err != nil {
+ return missing(fmt.Errorf("parsing timestamp: %v", err))
+ } else if tm < 0 {
+ return missing(errors.New("negative timestamp"))
}
c.used(c.fileName(id, "a"))
@@ -191,8 +221,11 @@ func (c *Cache) GetFile(id ActionID) (file string, entry Entry, err error) {
}
file = c.OutputFile(entry.OutputID)
info, err := os.Stat(file)
- if err != nil || info.Size() != entry.Size {
- return "", Entry{}, errMissing
+ if err != nil {
+ return "", Entry{}, &entryNotFoundError{Err: err}
+ }
+ if info.Size() != entry.Size {
+ return "", Entry{}, &entryNotFoundError{Err: errors.New("file incomplete")}
}
return file, entry, nil
}
@@ -207,7 +240,7 @@ func (c *Cache) GetBytes(id ActionID) ([]byte, Entry, error) {
}
data, _ := ioutil.ReadFile(c.OutputFile(entry.OutputID))
if sha256.Sum256(data) != entry.OutputID {
- return nil, entry, errMissing
+ return nil, entry, &entryNotFoundError{Err: errors.New("bad checksum")}
}
return data, entry, nil
}
diff --git a/src/cmd/go/internal/get/discovery.go b/src/cmd/go/internal/get/discovery.go
index 6ba5c091e3..afa6ef455f 100644
--- a/src/cmd/go/internal/get/discovery.go
+++ b/src/cmd/go/internal/get/discovery.go
@@ -11,15 +11,15 @@ import (
"strings"
)
-// charsetReader returns a reader for the given charset. Currently
-// it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful
+// charsetReader returns a reader that converts from the given charset to UTF-8.
+// Currently it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful
// error which is printed by go get, so the user can find why the package
// wasn't downloaded if the encoding is not supported. Note that, in
// order to reduce potential errors, ASCII is treated as UTF-8 (i.e. characters
// greater than 0x7f are not rejected).
func charsetReader(charset string, input io.Reader) (io.Reader, error) {
switch strings.ToLower(charset) {
- case "ascii":
+ case "utf-8", "ascii":
return input, nil
default:
return nil, fmt.Errorf("can't decode XML document using charset %q", charset)
@@ -28,16 +28,16 @@ func charsetReader(charset string, input io.Reader) (io.Reader, error) {
// parseMetaGoImports returns meta imports from the HTML in r.
// Parsing ends at the end of the <head> section or the beginning of the <body>.
-func parseMetaGoImports(r io.Reader, mod ModuleMode) (imports []metaImport, err error) {
+func parseMetaGoImports(r io.Reader, mod ModuleMode) ([]metaImport, error) {
d := xml.NewDecoder(r)
d.CharsetReader = charsetReader
d.Strict = false
- var t xml.Token
+ var imports []metaImport
for {
- t, err = d.RawToken()
+ t, err := d.RawToken()
if err != nil {
- if err == io.EOF || len(imports) > 0 {
- err = nil
+ if err != io.EOF && len(imports) == 0 {
+ return nil, err
}
break
}
diff --git a/src/cmd/go/internal/get/vcs.go b/src/cmd/go/internal/get/vcs.go
index 6ae3cffd93..3ccfbb8837 100644
--- a/src/cmd/go/internal/get/vcs.go
+++ b/src/cmd/go/internal/get/vcs.go
@@ -661,7 +661,7 @@ func RepoRootForImportPath(importPath string, mod ModuleMode, security web.Secur
if err == errUnknownSite {
rr, err = repoRootForImportDynamic(importPath, mod, security)
if err != nil {
- err = fmt.Errorf("unrecognized import path %q (%v)", importPath, err)
+ err = fmt.Errorf("unrecognized import path %q: %v", importPath, err)
}
}
if err != nil {
@@ -799,6 +799,13 @@ func repoRootForImportDynamic(importPath string, mod ModuleMode, security web.Se
body := resp.Body
defer body.Close()
imports, err := parseMetaGoImports(body, mod)
+ if len(imports) == 0 {
+ if respErr := resp.Err(); respErr != nil {
+ // If the server's status was not OK, prefer to report that instead of
+ // an XML parse error.
+ return nil, respErr
+ }
+ }
if err != nil {
return nil, fmt.Errorf("parsing %s: %v", importPath, err)
}
@@ -909,6 +916,13 @@ func metaImportsForPrefix(importPrefix string, mod ModuleMode, security web.Secu
body := resp.Body
defer body.Close()
imports, err := parseMetaGoImports(body, mod)
+ if len(imports) == 0 {
+ if respErr := resp.Err(); respErr != nil {
+ // If the server's status was not OK, prefer to report that instead of
+ // an XML parse error.
+ return setCache(fetchResult{url: url, err: respErr})
+ }
+ }
if err != nil {
return setCache(fetchResult{url: url, err: fmt.Errorf("parsing %s: %v", resp.URL, err)})
}
@@ -962,7 +976,7 @@ func (m ImportMismatchError) Error() string {
// matchGoImport returns the metaImport from imports matching importPath.
// An error is returned if there are multiple matches.
-// errNoMatch is returned if none match.
+// An ImportMismatchError is returned if none match.
func matchGoImport(imports []metaImport, importPath string) (metaImport, error) {
match := -1
diff --git a/src/cmd/go/internal/load/test.go b/src/cmd/go/internal/load/test.go
index afff5deaaa..2864fb5ebb 100644
--- a/src/cmd/go/internal/load/test.go
+++ b/src/cmd/go/internal/load/test.go
@@ -399,10 +399,13 @@ func recompileForTest(pmain, preal, ptest, pxtest *Package) {
}
}
- // Don't compile build info from a main package. This can happen
- // if -coverpkg patterns include main packages, since those packages
- // are imported by pmain. See golang.org/issue/30907.
- if p.Internal.BuildInfo != "" && p != pmain {
+ // Force main packages the test imports to be built as libraries.
+ // Normal imports of main packages are forbidden by the package loader,
+ // but this can still happen if -coverpkg patterns include main packages:
+ // covered packages are imported by pmain. Linking multiple packages
+ // compiled with '-p main' causes duplicate symbol errors.
+ // See golang.org/issue/30907, golang.org/issue/34114.
+ if p.Name == "main" && p != pmain {
split()
}
}
diff --git a/src/cmd/go/internal/modfetch/codehost/git.go b/src/cmd/go/internal/modfetch/codehost/git.go
index d382e8ac9a..df895ec91b 100644
--- a/src/cmd/go/internal/modfetch/codehost/git.go
+++ b/src/cmd/go/internal/modfetch/codehost/git.go
@@ -6,9 +6,11 @@ package codehost
import (
"bytes"
+ "errors"
"fmt"
"io"
"io/ioutil"
+ "net/url"
"os"
"os/exec"
"path/filepath"
@@ -21,6 +23,7 @@ import (
"cmd/go/internal/lockedfile"
"cmd/go/internal/par"
"cmd/go/internal/semver"
+ "cmd/go/internal/web"
)
// GitRepo returns the code repository at the given Git remote reference.
@@ -34,6 +37,15 @@ func LocalGitRepo(remote string) (Repo, error) {
return newGitRepoCached(remote, true)
}
+// A notExistError wraps another error to retain its original text
+// but makes it opaquely equivalent to os.ErrNotExist.
+type notExistError struct {
+ err error
+}
+
+func (e notExistError) Error() string { return e.err.Error() }
+func (notExistError) Is(err error) bool { return err == os.ErrNotExist }
+
const gitWorkDirType = "git3"
var gitRepoCache par.Cache
@@ -85,8 +97,9 @@ func newGitRepo(remote string, localOK bool) (Repo, error) {
os.RemoveAll(r.dir)
return nil, err
}
- r.remote = "origin"
}
+ r.remoteURL = r.remote
+ r.remote = "origin"
} else {
// Local path.
// Disallow colon (not in ://) because sometimes
@@ -113,9 +126,9 @@ func newGitRepo(remote string, localOK bool) (Repo, error) {
}
type gitRepo struct {
- remote string
- local bool
- dir string
+ remote, remoteURL string
+ local bool
+ dir string
mu lockedfile.Mutex // protects fetchLevel and git repo state
@@ -166,14 +179,25 @@ func (r *gitRepo) loadRefs() {
// The git protocol sends all known refs and ls-remote filters them on the client side,
// so we might as well record both heads and tags in one shot.
// Most of the time we only care about tags but sometimes we care about heads too.
- out, err := Run(r.dir, "git", "ls-remote", "-q", r.remote)
- if err != nil {
- if rerr, ok := err.(*RunError); ok {
+ out, gitErr := Run(r.dir, "git", "ls-remote", "-q", r.remote)
+ if gitErr != nil {
+ if rerr, ok := gitErr.(*RunError); ok {
if bytes.Contains(rerr.Stderr, []byte("fatal: could not read Username")) {
rerr.HelpText = "Confirm the import path was entered correctly.\nIf this is a private repository, see https://golang.org/doc/faq#git_https for additional information."
}
}
- r.refsErr = err
+
+ // If the remote URL doesn't exist at all, ideally we should treat the whole
+ // repository as nonexistent by wrapping the error in a notExistError.
+ // For HTTP and HTTPS, that's easy to detect: we'll try to fetch the URL
+ // ourselves and see what code it serves.
+ if u, err := url.Parse(r.remoteURL); err == nil && (u.Scheme == "http" || u.Scheme == "https") {
+ if _, err := web.GetBytes(u); errors.Is(err, os.ErrNotExist) {
+ gitErr = notExistError{gitErr}
+ }
+ }
+
+ r.refsErr = gitErr
return
}
diff --git a/src/cmd/go/internal/modfetch/coderepo.go b/src/cmd/go/internal/modfetch/coderepo.go
index f15ce67d46..541d856b28 100644
--- a/src/cmd/go/internal/modfetch/coderepo.go
+++ b/src/cmd/go/internal/modfetch/coderepo.go
@@ -140,7 +140,10 @@ func (r *codeRepo) Versions(prefix string) ([]string, error) {
}
tags, err := r.code.Tags(p)
if err != nil {
- return nil, err
+ return nil, &module.ModuleError{
+ Path: r.modPath,
+ Err: err,
+ }
}
list := []string{}
@@ -171,7 +174,10 @@ func (r *codeRepo) Versions(prefix string) ([]string, error) {
// by referring to them with a +incompatible suffix, as in v17.0.0+incompatible.
files, err := r.code.ReadFileRevs(incompatible, "go.mod", codehost.MaxGoMod)
if err != nil {
- return nil, err
+ return nil, &module.ModuleError{
+ Path: r.modPath,
+ Err: err,
+ }
}
for _, rev := range incompatible {
f := files[rev]
@@ -632,7 +638,7 @@ func (r *codeRepo) findDir(version string) (rev, dir string, gomod []byte, err e
// the real module, found at a different path, usable only in
// a replace directive.
//
- // TODO(bcmills): This doesn't seem right. Investigate futher.
+ // TODO(bcmills): This doesn't seem right. Investigate further.
// (Notably: why can't we replace foo/v2 with fork-of-foo/v3?)
dir2 := path.Join(r.codeDir, r.pathMajor[1:])
file2 = path.Join(dir2, "go.mod")
diff --git a/src/cmd/go/internal/modfetch/fetch.go b/src/cmd/go/internal/modfetch/fetch.go
index 51a56028c4..2eead5f746 100644
--- a/src/cmd/go/internal/modfetch/fetch.go
+++ b/src/cmd/go/internal/modfetch/fetch.go
@@ -7,6 +7,7 @@ package modfetch
import (
"archive/zip"
"bytes"
+ "errors"
"fmt"
"io"
"io/ioutil"
@@ -361,19 +362,19 @@ func checkMod(mod module.Version) {
// Do the file I/O before acquiring the go.sum lock.
ziphash, err := CachePath(mod, "ziphash")
if err != nil {
- base.Fatalf("verifying %s@%s: %v", mod.Path, mod.Version, err)
+ base.Fatalf("verifying %v", module.VersionError(mod, err))
}
data, err := renameio.ReadFile(ziphash)
if err != nil {
- if os.IsNotExist(err) {
+ if errors.Is(err, os.ErrNotExist) {
// This can happen if someone does rm -rf GOPATH/src/cache/download. So it goes.
return
}
- base.Fatalf("verifying %s@%s: %v", mod.Path, mod.Version, err)
+ base.Fatalf("verifying %v", module.VersionError(mod, err))
}
h := strings.TrimSpace(string(data))
if !strings.HasPrefix(h, "h1:") {
- base.Fatalf("verifying %s@%s: unexpected ziphash: %q", mod.Path, mod.Version, h)
+ base.Fatalf("verifying %v", module.VersionError(mod, fmt.Errorf("unexpected ziphash: %q", h)))
}
checkModSum(mod, h)
diff --git a/src/cmd/go/internal/modfetch/proxy.go b/src/cmd/go/internal/modfetch/proxy.go
index 569ef3a57a..8f75ad92a8 100644
--- a/src/cmd/go/internal/modfetch/proxy.go
+++ b/src/cmd/go/internal/modfetch/proxy.go
@@ -38,8 +38,8 @@ can be a module proxy.
The GET requests sent to a Go module proxy are:
-GET $GOPROXY/<module>/@v/list returns a list of all known versions of the
-given module, one per line.
+GET $GOPROXY/<module>/@v/list returns a list of known versions of the given
+module, one per line.
GET $GOPROXY/<module>/@v/<version>.info returns JSON-formatted metadata
about that version of the given module.
@@ -50,6 +50,21 @@ for that version of the given module.
GET $GOPROXY/<module>/@v/<version>.zip returns the zip archive
for that version of the given module.
+GET $GOPROXY/<module>/@latest returns JSON-formatted metadata about the
+latest known version of the given module in the same format as
+<module>/@v/<version>.info. The latest version should be the version of
+the module the go command may use if <module>/@v/list is empty or no
+listed version is suitable.