diff --git a/AUTHORS b/AUTHORS index b2a6b3fc45..6d2ae6378b 100644 --- a/AUTHORS +++ b/AUTHORS @@ -47,7 +47,7 @@ Ahmy Yulrizka Aiden Scandella Ainar Garipov Aishraj Dahal -Akhil Indurti +Akhil Indurti Akihiro Suda Akshat Kumar Alan Shreve diff --git a/CONTRIBUTORS b/CONTRIBUTORS index 12e3b16f67..2ba9916b8b 100644 --- a/CONTRIBUTORS +++ b/CONTRIBUTORS @@ -67,7 +67,7 @@ Ahsun Ahmed Aiden Scandella Ainar Garipov Aishraj Dahal -Akhil Indurti +Akhil Indurti Akihiro Suda Akshat Kumar Al Cutter diff --git a/doc/gccgo_install.html b/doc/gccgo_install.html index a974bb3680..5b026ba57e 100644 --- a/doc/gccgo_install.html +++ b/doc/gccgo_install.html @@ -5,8 +5,8 @@

This document explains how to use gccgo, a compiler for -the Go language. The gccgo compiler is a new frontend -for GCC, the widely used GNU compiler. Although the +the Go language. The gccgo compiler is a new frontend +for GCC, the widely used GNU compiler. Although the frontend itself is under a BSD-style license, gccgo is normally used as part of GCC and is then covered by the GNU General Public @@ -24,10 +24,10 @@

Releases

The simplest way to install gccgo is to install a GCC binary release -built to include Go support. GCC binary releases are available from +built to include Go support. GCC binary releases are available from various websites and are typically included as part of GNU/Linux -distributions. We expect that most people who build these binaries +distributions. We expect that most people who build these binaries will include Go support.

@@ -38,7 +38,7 @@

Releases

Due to timing, the GCC 4.8.0 and 4.8.1 releases are close to but not -identical to Go 1.1. The GCC 4.8.2 release includes a complete Go +identical to Go 1.1. The GCC 4.8.2 release includes a complete Go 1.1.2 implementation.

@@ -48,28 +48,32 @@

Releases

The GCC 5 releases include a complete implementation of the Go 1.4 -user libraries. The Go 1.4 runtime is not fully merged, but that +user libraries. The Go 1.4 runtime is not fully merged, but that should not be visible to Go programs.

The GCC 6 releases include a complete implementation of the Go 1.6.1 -user libraries. The Go 1.6 runtime is not fully merged, but that +user libraries. The Go 1.6 runtime is not fully merged, but that should not be visible to Go programs.

The GCC 7 releases include a complete implementation of the Go 1.8.1 -user libraries. As with earlier releases, the Go 1.8 runtime is not +user libraries. As with earlier releases, the Go 1.8 runtime is not fully merged, but that should not be visible to Go programs.

-The GCC 8 releases are expected to include a complete implementation -of the Go 1.10 release, depending on release timing. The Go 1.10 -runtime has now been fully merged into the GCC development sources, -and concurrent garbage collection is expected to be fully supported in -GCC 8. +The GCC 8 releases include a complete implementation of the Go 1.10.1 +release. The Go 1.10 runtime has now been fully merged into the GCC +development sources, and concurrent garbage collection is fully +supported. +

+ +

+The GCC 9 releases include a complete implementation of the Go 1.12.2 +release.

Source code

@@ -77,10 +81,10 @@

Source code

If you cannot use a release, or prefer to build gccgo for yourself, -the gccgo source code is accessible via Subversion. The +the gccgo source code is accessible via Subversion. The GCC web site has instructions for getting the -GCC source code. The gccgo source code is included. As a +GCC source code. The gccgo source code is included. As a convenience, a stable version of the Go support is available in a branch of the main GCC code repository: svn://gcc.gnu.org/svn/gcc/branches/gccgo. @@ -90,7 +94,7 @@

Source code

Note that although gcc.gnu.org is the most convenient way to get the source code for the Go frontend, it is not where the master -sources live. If you want to contribute changes to the Go frontend +sources live. If you want to contribute changes to the Go frontend compiler, see Contributing to gccgo.

@@ -100,16 +104,16 @@

Building

Building gccgo is just like building GCC -with one or two additional options. See +with one or two additional options. See the instructions on the gcc web -site. When you run configure, add the +site. When you run configure, add the option --enable-languages=c,c++,go (along with other -languages you may want to build). If you are targeting a 32-bit x86, +languages you may want to build). If you are targeting a 32-bit x86, then you will want to build gccgo to default to supporting locked compare and exchange instructions; do this by also using the configure option --with-arch=i586 (or a newer architecture, depending on where you need your programs to -run). If you are targeting a 64-bit x86, but sometimes want to use +run). If you are targeting a 64-bit x86, but sometimes want to use the -m32 option, then use the configure option --with-arch-32=i586.

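The configure options described in the paragraph above can be combined into a single invocation. The following is a hedged sketch only: the build directory, install prefix, and `-j4` are assumptions, not part of the document; `--with-arch-32=i586` is the 64-bit-host variant the text mentions.

```shell
# Hypothetical out-of-tree gccgo build, per the paragraph above.
mkdir build && cd build
../gcc/configure --prefix=/opt/gccgo \
    --enable-languages=c,c++,go \
    --with-arch-32=i586
make -j4
make install
```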
@@ -118,18 +122,18 @@

Gold

On x86 GNU/Linux systems the gccgo compiler is able to -use a small discontiguous stack for goroutines. This permits programs +use a small discontiguous stack for goroutines. This permits programs to run many more goroutines, since each goroutine can use a relatively -small stack. Doing this requires using the gold linker version 2.22 -or later. You can either install GNU binutils 2.22 or later, or you +small stack. Doing this requires using the gold linker version 2.22 +or later. You can either install GNU binutils 2.22 or later, or you can build gold yourself.

To build gold yourself, build the GNU binutils, using --enable-gold=default when you run -the configure script. Before building, you must install -the flex and bison packages. A typical sequence would look like +the configure script. Before building, you must install +the flex and bison packages. A typical sequence would look like this (you can replace /opt/gold with any directory to which you have write access):

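The "typical sequence" for building gold that the paragraph refers to might look like the following sketch. The binutils version and directory layout are assumptions; `/opt/gold` and `--enable-gold=default` come from the surrounding text.

```shell
# Hedged sketch: build GNU binutils with gold as the default linker.
# Requires the flex and bison packages, as noted above.
cd binutils-2.22        # any 2.22-or-later release
mkdir build && cd build
../configure --enable-gold=default --prefix=/opt/gold
make
make install
```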
@@ -157,7 +161,7 @@

Prerequisites

A number of prerequisites are required to build GCC, as described on the gcc web -site. It is important to install all the prerequisites before +site. It is important to install all the prerequisites before running the gcc configure script. The prerequisite libraries can be conveniently downloaded using the script contrib/download_prerequisites in the GCC sources. @@ -183,7 +187,7 @@

Build commands

Using gccgo

-The gccgo compiler works like other gcc frontends. As of GCC 5 the gccgo +The gccgo compiler works like other gcc frontends. As of GCC 5 the gccgo installation also includes a version of the go command, which may be used to build Go programs as described at https://golang.org/cmd/go. @@ -208,7 +212,7 @@

Using gccgo

To run the resulting file, you will need to tell the program where to -find the compiled Go packages. There are a few ways to do this: +find the compiled Go packages. There are a few ways to do this:

    @@ -226,11 +230,11 @@

    Using gccgo

    Here ${prefix} is the --prefix option used -when building gccgo. For a binary install this is -normally /usr. Whether to use lib +when building gccgo. For a binary install this is +normally /usr. Whether to use lib or lib64 depends on the target. Typically lib64 is correct for x86_64 systems, -and lib is correct for other systems. The idea is to +and lib is correct for other systems. The idea is to name the directory where libgo.so is found.

    @@ -325,9 +329,9 @@

    Imports

    The gccgo compiler will look in the current -directory for import files. In more complex scenarios you +directory for import files. In more complex scenarios you may pass the -I or -L option to -gccgo. Both options take directories to search. The +gccgo. Both options take directories to search. The -L option is also passed to the linker.

    @@ -348,11 +352,11 @@

    Debugging

    If you use the -g option when you compile, you can run -gdb on your executable. The debugger has only limited -knowledge about Go. You can set breakpoints, single-step, -etc. You can print variables, but they will be printed as though they -had C/C++ types. For numeric types this doesn't matter. Go strings -and interfaces will show up as two-element structures. Go +gdb on your executable. The debugger has only limited +knowledge about Go. You can set breakpoints, single-step, +etc. You can print variables, but they will be printed as though they +had C/C++ types. For numeric types this doesn't matter. Go strings +and interfaces will show up as two-element structures. Go maps and channels are always represented as C pointers to run-time structures.

    @@ -399,7 +403,7 @@

    Types

    -A slice in Go is a structure. The current definition is +A slice in Go is a structure. The current definition is (this is subject to change):

    @@ -413,15 +417,15 @@

    Types

    The type of a Go function is a pointer to a struct (this is -subject to change). The first field in the +subject to change). The first field in the struct points to the code of the function, which will be equivalent to a pointer to a C function whose parameter types are equivalent, with -an additional trailing parameter. The trailing parameter is the +an additional trailing parameter. The trailing parameter is the closure, and the argument to pass is a pointer to the Go function struct. When a Go function returns more than one value, the C function returns -a struct. For example, these functions are roughly equivalent: +a struct. For example, these functions are roughly equivalent:

    @@ -458,7 +462,7 @@ 

    Function names

    Go code can call C functions directly using a Go extension implemented in gccgo: a function declaration may be preceded by -//extern NAME. For example, here is how the C function +//extern NAME. For example, here is how the C function open can be declared in Go:

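Based on the description above, such a declaration might look like the sketch below. The Go-side parameter and result types are an assumption (they must match the C prototype), and the extension compiles only under gccgo, so this is an illustration rather than a runnable example:

```go
//extern open
func c_open(name *byte, mode int32, perm int32) int32
```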
    @@ -518,11 +522,11 @@

    Compile your C code as usual, and add the option --fdump-go-spec=FILENAME. This will create the +-fdump-go-spec=FILENAME. This will create the file FILENAME as a side effect of the -compilation. This file will contain Go declarations for the types, -variables and functions declared in the C code. C types that can not -be represented in Go will be recorded as comments in the Go code. The +compilation. This file will contain Go declarations for the types, +variables and functions declared in the C code. C types that can not +be represented in Go will be recorded as comments in the Go code. The generated file will not have a package declaration, but can otherwise be compiled directly by gccgo.

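For instance, the option might be used as in this sketch; the file names are illustrative:

```shell
# Hedged sketch: compile sysinfo.c and, as a side effect, emit Go
# declarations for its types, variables and functions into gen-sysinfo.go.
gcc -c -fdump-go-spec=gen-sysinfo.go sysinfo.c
# The generated file has no package clause; prepend one before building:
#   (echo "package sysinfo"; cat gen-sysinfo.go) > sysinfo.go
#   gccgo -c sysinfo.go
```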
    diff --git a/doc/go1.14.html b/doc/go1.14.html new file mode 100644 index 0000000000..525a1421f7 --- /dev/null +++ b/doc/go1.14.html @@ -0,0 +1,140 @@ + + + + + + +

    DRAFT RELEASE NOTES — Introduction to Go 1.14

    + +

    + + Go 1.14 is not yet released. These are work-in-progress + release notes. Go 1.14 is expected to be released in February 2020. + +

    + +

    Changes to the language

    + +

    +TODO +

    + +

    Ports

    + +

    +TODO +

    + +

    Native Client (NaCl)

    + +

    + As announced in the Go 1.13 release notes, + Go 1.14 drops support for the Native Client platform (GOOS=nacl). +

    + +

    Tools

    + +

    +TODO +

    + +

    Go command

    + +

    + The go command now includes snippets of plain-text error messages + from module proxies and other HTTP servers. + An error message will only be shown if it is valid UTF-8 and consists of only + graphic characters and spaces. +

    + +

    Runtime

    + +

    +TODO +

    + + +

    Core library

    + +

    +TODO +

    + +
    bytes/hash
    +
    +

    + TODO: https://golang.org/cl/186877: add hashing package for bytes and strings +

    + +
    + +
    crypto/tls
    +
    +

    + TODO: https://golang.org/cl/191976: remove SSLv3 support +

    + +

    + TODO: https://golang.org/cl/191999: remove TLS 1.3 opt-out +

    + +
    + +
    encoding/asn1
    +
    +

    + TODO: https://golang.org/cl/126624: handle ASN1's string type BMPString +

    + +
    + +
    mime
    +
    +

    + TODO: https://golang.org/cl/186927: update type of .js and .mjs files to text/javascript +

    + +
    + +
    plugin
    +
    +

    + TODO: https://golang.org/cl/191617: add freebsd/amd64 plugin support +

    + +
    + +
    runtime
    +
    +

    + TODO: https://golang.org/cl/187739: treat CTRL_CLOSE_EVENT, CTRL_LOGOFF_EVENT, CTRL_SHUTDOWN_EVENT as SIGTERM on Windows +

    + +

    + TODO: https://golang.org/cl/188297: don't forward SIGPIPE on macOS +

    + +
    + +

    Minor changes to the library

    + +

    + As always, there are various minor changes and updates to the library, + made with the Go 1 promise of compatibility + in mind. +

    + +

    +TODO +

    diff --git a/src/cmd/asm/internal/arch/arch.go b/src/cmd/asm/internal/arch/arch.go index f6c6c88f64..5d1f9a5326 100644 --- a/src/cmd/asm/internal/arch/arch.go +++ b/src/cmd/asm/internal/arch/arch.go @@ -88,6 +88,14 @@ func jumpX86(word string) bool { return word[0] == 'J' || word == "CALL" || strings.HasPrefix(word, "LOOP") || word == "XBEGIN" } +func jumpRISCV(word string) bool { + switch word { + case "BEQ", "BNE", "BLT", "BGE", "BLTU", "BGEU", "CALL", "JAL", "JALR", "JMP": + return true + } + return false +} + func jumpWasm(word string) bool { return word == "JMP" || word == "CALL" || word == "Call" || word == "Br" || word == "BrIf" } @@ -519,34 +527,114 @@ func archMips64(linkArch *obj.LinkArch) *Arch { } } -var riscvJumps = map[string]bool{ - "BEQ": true, - "BNE": true, - "BLT": true, - "BGE": true, - "BLTU": true, - "BGEU": true, - "CALL": true, - "JAL": true, - "JALR": true, - "JMP": true, -} - func archRISCV64() *Arch { + register := make(map[string]int16) + + // Standard register names. + for i := riscv.REG_X0; i <= riscv.REG_X31; i++ { + name := fmt.Sprintf("X%d", i-riscv.REG_X0) + register[name] = int16(i) + } + for i := riscv.REG_F0; i <= riscv.REG_F31; i++ { + name := fmt.Sprintf("F%d", i-riscv.REG_F0) + register[name] = int16(i) + } + + // General registers with ABI names. 
+ register["ZERO"] = riscv.REG_ZERO + register["RA"] = riscv.REG_RA + register["SP"] = riscv.REG_SP + register["GP"] = riscv.REG_GP + register["TP"] = riscv.REG_TP + register["T0"] = riscv.REG_T0 + register["T1"] = riscv.REG_T1 + register["T2"] = riscv.REG_T2 + register["S0"] = riscv.REG_S0 + register["S1"] = riscv.REG_S1 + register["A0"] = riscv.REG_A0 + register["A1"] = riscv.REG_A1 + register["A2"] = riscv.REG_A2 + register["A3"] = riscv.REG_A3 + register["A4"] = riscv.REG_A4 + register["A5"] = riscv.REG_A5 + register["A6"] = riscv.REG_A6 + register["A7"] = riscv.REG_A7 + register["S2"] = riscv.REG_S2 + register["S3"] = riscv.REG_S3 + register["S4"] = riscv.REG_S4 + register["S5"] = riscv.REG_S5 + register["S6"] = riscv.REG_S6 + register["S7"] = riscv.REG_S7 + register["S8"] = riscv.REG_S8 + register["S9"] = riscv.REG_S9 + register["S10"] = riscv.REG_S10 + register["S11"] = riscv.REG_S11 + register["T3"] = riscv.REG_T3 + register["T4"] = riscv.REG_T4 + register["T5"] = riscv.REG_T5 + register["T6"] = riscv.REG_T6 + + // Go runtime register names. + register["g"] = riscv.REG_G + register["CTXT"] = riscv.REG_CTXT + register["TMP"] = riscv.REG_TMP + + // ABI names for floating point register. 
+ register["FT0"] = riscv.REG_FT0 + register["FT1"] = riscv.REG_FT1 + register["FT2"] = riscv.REG_FT2 + register["FT3"] = riscv.REG_FT3 + register["FT4"] = riscv.REG_FT4 + register["FT5"] = riscv.REG_FT5 + register["FT6"] = riscv.REG_FT6 + register["FT7"] = riscv.REG_FT7 + register["FS0"] = riscv.REG_FS0 + register["FS1"] = riscv.REG_FS1 + register["FA0"] = riscv.REG_FA0 + register["FA1"] = riscv.REG_FA1 + register["FA2"] = riscv.REG_FA2 + register["FA3"] = riscv.REG_FA3 + register["FA4"] = riscv.REG_FA4 + register["FA5"] = riscv.REG_FA5 + register["FA6"] = riscv.REG_FA6 + register["FA7"] = riscv.REG_FA7 + register["FS2"] = riscv.REG_FS2 + register["FS3"] = riscv.REG_FS3 + register["FS4"] = riscv.REG_FS4 + register["FS5"] = riscv.REG_FS5 + register["FS6"] = riscv.REG_FS6 + register["FS7"] = riscv.REG_FS7 + register["FS8"] = riscv.REG_FS8 + register["FS9"] = riscv.REG_FS9 + register["FS10"] = riscv.REG_FS10 + register["FS11"] = riscv.REG_FS11 + register["FT8"] = riscv.REG_FT8 + register["FT9"] = riscv.REG_FT9 + register["FT10"] = riscv.REG_FT10 + register["FT11"] = riscv.REG_FT11 + // Pseudo-registers. 
- riscv.Registers["SB"] = RSB - riscv.Registers["FP"] = RFP - riscv.Registers["PC"] = RPC + register["SB"] = RSB + register["FP"] = RFP + register["PC"] = RPC + + instructions := make(map[string]obj.As) + for i, s := range obj.Anames { + instructions[s] = obj.As(i) + } + for i, s := range riscv.Anames { + if obj.As(i) >= obj.A_ARCHSPECIFIC { + instructions[s] = obj.As(i) + obj.ABaseRISCV + } + } return &Arch{ LinkArch: &riscv.LinkRISCV64, - Instructions: riscv.Instructions, - Register: riscv.Registers, + Instructions: instructions, + Register: register, RegisterPrefix: nil, RegisterNumber: nilRegisterNumber, - IsJump: func(s string) bool { - return riscvJumps[s] - }, + IsJump: jumpRISCV, } } diff --git a/src/cmd/asm/internal/asm/testdata/riscvenc.s b/src/cmd/asm/internal/asm/testdata/riscvenc.s index 629ca3c67c..76b3df8cfc 100644 --- a/src/cmd/asm/internal/asm/testdata/riscvenc.s +++ b/src/cmd/asm/internal/asm/testdata/riscvenc.s @@ -6,6 +6,10 @@ TEXT asmtest(SB),DUPOK|NOSPLIT,$0 start: + // Arbitrary bytes (entered in little-endian mode) + WORD $0x12345678 // WORD $305419896 // 78563412 + WORD $0x9abcdef0 // WORD $2596069104 // f0debc9a + ADD T1, T0, T2 // b3836200 ADD T0, T1 // 33035300 ADD $2047, T0, T1 // 1383f27f @@ -117,11 +121,6 @@ start: SEQZ A5, A5 // 93b71700 SNEZ A5, A5 // b337f000 - // Arbitrary bytes (entered in little-endian mode) - WORD $0x12345678 // WORD $305419896 // 78563412 - WORD $0x9abcdef0 // WORD $2596069104 // f0debc9a - - // M extension MUL T0, T1, T2 // b3035302 MULH T0, T1, T2 // b3135302 diff --git a/src/cmd/asm/internal/asm/testdata/s390x.s b/src/cmd/asm/internal/asm/testdata/s390x.s index c9f4d69736..62563d885e 100644 --- a/src/cmd/asm/internal/asm/testdata/s390x.s +++ b/src/cmd/asm/internal/asm/testdata/s390x.s @@ -33,12 +33,12 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16- MOVWBR (R15), R9 // e390f000001e MOVD R1, n-8(SP) // e310f0100024 - MOVW R2, n-8(SP) // e320f0100050 - MOVH R3, n-8(SP) // 
e330f0100070 - MOVB R4, n-8(SP) // e340f0100072 - MOVWZ R5, n-8(SP) // e350f0100050 - MOVHZ R6, n-8(SP) // e360f0100070 - MOVBZ R7, n-8(SP) // e370f0100072 + MOVW R2, n-8(SP) // 5020f010 + MOVH R3, n-8(SP) // 4030f010 + MOVB R4, n-8(SP) // 4240f010 + MOVWZ R5, n-8(SP) // 5050f010 + MOVHZ R6, n-8(SP) // 4060f010 + MOVBZ R7, n-8(SP) // 4270f010 MOVDBR R8, n-8(SP) // e380f010002f MOVWBR R9, n-8(SP) // e390f010003e @@ -58,6 +58,20 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16- MOVB $-128, -524288(R5) // eb8050008052 MOVB $1, -524289(R6) // c0a1fff7ffff41aa60009201a000 + // RX (12-bit displacement) and RXY (20-bit displacement) instruction encoding (e.g: ST vs STY) + MOVW R1, 4095(R2)(R3) // 50132fff + MOVW R1, 4096(R2)(R3) // e31320000150 + MOVWZ R1, 4095(R2)(R3) // 50132fff + MOVWZ R1, 4096(R2)(R3) // e31320000150 + MOVH R1, 4095(R2)(R3) // 40132fff + MOVHZ R1, 4095(R2)(R3) // 40132fff + MOVH R1, 4096(R2)(R3) // e31320000170 + MOVHZ R1, 4096(R2)(R3) // e31320000170 + MOVB R1, 4095(R2)(R3) // 42132fff + MOVBZ R1, 4095(R2)(R3) // 42132fff + MOVB R1, 4096(R2)(R3) // e31320000172 + MOVBZ R1, 4096(R2)(R3) // e31320000172 + ADD R1, R2 // b9e81022 ADD R1, R2, R3 // b9e81032 ADD $8192, R1 // a71b2000 @@ -95,6 +109,7 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16- MULHD R7, R2, R1 // b90400b2b98600a7ebb7003f000ab98000b2b90900abebb2003f000ab98000b7b9e9b01a MULHDU R3, R4 // b90400b4b98600a3b904004a MULHDU R5, R6, R7 // b90400b6b98600a5b904007a + MLGR R1, R2 // b9860021 DIVD R1, R2 // b90400b2b90d00a1b904002b DIVD R1, R2, R3 // b90400b2b90d00a1b904003b DIVW R4, R5 // b90400b5b91d00a4b904005b @@ -300,10 +315,22 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16- FMOVS $0, F11 // b37400b0 FMOVD $0, F12 // b37500c0 - FMOVS (R1)(R2*1), F0 // ed0210000064 - FMOVS n-8(SP), F15 // edf0f0100064 - FMOVD -9999999(R8)(R9*1), F8 // c0a1ff67698141aa9000ed8a80000065 + FMOVS (R1)(R2*1), F0 // 
78021000 + FMOVS n-8(SP), F15 // 78f0f010 + FMOVD -9999999(R8)(R9*1), F8 // c0a1ff67698141aa9000688a8000 FMOVD F4, F5 // 2854 + + // RX (12-bit displacement) and RXY (20-bit displacement) instruction encoding (e.g. LD vs LDY) + FMOVD (R1), F0 // 68001000 + FMOVD 4095(R2), F13 // 68d02fff + FMOVD 4096(R2), F15 // edf020000165 + FMOVS 4095(R2)(R3), F13 // 78d32fff + FMOVS 4096(R2)(R4), F15 // edf420000164 + FMOVD F0, 4095(R1) // 60001fff + FMOVD F0, 4096(R1) // ed0010000167 + FMOVS F13, 4095(R2)(R3) // 70d32fff + FMOVS F13, 4096(R2)(R3) // edd320000166 + FADDS F0, F15 // b30a00f0 FADD F1, F14 // b31a00e1 FSUBS F2, F13 // b30b00d2 @@ -329,6 +356,9 @@ TEXT main·foo(SB),DUPOK|NOSPLIT,$16-0 // TEXT main.foo(SB), DUPOK|NOSPLIT, $16- TCEB F5, $8 // ed5000080010 TCDB F15, $4095 // edf00fff0011 + UNDEF // 00000000 + NOPH // 0700 + VL (R15), V1 // e710f0000006 VST V1, (R15) // e710f000000e VL (R15), V31 // e7f0f0000806 diff --git a/src/cmd/compile/internal/gc/bexport.go b/src/cmd/compile/internal/gc/bexport.go index 7c09ab5a34..e67506f4e1 100644 --- a/src/cmd/compile/internal/gc/bexport.go +++ b/src/cmd/compile/internal/gc/bexport.go @@ -43,11 +43,14 @@ func (p *exporter) markType(t *types.Type) { // the user already needs some way to construct values of // those types. switch t.Etype { - case TPTR, TARRAY, TSLICE, TCHAN: - // TODO(mdempsky): Skip marking element type for - // send-only channels? + case TPTR, TARRAY, TSLICE: p.markType(t.Elem()) + case TCHAN: + if t.ChanDir().CanRecv() { + p.markType(t.Elem()) + } + case TMAP: p.markType(t.Key()) p.markType(t.Elem()) diff --git a/src/cmd/compile/internal/gc/bimport.go b/src/cmd/compile/internal/gc/bimport.go index 7aabae764e..911ac4c0dc 100644 --- a/src/cmd/compile/internal/gc/bimport.go +++ b/src/cmd/compile/internal/gc/bimport.go @@ -5,7 +5,6 @@ package gc import ( - "cmd/compile/internal/types" "cmd/internal/src" ) @@ -15,15 +14,6 @@ import ( // the same name appears in an error message. 
var numImport = make(map[string]int) -func idealType(typ *types.Type) *types.Type { - switch typ { - case types.Idealint, types.Idealrune, types.Idealfloat, types.Idealcomplex: - // canonicalize ideal types - typ = types.Types[TIDEAL] - } - return typ -} - func npos(pos src.XPos, n *Node) *Node { n.Pos = pos return n diff --git a/src/cmd/compile/internal/gc/closure.go b/src/cmd/compile/internal/gc/closure.go index fb04924121..21b3461b47 100644 --- a/src/cmd/compile/internal/gc/closure.go +++ b/src/cmd/compile/internal/gc/closure.go @@ -73,6 +73,12 @@ func (p *noder) funcLit(expr *syntax.FuncLit) *Node { func typecheckclosure(clo *Node, top int) { xfunc := clo.Func.Closure + // Set current associated iota value, so iota can be used inside + // function in ConstSpec, see issue #22344 + if x := getIotaValue(); x >= 0 { + xfunc.SetIota(x) + } + clo.Func.Ntype = typecheck(clo.Func.Ntype, ctxType) clo.Type = clo.Func.Ntype.Type clo.Func.Top = top diff --git a/src/cmd/compile/internal/gc/const.go b/src/cmd/compile/internal/gc/const.go index 47601ecf5f..e40c23b8ef 100644 --- a/src/cmd/compile/internal/gc/const.go +++ b/src/cmd/compile/internal/gc/const.go @@ -998,6 +998,11 @@ func setconst(n *Node, v Val) { Xoffset: BADWIDTH, } n.SetVal(v) + if n.Type.IsUntyped() { + // TODO(mdempsky): Make typecheck responsible for setting + // the correct untyped type. + n.Type = idealType(v.Ctype()) + } // Check range. 
lno := setlineno(n) @@ -1030,24 +1035,29 @@ func setintconst(n *Node, v int64) { func nodlit(v Val) *Node { n := nod(OLITERAL, nil, nil) n.SetVal(v) - switch v.Ctype() { - default: - Fatalf("nodlit ctype %d", v.Ctype()) + n.Type = idealType(v.Ctype()) + return n +} +func idealType(ct Ctype) *types.Type { + switch ct { case CTSTR: - n.Type = types.Idealstring - + return types.Idealstring case CTBOOL: - n.Type = types.Idealbool - - case CTINT, CTRUNE, CTFLT, CTCPLX: - n.Type = types.Types[TIDEAL] - + return types.Idealbool + case CTINT: + return types.Idealint + case CTRUNE: + return types.Idealrune + case CTFLT: + return types.Idealfloat + case CTCPLX: + return types.Idealcomplex case CTNIL: - n.Type = types.Types[TNIL] + return types.Types[TNIL] } - - return n + Fatalf("unexpected Ctype: %v", ct) + return nil } // idealkind returns a constant kind like consttype @@ -1294,7 +1304,7 @@ type constSetKey struct { // equal value and identical type has already been added, then add // reports an error about the duplicate value. // -// pos provides position information for where expression n occured +// pos provides position information for where expression n occurred // (in case n does not have its own position information). what and // where are used in the error message. // diff --git a/src/cmd/compile/internal/gc/esc.go b/src/cmd/compile/internal/gc/esc.go index 5ffc3c543a..c350d7c1bc 100644 --- a/src/cmd/compile/internal/gc/esc.go +++ b/src/cmd/compile/internal/gc/esc.go @@ -212,6 +212,8 @@ var tags [1 << (bitsPerOutputInTag + EscReturnBits)]string // mktag returns the string representation for an escape analysis tag. func mktag(mask int) string { switch mask & EscMask { + case EscHeap: + return "" case EscNone, EscReturn: default: Fatalf("escape mktag") @@ -403,91 +405,93 @@ const unsafeUintptrTag = "unsafe-uintptr" // marked go:uintptrescapes. 
const uintptrEscapesTag = "uintptr-escapes" -func esctag(fn *Node) { - fn.Esc = EscFuncTagged - - name := func(s *types.Sym, narg int) string { - if s != nil { - return s.Name +func (e *Escape) paramTag(fn *Node, narg int, f *types.Field) string { + name := func() string { + if f.Sym != nil { + return f.Sym.Name } return fmt.Sprintf("arg#%d", narg) } - // External functions are assumed unsafe, - // unless //go:noescape is given before the declaration. if fn.Nbody.Len() == 0 { - if fn.Noescape() { - for _, f := range fn.Type.Params().Fields().Slice() { - if types.Haspointers(f.Type) { - f.Note = mktag(EscNone) - } - } - } - // Assume that uintptr arguments must be held live across the call. // This is most important for syscall.Syscall. // See golang.org/issue/13372. // This really doesn't have much to do with escape analysis per se, // but we are reusing the ability to annotate an individual function // argument and pass those annotations along to importing code. - narg := 0 - for _, f := range fn.Type.Params().Fields().Slice() { - narg++ - if f.Type.Etype == TUINTPTR { - if Debug['m'] != 0 { - Warnl(fn.Pos, "%v assuming %v is unsafe uintptr", funcSym(fn), name(f.Sym, narg)) - } - f.Note = unsafeUintptrTag + if f.Type.Etype == TUINTPTR { + if Debug['m'] != 0 { + Warnl(fn.Pos, "%v assuming %v is unsafe uintptr", funcSym(fn), name()) } + return unsafeUintptrTag } - return - } + if !types.Haspointers(f.Type) { // don't bother tagging for scalars + return "" + } - if fn.Func.Pragma&UintptrEscapes != 0 { - narg := 0 - for _, f := range fn.Type.Params().Fields().Slice() { - narg++ - if f.Type.Etype == TUINTPTR { - if Debug['m'] != 0 { - Warnl(fn.Pos, "%v marking %v as escaping uintptr", funcSym(fn), name(f.Sym, narg)) - } - f.Note = uintptrEscapesTag + // External functions are assumed unsafe, unless + // //go:noescape is given before the declaration. 
+ if fn.Noescape() { + if Debug['m'] != 0 && f.Sym != nil { + Warnl(fn.Pos, "%S %v does not escape", funcSym(fn), name()) } + return mktag(EscNone) + } - if f.IsDDD() && f.Type.Elem().Etype == TUINTPTR { - // final argument is ...uintptr. - if Debug['m'] != 0 { - Warnl(fn.Pos, "%v marking %v as escaping ...uintptr", funcSym(fn), name(f.Sym, narg)) - } - f.Note = uintptrEscapesTag - } + if Debug['m'] != 0 && f.Sym != nil { + Warnl(fn.Pos, "leaking param: %v", name()) } + return mktag(EscHeap) } - for _, fs := range types.RecvsParams { - for _, f := range fs(fn.Type).Fields().Slice() { - if !types.Haspointers(f.Type) { // don't bother tagging for scalars - continue + if fn.Func.Pragma&UintptrEscapes != 0 { + if f.Type.Etype == TUINTPTR { + if Debug['m'] != 0 { + Warnl(fn.Pos, "%v marking %v as escaping uintptr", funcSym(fn), name()) } - if f.Note == uintptrEscapesTag { - // Note is already set in the loop above. - continue + return uintptrEscapesTag + } + if f.IsDDD() && f.Type.Elem().Etype == TUINTPTR { + // final argument is ...uintptr. + if Debug['m'] != 0 { + Warnl(fn.Pos, "%v marking %v as escaping ...uintptr", funcSym(fn), name()) } + return uintptrEscapesTag + } + } - // Unnamed parameters are unused and therefore do not escape. - if f.Sym == nil || f.Sym.IsBlank() { - f.Note = mktag(EscNone) - continue - } + if !types.Haspointers(f.Type) { // don't bother tagging for scalars + return "" + } - switch esc := asNode(f.Nname).Esc; esc & EscMask { - case EscNone, // not touched by escflood - EscReturn: - f.Note = mktag(int(esc)) + // Unnamed parameters are unused and therefore do not escape. 
+ if f.Sym == nil || f.Sym.IsBlank() { + return mktag(EscNone) + } - case EscHeap: // touched by escflood, moved to heap + n := asNode(f.Nname) + loc := e.oldLoc(n) + esc := finalizeEsc(loc.paramEsc) + + if Debug['m'] != 0 && !loc.escapes { + if esc == EscNone { + Warnl(n.Pos, "%S %S does not escape", funcSym(fn), n) + } else if esc == EscHeap { + Warnl(n.Pos, "leaking param: %S", n) + } else { + if esc&EscContentEscapes != 0 { + Warnl(n.Pos, "leaking param content: %S", n) + } + for i := 0; i < numEscReturns; i++ { + if x := getEscReturn(esc, i); x >= 0 { + res := n.Name.Curfn.Type.Results().Field(i).Sym + Warnl(n.Pos, "leaking param: %S to result %v level=%d", n, res, x) + } } } } + + return mktag(int(esc)) } diff --git a/src/cmd/compile/internal/gc/escape.go b/src/cmd/compile/internal/gc/escape.go index d2e096d0f0..ff958beef3 100644 --- a/src/cmd/compile/internal/gc/escape.go +++ b/src/cmd/compile/internal/gc/escape.go @@ -24,7 +24,7 @@ import ( // First, we construct a directed weighted graph where vertices // (termed "locations") represent variables allocated by statements // and expressions, and edges represent assignments between variables -// (with weights reperesenting addressing/dereference counts). +// (with weights representing addressing/dereference counts). // // Next we walk the graph looking for assignment paths that might // violate the invariants stated above. If a variable v's address is @@ -33,7 +33,7 @@ import ( // // To support interprocedural analysis, we also record data-flow from // each function's parameters to the heap and to its result -// parameters. This information is summarized as "paremeter tags", +// parameters. This information is summarized as "parameter tags", // which are used at static call sites to improve escape analysis of // function arguments. @@ -44,7 +44,7 @@ import ( // "location." // // We also model every Go assignment as a directed edges between -// locations. 
The number of derefence operations minus the number of +// locations. The number of dereference operations minus the number of // addressing operations is recorded as the edge's weight (termed // "derefs"). For example: // @@ -147,12 +147,7 @@ func escapeFuncs(fns []*Node, recursive bool) { e.curfn = nil e.walkAll() - e.finish() - - // Record parameter tags for package export data. - for _, fn := range fns { - esctag(fn) - } + e.finish(fns) } func (e *Escape) initFunc(fn *Node) { @@ -1258,7 +1253,20 @@ func (l *EscLocation) leakTo(sink *EscLocation, derefs int) { } } -func (e *Escape) finish() { +func (e *Escape) finish(fns []*Node) { + // Record parameter tags for package export data. + for _, fn := range fns { + fn.Esc = EscFuncTagged + + narg := 0 + for _, fs := range types.RecvsParams { + for _, f := range fs(fn.Type).Fields().Slice() { + narg++ + f.Note = e.paramTag(fn, narg, f) + } + } + } + for _, loc := range e.allLocs { n := loc.n if n == nil { @@ -1279,27 +1287,10 @@ func (e *Escape) finish() { } n.Esc = EscHeap addrescapes(n) - } else if loc.isName(PPARAM) { - n.Esc = finalizeEsc(loc.paramEsc) - - if Debug['m'] != 0 && types.Haspointers(n.Type) { - if n.Esc == EscNone { - Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n) - } else if n.Esc == EscHeap { - Warnl(n.Pos, "leaking param: %S", n) - } else { - if n.Esc&EscContentEscapes != 0 { - Warnl(n.Pos, "leaking param content: %S", n) - } - for i := 0; i < numEscReturns; i++ { - if x := getEscReturn(n.Esc, i); x >= 0 { - res := n.Name.Curfn.Type.Results().Field(i).Sym - Warnl(n.Pos, "leaking param: %S to result %v level=%d", n, res, x) - } - } - } - } } else { + if Debug['m'] != 0 && n.Op != ONAME && n.Op != OTYPESW && n.Op != ORANGE && n.Op != ODEFER { + Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n) + } n.Esc = EscNone if loc.transient { switch n.Op { @@ -1307,10 +1298,6 @@ func (e *Escape) finish() { n.SetNoescape(true) } } - - if Debug['m'] != 0 && n.Op != ONAME && n.Op != 
OTYPESW && n.Op != ORANGE && n.Op != ODEFER { - Warnl(n.Pos, "%S %S does not escape", funcSym(loc.curfn), n) - } } } } diff --git a/src/cmd/compile/internal/gc/fmt.go b/src/cmd/compile/internal/gc/fmt.go index 782e4cb840..53d6b9d2cc 100644 --- a/src/cmd/compile/internal/gc/fmt.go +++ b/src/cmd/compile/internal/gc/fmt.go @@ -686,11 +686,21 @@ func typefmt(t *types.Type, flag FmtFlag, mode fmtMode, depth int) string { } if int(t.Etype) < len(basicnames) && basicnames[t.Etype] != "" { - name := basicnames[t.Etype] - if t == types.Idealbool || t == types.Idealstring { - name = "untyped " + name - } - return name + switch t { + case types.Idealbool: + return "untyped bool" + case types.Idealstring: + return "untyped string" + case types.Idealint: + return "untyped int" + case types.Idealrune: + return "untyped rune" + case types.Idealfloat: + return "untyped float" + case types.Idealcomplex: + return "untyped complex" + } + return basicnames[t.Etype] } if mode == FDbg { @@ -854,9 +864,6 @@ func typefmt(t *types.Type, flag FmtFlag, mode fmtMode, depth int) string { case TUNSAFEPTR: return "unsafe.Pointer" - case TDDDFIELD: - return mode.Sprintf("%v <%v> %v", t.Etype, t.Sym, t.DDDField()) - case Txxx: return "Txxx" } diff --git a/src/cmd/compile/internal/gc/iimport.go b/src/cmd/compile/internal/gc/iimport.go index 1d4329b4b1..7d134f3a5f 100644 --- a/src/cmd/compile/internal/gc/iimport.go +++ b/src/cmd/compile/internal/gc/iimport.go @@ -374,8 +374,6 @@ func (p *importReader) value() (typ *types.Type, v Val) { p.float(&x.Imag, typ) v.U = x } - - typ = idealType(typ) return } diff --git a/src/cmd/compile/internal/gc/main.go b/src/cmd/compile/internal/gc/main.go index 12ebfb871b..dff33ee530 100644 --- a/src/cmd/compile/internal/gc/main.go +++ b/src/cmd/compile/internal/gc/main.go @@ -570,7 +570,7 @@ func Main(archInit func(*Arch)) { fcount++ } } - // With all types ckecked, it's now safe to verify map keys. 
One single + // With all types checked, it's now safe to verify map keys. One single // check past phase 9 isn't sufficient, as we may exit with other errors // before then, thus skipping map key errors. checkMapKeys() diff --git a/src/cmd/compile/internal/gc/plive.go b/src/cmd/compile/internal/gc/plive.go index f436af8141..27c9674a7d 100644 --- a/src/cmd/compile/internal/gc/plive.go +++ b/src/cmd/compile/internal/gc/plive.go @@ -304,7 +304,7 @@ func (lv *Liveness) valueEffects(v *ssa.Value) (int32, liveEffect) { var effect liveEffect // Read is a read, obviously. // - // Addr is a read also, as any subseqent holder of the pointer must be able + // Addr is a read also, as any subsequent holder of the pointer must be able // to see all the values (including initialization) written so far. // This also prevents a variable from "coming back from the dead" and presenting // stale pointers to the garbage collector. See issue 28445. @@ -1453,12 +1453,25 @@ func liveness(e *ssafn, f *ssa.Func, pp *Progs) LivenessMap { return lv.livenessMap } +// TODO(cuonglm,mdempsky): Revisit after #24416 is fixed. func isfat(t *types.Type) bool { if t != nil { switch t.Etype { - case TSTRUCT, TARRAY, TSLICE, TSTRING, + case TSLICE, TSTRING, TINTER: // maybe remove later return true + case TARRAY: + // Array of 1 element, check if element is fat + if t.NumElem() == 1 { + return isfat(t.Elem()) + } + return true + case TSTRUCT: + // Struct with 1 field, check if field is fat + if t.NumFields() == 1 { + return isfat(t.Field(0).Type) + } + return true } } diff --git a/src/cmd/compile/internal/gc/sinit.go b/src/cmd/compile/internal/gc/sinit.go index a506bfe31f..ae8e79d854 100644 --- a/src/cmd/compile/internal/gc/sinit.go +++ b/src/cmd/compile/internal/gc/sinit.go @@ -495,7 +495,7 @@ func isStaticCompositeLiteral(n *Node) bool { // literal components of composite literals. // Dynamic initialization represents non-literals and // non-literal components of composite literals. 
-// LocalCode initializion represents initialization +// LocalCode initialization represents initialization // that occurs purely in generated code local to the function of use. // Initialization code is sometimes generated in passes, // first static then dynamic. diff --git a/src/cmd/compile/internal/gc/ssa.go b/src/cmd/compile/internal/gc/ssa.go index 2866625c83..01d7bb31d9 100644 --- a/src/cmd/compile/internal/gc/ssa.go +++ b/src/cmd/compile/internal/gc/ssa.go @@ -1645,7 +1645,7 @@ var fpConvOpToSSA32 = map[twoTypes]twoOpsAndType{ twoTypes{TFLOAT64, TUINT32}: twoOpsAndType{ssa.OpCvt64Fto32U, ssa.OpCopy, TUINT32}, } -// uint64<->float conversions, only on machines that have intructions for that +// uint64<->float conversions, only on machines that have instructions for that var uint64fpConvOpToSSA = map[twoTypes]twoOpsAndType{ twoTypes{TUINT64, TFLOAT32}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64Uto32F, TUINT64}, twoTypes{TUINT64, TFLOAT64}: twoOpsAndType{ssa.OpCopy, ssa.OpCvt64Uto64F, TUINT64}, @@ -3600,8 +3600,8 @@ func init() { func(s *state, n *Node, args []*ssa.Value) *ssa.Value { return s.newValue2(ssa.OpMul64uhilo, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1]) }, - sys.AMD64, sys.ARM64, sys.PPC64) - alias("math/bits", "Mul", "math/bits", "Mul64", sys.ArchAMD64, sys.ArchARM64, sys.ArchPPC64) + sys.AMD64, sys.ARM64, sys.PPC64, sys.S390X) + alias("math/bits", "Mul", "math/bits", "Mul64", sys.ArchAMD64, sys.ArchARM64, sys.ArchPPC64, sys.ArchS390X) addF("math/bits", "Add64", func(s *state, n *Node, args []*ssa.Value) *ssa.Value { return s.newValue3(ssa.OpAdd64carry, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1], args[2]) diff --git a/src/cmd/compile/internal/gc/swt.go b/src/cmd/compile/internal/gc/swt.go index 33bc71b862..5089efe08b 100644 --- a/src/cmd/compile/internal/gc/swt.go +++ b/src/cmd/compile/internal/gc/swt.go @@ -6,6 +6,7 @@ package gc import ( "cmd/compile/internal/types" + 
"cmd/internal/src" "sort" ) @@ -56,164 +57,210 @@ type caseClauses struct { // typecheckswitch typechecks a switch statement. func typecheckswitch(n *Node) { typecheckslice(n.Ninit.Slice(), ctxStmt) - - var nilonly string - var top int - var t *types.Type - if n.Left != nil && n.Left.Op == OTYPESW { - // type switch - top = ctxType - n.Left.Right = typecheck(n.Left.Right, ctxExpr) - t = n.Left.Right.Type - if t != nil && !t.IsInterface() { - yyerrorl(n.Pos, "cannot type switch on non-interface value %L", n.Left.Right) - } - if v := n.Left.Left; v != nil && !v.isBlank() && n.List.Len() == 0 { - // We don't actually declare the type switch's guarded - // declaration itself. So if there are no cases, we - // won't notice that it went unused. - yyerrorl(v.Pos, "%v declared and not used", v.Sym) - } + typecheckTypeSwitch(n) } else { - // expression switch - top = ctxExpr - if n.Left != nil { - n.Left = typecheck(n.Left, ctxExpr) - n.Left = defaultlit(n.Left, nil) - t = n.Left.Type - } else { - t = types.Types[TBOOL] + typecheckExprSwitch(n) + } +} + +func typecheckTypeSwitch(n *Node) { + n.Left.Right = typecheck(n.Left.Right, ctxExpr) + t := n.Left.Right.Type + if t != nil && !t.IsInterface() { + yyerrorl(n.Pos, "cannot type switch on non-interface value %L", n.Left.Right) + t = nil + } + n.Type = t // TODO(mdempsky): Remove; statements aren't typed. + + // We don't actually declare the type switch's guarded + // declaration itself. So if there are no cases, we won't + // notice that it went unused. 
+ if v := n.Left.Left; v != nil && !v.isBlank() && n.List.Len() == 0 { + yyerrorl(v.Pos, "%v declared and not used", v.Sym) + } + + var defCase, nilCase *Node + var ts typeSet + for _, ncase := range n.List.Slice() { + ls := ncase.List.Slice() + if len(ls) == 0 { // default: + if defCase != nil { + yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", defCase.Line()) + } else { + defCase = ncase + } } - if t != nil { + + for i := range ls { + ls[i] = typecheck(ls[i], ctxExpr|ctxType) + n1 := ls[i] + if t == nil || n1.Type == nil { + continue + } + + var missing, have *types.Field + var ptr int switch { - case !okforeq[t.Etype]: - yyerrorl(n.Pos, "cannot switch on %L", n.Left) - case t.IsSlice(): - nilonly = "slice" - case t.IsArray() && !IsComparable(t): - yyerrorl(n.Pos, "cannot switch on %L", n.Left) - case t.IsStruct(): - if f := IncomparableField(t); f != nil { - yyerrorl(n.Pos, "cannot switch on %L (struct containing %v cannot be compared)", n.Left, f.Type) + case n1.isNil(): // case nil: + if nilCase != nil { + yyerrorl(ncase.Pos, "multiple nil cases in type switch (first at %v)", nilCase.Line()) + } else { + nilCase = ncase + } + case n1.Op != OTYPE: + yyerrorl(ncase.Pos, "%L is not a type", n1) + case !n1.Type.IsInterface() && !implements(n1.Type, t, &missing, &have, &ptr) && !missing.Broke(): + if have != nil && !have.Broke() { + yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ + " (wrong type for %v method)\n\thave %v%S\n\twant %v%S", n.Left.Right, n1.Type, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type) + } else if ptr != 0 { + yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ + " (%v method has pointer receiver)", n.Left.Right, n1.Type, missing.Sym) + } else { + yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ + " (missing %v method)", n.Left.Right, n1.Type, missing.Sym) + } + } + + if n1.Op == OTYPE { + ts.add(ncase.Pos, n1.Type) 
+ } + } + + if ncase.Rlist.Len() != 0 { + // Assign the clause variable's type. + vt := t + if len(ls) == 1 { + if ls[0].Op == OTYPE { + vt = ls[0].Type + } else if ls[0].Op != OLITERAL { // TODO(mdempsky): Should be !ls[0].isNil() + // Invalid single-type case; + // mark variable as broken. + vt = nil } - case t.Etype == TFUNC: - nilonly = "func" - case t.IsMap(): - nilonly = "map" } + + // TODO(mdempsky): It should be possible to + // still typecheck the case body. + if vt == nil { + continue + } + + nvar := ncase.Rlist.First() + nvar.Type = vt + nvar = typecheck(nvar, ctxExpr|ctxAssign) + ncase.Rlist.SetFirst(nvar) } + + typecheckslice(ncase.Nbody.Slice(), ctxStmt) } +} - n.Type = t +type typeSet struct { + m map[string][]typeSetEntry +} - var def, niltype *Node - for _, ncase := range n.List.Slice() { - if ncase.List.Len() == 0 { - // default - if def != nil { - setlineno(ncase) - yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", def.Line()) +type typeSetEntry struct { + pos src.XPos + typ *types.Type +} + +func (s *typeSet) add(pos src.XPos, typ *types.Type) { + if s.m == nil { + s.m = make(map[string][]typeSetEntry) + } + + // LongString does not uniquely identify types, so we need to + // disambiguate collisions with types.Identical. + // TODO(mdempsky): Add a method that *is* unique. 
+ ls := typ.LongString() + prevs := s.m[ls] + for _, prev := range prevs { + if types.Identical(typ, prev.typ) { + yyerrorl(pos, "duplicate case %v in type switch\n\tprevious case at %s", typ, linestr(prev.pos)) + return + } + } + s.m[ls] = append(prevs, typeSetEntry{pos, typ}) +} + +func typecheckExprSwitch(n *Node) { + t := types.Types[TBOOL] + if n.Left != nil { + n.Left = typecheck(n.Left, ctxExpr) + n.Left = defaultlit(n.Left, nil) + t = n.Left.Type + } + + var nilonly string + if t != nil { + switch { + case t.IsMap(): + nilonly = "map" + case t.Etype == TFUNC: + nilonly = "func" + case t.IsSlice(): + nilonly = "slice" + + case !IsComparable(t): + if t.IsStruct() { + yyerrorl(n.Pos, "cannot switch on %L (struct containing %v cannot be compared)", n.Left, IncomparableField(t).Type) } else { - def = ncase + yyerrorl(n.Pos, "cannot switch on %L", n.Left) } - } else { - ls := ncase.List.Slice() - for i1, n1 := range ls { - setlineno(n1) - ls[i1] = typecheck(ls[i1], ctxExpr|ctxType) - n1 = ls[i1] - if n1.Type == nil || t == nil { - continue - } - - setlineno(ncase) - switch top { - // expression switch - case ctxExpr: - ls[i1] = defaultlit(ls[i1], t) - n1 = ls[i1] - switch { - case n1.Op == OTYPE: - yyerrorl(ncase.Pos, "type %v is not an expression", n1.Type) - case n1.Type != nil && assignop(n1.Type, t, nil) == 0 && assignop(t, n1.Type, nil) == 0: - if n.Left != nil { - yyerrorl(ncase.Pos, "invalid case %v in switch on %v (mismatched types %v and %v)", n1, n.Left, n1.Type, t) - } else { - yyerrorl(ncase.Pos, "invalid case %v in switch (mismatched types %v and bool)", n1, n1.Type) - } - case nilonly != "" && !n1.isNil(): - yyerrorl(ncase.Pos, "invalid case %v in switch (can only compare %s %v to nil)", n1, nilonly, n.Left) - case t.IsInterface() && !n1.Type.IsInterface() && !IsComparable(n1.Type): - yyerrorl(ncase.Pos, "invalid case %L in switch (incomparable type)", n1) - } + t = nil + } + } + n.Type = t // TODO(mdempsky): Remove; statements aren't typed. 
- // type switch - case ctxType: - var missing, have *types.Field - var ptr int - switch { - case n1.Op == OLITERAL && n1.Type.IsKind(TNIL): - // case nil: - if niltype != nil { - yyerrorl(ncase.Pos, "multiple nil cases in type switch (first at %v)", niltype.Line()) - } else { - niltype = ncase - } - case n1.Op != OTYPE && n1.Type != nil: // should this be ||? - yyerrorl(ncase.Pos, "%L is not a type", n1) - // reset to original type - n1 = n.Left.Right - ls[i1] = n1 - case !n1.Type.IsInterface() && t.IsInterface() && !implements(n1.Type, t, &missing, &have, &ptr): - if have != nil && !missing.Broke() && !have.Broke() { - yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ - " (wrong type for %v method)\n\thave %v%S\n\twant %v%S", n.Left.Right, n1.Type, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type) - } else if !missing.Broke() { - if ptr != 0 { - yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ - " (%v method has pointer receiver)", n.Left.Right, n1.Type, missing.Sym) - } else { - yyerrorl(ncase.Pos, "impossible type switch case: %L cannot have dynamic type %v"+ - " (missing %v method)", n.Left.Right, n1.Type, missing.Sym) - } - } - } - } + var defCase *Node + var cs constSet + for _, ncase := range n.List.Slice() { + ls := ncase.List.Slice() + if len(ls) == 0 { // default: + if defCase != nil { + yyerrorl(ncase.Pos, "multiple defaults in switch (first at %v)", defCase.Line()) + } else { + defCase = ncase } } - if top == ctxType { - ll := ncase.List - if ncase.Rlist.Len() != 0 { - nvar := ncase.Rlist.First() - if ll.Len() == 1 && (ll.First().Type == nil || !ll.First().Type.IsKind(TNIL)) { - // single entry type switch - nvar.Type = ll.First().Type - } else { - // multiple entry type switch or default - nvar.Type = n.Type - } + for i := range ls { + setlineno(ncase) + ls[i] = typecheck(ls[i], ctxExpr) + ls[i] = defaultlit(ls[i], t) + n1 := ls[i] + if t == nil || n1.Type == nil { + 
continue + } - if nvar.Type == nil || nvar.Type.IsUntyped() { - // if the value we're switching on has no type or is untyped, - // we've already printed an error and don't need to continue - // typechecking the body - continue + switch { + case nilonly != "" && !n1.isNil(): + yyerrorl(ncase.Pos, "invalid case %v in switch (can only compare %s %v to nil)", n1, nilonly, n.Left) + case t.IsInterface() && !n1.Type.IsInterface() && !IsComparable(n1.Type): + yyerrorl(ncase.Pos, "invalid case %L in switch (incomparable type)", n1) + case assignop(n1.Type, t, nil) == 0 && assignop(t, n1.Type, nil) == 0: + if n.Left != nil { + yyerrorl(ncase.Pos, "invalid case %v in switch on %v (mismatched types %v and %v)", n1, n.Left, n1.Type, t) + } else { + yyerrorl(ncase.Pos, "invalid case %v in switch (mismatched types %v and bool)", n1, n1.Type) } + } - nvar = typecheck(nvar, ctxExpr|ctxAssign) - ncase.Rlist.SetFirst(nvar) + // Don't check for duplicate bools. Although the spec allows it, + // (1) the compiler hasn't checked it in the past, so compatibility mandates it, and + // (2) it would disallow useful things like + // case GOARCH == "arm" && GOARM == "5": + // case GOARCH == "arm": + // which would both evaluate to false for non-ARM compiles. + if !n1.Type.IsBoolean() { + cs.add(ncase.Pos, n1, "case", "switch") } } typecheckslice(ncase.Nbody.Slice(), ctxStmt) } - switch top { - // expression switch - case ctxExpr: - checkDupExprCases(n.Left, n.List.Slice()) - } } // walkswitch walks a switch statement. @@ -586,65 +633,9 @@ func (s *typeSwitch) genCaseClauses(clauses []*Node) caseClauses { cc.defjmp = nod(OBREAK, nil, nil) } - // diagnose duplicate cases - s.checkDupCases(cc.list) return cc } -func (s *typeSwitch) checkDupCases(cc []caseClause) { - if len(cc) < 2 { - return - } - // We store seen types in a map keyed by type hash. - // It is possible, but very unlikely, for multiple distinct types to have the same hash. 
- seen := make(map[uint32][]*Node) - // To avoid many small allocations of length 1 slices, - // also set up a single large slice to slice into. - nn := make([]*Node, 0, len(cc)) -Outer: - for _, c := range cc { - prev, ok := seen[c.hash] - if !ok { - // First entry for this hash. - nn = append(nn, c.node) - seen[c.hash] = nn[len(nn)-1 : len(nn) : len(nn)] - continue - } - for _, n := range prev { - if types.Identical(n.Left.Type, c.node.Left.Type) { - yyerrorl(c.node.Pos, "duplicate case %v in type switch\n\tprevious case at %v", c.node.Left.Type, n.Line()) - // avoid double-reporting errors - continue Outer - } - } - seen[c.hash] = append(seen[c.hash], c.node) - } -} - -func checkDupExprCases(exprname *Node, clauses []*Node) { - // boolean (naked) switch, nothing to do. - if exprname == nil { - return - } - - var cs constSet - for _, ncase := range clauses { - for _, n := range ncase.List.Slice() { - // Don't check for duplicate bools. Although the spec allows it, - // (1) the compiler hasn't checked it in the past, so compatibility mandates it, and - // (2) it would disallow useful things like - // case GOARCH == "arm" && GOARM == "5": - // case GOARCH == "arm": - // which would both evaluate to false for non-ARM compiles. - if n.Type.IsBoolean() { - continue - } - - cs.add(ncase.Pos, n, "case", "switch") - } - } -} - // walk generates an AST that implements sw, // where sw is a type switch. // The AST is generally of the form of a linear diff --git a/src/cmd/compile/internal/gc/syntax.go b/src/cmd/compile/internal/gc/syntax.go index dec72690bf..e8a527a8fc 100644 --- a/src/cmd/compile/internal/gc/syntax.go +++ b/src/cmd/compile/internal/gc/syntax.go @@ -48,6 +48,7 @@ type Node struct { // - OSTRUCTKEY uses it to store the named field's offset. // - Named OLITERALs use it to store their ambient iota value. // - OINLMARK stores an index into the inlTree data structure. + // - OCLOSURE uses it to store ambient iota value, if any. // Possibly still more uses. 
If you find any, document them. Xoffset int64 @@ -691,7 +692,7 @@ const ( ORECV // <-Left ORUNESTR // Type(Left) (Type is string, Left is rune) OSELRECV // Left = <-Right.Left: (appears as .Left of OCASE; Right.Op == ORECV) - OSELRECV2 // List = <-Right.Left: (apperas as .Left of OCASE; count(List) == 2, Right.Op == ORECV) + OSELRECV2 // List = <-Right.Left: (appears as .Left of OCASE; count(List) == 2, Right.Op == ORECV) OIOTA // iota OREAL // real(Left) OIMAG // imag(Left) diff --git a/src/cmd/compile/internal/gc/typecheck.go b/src/cmd/compile/internal/gc/typecheck.go index e4d1cedd74..e725c6f363 100644 --- a/src/cmd/compile/internal/gc/typecheck.go +++ b/src/cmd/compile/internal/gc/typecheck.go @@ -100,10 +100,8 @@ func resolve(n *Node) (res *Node) { } if r.Op == OIOTA { - if i := len(typecheckdefstack); i > 0 { - if x := typecheckdefstack[i-1]; x.Op == OLITERAL { - return nodintconst(x.Iota()) - } + if x := getIotaValue(); x >= 0 { + return nodintconst(x) } return n } @@ -1407,20 +1405,18 @@ func typecheck1(n *Node, top int) (res *Node) { } // Determine result type. - et := t.Etype - switch et { + switch t.Etype { case TIDEAL: - // result is ideal + n.Type = types.Idealfloat case TCOMPLEX64: - et = TFLOAT32 + n.Type = types.Types[TFLOAT32] case TCOMPLEX128: - et = TFLOAT64 + n.Type = types.Types[TFLOAT64] default: yyerror("invalid argument %L for %v", l, n.Op) n.Type = nil return n } - n.Type = types.Types[et] case OCOMPLEX: ok |= ctxExpr @@ -1457,7 +1453,7 @@ func typecheck1(n *Node, top int) (res *Node) { return n case TIDEAL: - t = types.Types[TIDEAL] + t = types.Idealcomplex case TFLOAT32: t = types.Types[TCOMPLEX64] @@ -2683,20 +2679,20 @@ func errorDetails(nl Nodes, tstruct *types.Type, isddd bool) string { // e.g in error messages about wrong arguments to return. 
func sigrepr(t *types.Type) string { switch t { - default: - return t.String() - - case types.Types[TIDEAL]: - // "untyped number" is not commonly used - // outside of the compiler, so let's use "number". - return "number" - case types.Idealstring: return "string" - case types.Idealbool: return "bool" } + + if t.Etype == TIDEAL { + // "untyped number" is not commonly used + // outside of the compiler, so let's use "number". + // TODO(mdempsky): Revisit this. + return "number" + } + + return t.String() } // retsigerr returns the signature of the types @@ -3937,3 +3933,19 @@ func setTypeNode(n *Node, t *types.Type) { n.Type = t n.Type.Nod = asTypesNode(n) } + +// getIotaValue returns the current value for "iota", +// or -1 if not within a ConstSpec. +func getIotaValue() int64 { + if i := len(typecheckdefstack); i > 0 { + if x := typecheckdefstack[i-1]; x.Op == OLITERAL { + return x.Iota() + } + } + + if Curfn != nil && Curfn.Iota() >= 0 { + return Curfn.Iota() + } + + return -1 +} diff --git a/src/cmd/compile/internal/gc/types.go b/src/cmd/compile/internal/gc/types.go index ce82c3a52e..748f8458bd 100644 --- a/src/cmd/compile/internal/gc/types.go +++ b/src/cmd/compile/internal/gc/types.go @@ -54,8 +54,5 @@ const ( TFUNCARGS = types.TFUNCARGS TCHANARGS = types.TCHANARGS - // pseudo-types for import/export - TDDDFIELD = types.TDDDFIELD // wrapper: contained type is a ... 
field - NTYPE = types.NTYPE ) diff --git a/src/cmd/compile/internal/gc/universe.go b/src/cmd/compile/internal/gc/universe.go index b8260d6525..2077c5639e 100644 --- a/src/cmd/compile/internal/gc/universe.go +++ b/src/cmd/compile/internal/gc/universe.go @@ -334,14 +334,7 @@ func typeinit() { maxfltval[TCOMPLEX128] = maxfltval[TFLOAT64] minfltval[TCOMPLEX128] = minfltval[TFLOAT64] - // for walk to use in error messages - types.Types[TFUNC] = functype(nil, nil, nil) - - // types used in front end - // types.Types[TNIL] got set early in lexinit - types.Types[TIDEAL] = types.New(TIDEAL) - - types.Types[TINTER] = types.New(TINTER) + types.Types[TINTER] = types.New(TINTER) // empty interface // simple aliases simtype[TMAP] = TPTR diff --git a/src/cmd/compile/internal/gc/walk.go b/src/cmd/compile/internal/gc/walk.go index 0062dddc6c..cb49e0f7ce 100644 --- a/src/cmd/compile/internal/gc/walk.go +++ b/src/cmd/compile/internal/gc/walk.go @@ -1265,7 +1265,7 @@ opswitch: } // Map initialization with a variable or large hint is // more complicated. We therefore generate a call to - // runtime.makemap to intialize hmap and allocate the + // runtime.makemap to initialize hmap and allocate the // map buckets. 
// When hint fits into int, use makemap instead of diff --git a/src/cmd/compile/internal/s390x/ggen.go b/src/cmd/compile/internal/s390x/ggen.go index 6a72b27ac5..ae9965c378 100644 --- a/src/cmd/compile/internal/s390x/ggen.go +++ b/src/cmd/compile/internal/s390x/ggen.go @@ -105,8 +105,5 @@ func zeroAuto(pp *gc.Progs, n *gc.Node) { } func ginsnop(pp *gc.Progs) *obj.Prog { - p := pp.Prog(s390x.AWORD) - p.From.Type = obj.TYPE_CONST - p.From.Offset = 0x47000000 // nop 0 - return p + return pp.Prog(s390x.ANOPH) } diff --git a/src/cmd/compile/internal/s390x/ssa.go b/src/cmd/compile/internal/s390x/ssa.go index 7ddebe7b64..5acb391dcd 100644 --- a/src/cmd/compile/internal/s390x/ssa.go +++ b/src/cmd/compile/internal/s390x/ssa.go @@ -225,6 +225,19 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) { v.Fatalf("input[0] and output not in same register %s", v.LongString()) } opregreg(s, v.Op.Asm(), r, v.Args[1].Reg()) + case ssa.OpS390XMLGR: + // MLGR Rx R3 -> R2:R3 + r0 := v.Args[0].Reg() + r1 := v.Args[1].Reg() + if r1 != s390x.REG_R3 { + v.Fatalf("We require the multiplicand to be stored in R3 for MLGR %s", v.LongString()) + } + p := s.Prog(s390x.AMLGR) + p.From.Type = obj.TYPE_REG + p.From.Reg = r0 + p.To.Reg = s390x.REG_R2 + p.To.Type = obj.TYPE_REG + case ssa.OpS390XFMADD, ssa.OpS390XFMADDS, ssa.OpS390XFMSUB, ssa.OpS390XFMSUBS: r := v.Reg() @@ -482,7 +495,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) { p.To.Type = obj.TYPE_MEM p.To.Reg = v.Args[0].Reg() gc.AddAux2(&p.To, v, sc.Off()) - case ssa.OpCopy, ssa.OpS390XMOVDreg: + case ssa.OpCopy: if v.Type.IsMemory() { return } @@ -491,11 +504,6 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) { if x != y { opregreg(s, moveByType(v.Type), y, x) } - case ssa.OpS390XMOVDnop: - if v.Reg() != v.Args[0].Reg() { - v.Fatalf("input[0] and output not in same register %s", v.LongString()) - } - // nothing to do case ssa.OpLoadReg: if v.Type.IsFlags() { v.Fatalf("load flags not implemented: %v", v.LongString()) diff --git
a/src/cmd/compile/internal/ssa/branchelim.go b/src/cmd/compile/internal/ssa/branchelim.go index 71c947d0d5..fda0cbb9b3 100644 --- a/src/cmd/compile/internal/ssa/branchelim.go +++ b/src/cmd/compile/internal/ssa/branchelim.go @@ -319,7 +319,7 @@ func shouldElimIfElse(no, yes, post *Block, arch string) bool { cost := phi * 1 if phi > 1 { // If we have more than 1 phi and some values in post have args - // in yes or no blocks, we may have to recalucalte condition, because + // in yes or no blocks, we may have to recalculate condition, because // those args may clobber flags. For now assume that all operations clobber flags. cost += other * 1 } diff --git a/src/cmd/compile/internal/ssa/deadstore.go b/src/cmd/compile/internal/ssa/deadstore.go index 69616b3a88..ebcb571e66 100644 --- a/src/cmd/compile/internal/ssa/deadstore.go +++ b/src/cmd/compile/internal/ssa/deadstore.go @@ -180,7 +180,7 @@ func elimDeadAutosGeneric(f *Func) { } return case OpStore, OpMove, OpZero: - // v should be elimated if we eliminate the auto. + // v should be eliminated if we eliminate the auto. 
n, ok := addr[args[0]] if ok && elim[v] == nil { elim[v] = n diff --git a/src/cmd/compile/internal/ssa/debug_test.go b/src/cmd/compile/internal/ssa/debug_test.go index 73a0afb82c..28bb88a0c3 100644 --- a/src/cmd/compile/internal/ssa/debug_test.go +++ b/src/cmd/compile/internal/ssa/debug_test.go @@ -959,7 +959,7 @@ func replaceEnv(env []string, ev string, evv string) []string { } // asCommandLine renders cmd as something that could be copy-and-pasted into a command line -// If cwd is not empty and different from the command's directory, prepend an approprirate "cd" +// If cwd is not empty and different from the command's directory, prepend an appropriate "cd" func asCommandLine(cwd string, cmd *exec.Cmd) string { s := "(" if cmd.Dir != "" && cmd.Dir != cwd { diff --git a/src/cmd/compile/internal/ssa/gen/PPC64Ops.go b/src/cmd/compile/internal/ssa/gen/PPC64Ops.go index 65a183dba6..af72774765 100644 --- a/src/cmd/compile/internal/ssa/gen/PPC64Ops.go +++ b/src/cmd/compile/internal/ssa/gen/PPC64Ops.go @@ -222,7 +222,7 @@ func init() { {name: "POPCNTD", argLength: 1, reg: gp11, asm: "POPCNTD"}, // number of set bits in arg0 {name: "POPCNTW", argLength: 1, reg: gp11, asm: "POPCNTW"}, // number of set bits in each word of arg0 placed in corresponding word - {name: "POPCNTB", argLength: 1, reg: gp11, asm: "POPCNTB"}, // number of set bits in each byte of arg0 placed in corresonding byte + {name: "POPCNTB", argLength: 1, reg: gp11, asm: "POPCNTB"}, // number of set bits in each byte of arg0 placed in corresponding byte {name: "FDIV", argLength: 2, reg: fp21, asm: "FDIV"}, // arg0/arg1 {name: "FDIVS", argLength: 2, reg: fp21, asm: "FDIVS"}, // arg0/arg1 diff --git a/src/cmd/compile/internal/ssa/gen/S390X.rules b/src/cmd/compile/internal/ssa/gen/S390X.rules index ee670a908d..98bf875f80 100644 --- a/src/cmd/compile/internal/ssa/gen/S390X.rules +++ b/src/cmd/compile/internal/ssa/gen/S390X.rules @@ -17,6 +17,7 @@ (Mul(32|16|8) x y) -> (MULLW x y) (Mul32F x y) -> (FMULS x y) 
(Mul64F x y) -> (FMUL x y) +(Mul64uhilo x y) -> (MLGR x y) (Div32F x y) -> (FDIVS x y) (Div64F x y) -> (FDIV x y) @@ -443,61 +444,121 @@ // *************************** // TODO: Should the optimizations be a separate pass? -// Fold unnecessary type conversions. -(MOVDreg x) && t.Compare(x.Type) == types.CMPeq -> x -(MOVDnop x) && t.Compare(x.Type) == types.CMPeq -> x - -// Propagate constants through type conversions. -(MOVDreg (MOVDconst [c])) -> (MOVDconst [c]) -(MOVDnop (MOVDconst [c])) -> (MOVDconst [c]) - -// If a register move has only 1 use, just use the same register without emitting instruction. -// MOVDnop doesn't emit instruction, only for ensuring the type. -(MOVDreg x) && x.Uses == 1 -> (MOVDnop x) - -// Fold type changes into loads. -(MOVDreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem) -(MOVDreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem) -(MOVDreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem) -(MOVDreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem) -(MOVDreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem) -(MOVDreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem) -(MOVDreg x:(MOVDload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDload [off] {sym} ptr mem) - -(MOVDnop x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem) -(MOVDnop x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem) -(MOVDnop x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem) -(MOVDnop x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && 
clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem) -(MOVDnop x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem) -(MOVDnop x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem) -(MOVDnop x:(MOVDload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDload [off] {sym} ptr mem) - -(MOVDreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem) -(MOVDreg x:(MOVDloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDloadidx [off] {sym} ptr idx mem) - -(MOVDnop x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) -(MOVDnop x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem) -(MOVDnop x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) -(MOVDnop x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem) -(MOVDnop x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) -(MOVDnop 
x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem) -(MOVDnop x:(MOVDloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVDloadidx [off] {sym} ptr idx mem) - -// Fold sign extensions into conditional moves of constants. -// Designed to remove the MOVBZreg inserted by the If lowering. -(MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) -(MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _)) && int64(uint8(c)) == c && int64(uint8(d)) == d -> (MOVDreg x) +// Note: when removing unnecessary sign/zero extensions. +// +// After a value is spilled it is restored using a sign- or zero-extension +// to register-width as appropriate for its type. For example, a uint8 will +// be restored using a MOVBZ (llgc) instruction which will zero extend the +// 8-bit value to 64-bits. +// +// This is a hazard when folding sign- and zero-extensions since we need to +// ensure not only that the value in the argument register is correctly +// extended but also that it will still be correctly extended if it is +// spilled and restored. 
+// +// In general this means we need type checks when the RHS of a rule is an +// OpCopy (i.e. "(... x:(...) ...) -> x"). + +// Merge double extensions. +(MOV(H|HZ)reg e:(MOV(B|BZ)reg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(W|WZ)reg e:(MOV(B|BZ)reg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(W|WZ)reg e:(MOV(H|HZ)reg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x) + +// Bypass redundant sign extensions. +(MOV(B|BZ)reg e:(MOVBreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(B|BZ)reg e:(MOVHreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(B|BZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(H|HZ)reg e:(MOVHreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x) +(MOV(H|HZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x) +(MOV(W|WZ)reg e:(MOVWreg x)) && clobberIfDead(e) -> (MOV(W|WZ)reg x) + +// Bypass redundant zero extensions. +(MOV(B|BZ)reg e:(MOVBZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(B|BZ)reg e:(MOVHZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(B|BZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(B|BZ)reg x) +(MOV(H|HZ)reg e:(MOVHZreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x) +(MOV(H|HZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(H|HZ)reg x) +(MOV(W|WZ)reg e:(MOVWZreg x)) && clobberIfDead(e) -> (MOV(W|WZ)reg x) + +// Remove zero extensions after zero extending load. +// Note: take care that if x is spilled it is restored correctly. +(MOV(B|H|W)Zreg x:(MOVBZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x +(MOV(B|H|W)Zreg x:(MOVBZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x +(MOV(H|W)Zreg x:(MOVHZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x +(MOV(H|W)Zreg x:(MOVHZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x +(MOVWZreg x:(MOVWZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 4) -> x +(MOVWZreg x:(MOVWZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 4) -> x + +// Remove sign extensions after sign extending load. 
+// Note: take care that if x is spilled it is restored correctly. +(MOV(B|H|W)reg x:(MOVBload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x +(MOV(B|H|W)reg x:(MOVBloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x +(MOV(H|W)reg x:(MOVHload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x +(MOV(H|W)reg x:(MOVHloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x +(MOVWreg x:(MOVWload _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x +(MOVWreg x:(MOVWloadidx _ _ _)) && (x.Type.IsSigned() || x.Type.Size() == 8) -> x + +// Remove sign extensions after zero extending load. +// These type checks are probably unnecessary but do them anyway just in case. +(MOV(H|W)reg x:(MOVBZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x +(MOV(H|W)reg x:(MOVBZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 1) -> x +(MOVWreg x:(MOVHZload _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x +(MOVWreg x:(MOVHZloadidx _ _ _)) && (!x.Type.IsSigned() || x.Type.Size() > 2) -> x + +// Fold sign and zero extensions into loads. +// +// Note: The combined instruction must end up in the same block +// as the original load. If not, we end up making a value with +// memory type live in two different blocks, which can lead to +// multiple memory values alive simultaneously. +// +// Make sure we don't combine these ops if the load has another use. +// This prevents a single load from being split into multiple loads +// which then might return different values. See test/atomicload.go. 
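The spill/restore hazard described above is what drives the type checks on these elision rules. A minimal Go sketch of that guard logic, with illustrative function names that do not exist in the compiler: a zero-extending load of N bytes already leaves the register correctly extended for any unsigned type and for any signed type strictly wider than N bytes, while a sign-extending load covers signed types and full-width 8-byte values.

```go
package main

import "fmt"

// canElideZeroExt reports whether a zero extension of a value produced by a
// zero-extending load of loadBytes bytes can be removed. It mirrors the rule
// condition (!x.Type.IsSigned() || x.Type.Size() > loadBytes): an unsigned
// type restores from a spill with a zero extension, and a strictly wider
// signed type restores with a sign extension of bits that are already zero.
func canElideZeroExt(loadBytes, typeSize int, signed bool) bool {
	return !signed || typeSize > loadBytes
}

// canElideSignExt mirrors (x.Type.IsSigned() || x.Type.Size() == 8) for
// sign-extending loads: signed types restore with the same sign extension,
// and 8-byte values restore without any extension at all.
func canElideSignExt(typeSize int, signed bool) bool {
	return signed || typeSize == 8
}

func main() {
	// uint8 loaded with MOVBZload: a restore re-zero-extends, safe to elide.
	fmt.Println(canElideZeroExt(1, 1, false))
	// int8 loaded with MOVBZload: a spill/restore would sign-extend, unsafe.
	fmt.Println(canElideZeroExt(1, 1, true))
	// int8 loaded with MOVBload: a restore sign-extends again, safe.
	fmt.Println(canElideSignExt(1, true))
}
```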
+(MOV(B|H|W)Zreg x:(MOV(B|H|W)load [o] {s} p mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> @x.Block (MOV(B|H|W)Zload [o] {s} p mem)
+(MOV(B|H|W)reg x:(MOV(B|H|W)Zload [o] {s} p mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> @x.Block (MOV(B|H|W)load [o] {s} p mem)
+(MOV(B|H|W)Zreg x:(MOV(B|H|W)loadidx [o] {s} p i mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> @x.Block (MOV(B|H|W)Zloadidx [o] {s} p i mem)
+(MOV(B|H|W)reg x:(MOV(B|H|W)Zloadidx [o] {s} p i mem))
+  && x.Uses == 1
+  && clobber(x)
+  -> @x.Block (MOV(B|H|W)loadidx [o] {s} p i mem)
+
+// Remove zero extensions after argument load.
+(MOVBZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() == 1 -> x
+(MOVHZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() <= 2 -> x
+(MOVWZreg x:(Arg <t>)) && !t.IsSigned() && t.Size() <= 4 -> x
+
+// Remove sign extensions after argument load.
+(MOVBreg x:(Arg <t>)) && t.IsSigned() && t.Size() == 1 -> x
+(MOVHreg x:(Arg <t>)) && t.IsSigned() && t.Size() <= 2 -> x
+(MOVWreg x:(Arg <t>)) && t.IsSigned() && t.Size() <= 4 -> x
+
+// Fold zero extensions into constants.
+(MOVBZreg (MOVDconst [c])) -> (MOVDconst [int64( uint8(c))])
+(MOVHZreg (MOVDconst [c])) -> (MOVDconst [int64(uint16(c))])
+(MOVWZreg (MOVDconst [c])) -> (MOVDconst [int64(uint32(c))])
+
+// Fold sign extensions into constants.
+(MOVBreg (MOVDconst [c])) -> (MOVDconst [int64( int8(c))])
+(MOVHreg (MOVDconst [c])) -> (MOVDconst [int64(int16(c))])
+(MOVWreg (MOVDconst [c])) -> (MOVDconst [int64(int32(c))])
+
+// Remove zero extension of conditional move.
+// Note: only for MOVBZreg for now since it is added as part of 'if' statement lowering.
+(MOVBZreg x:(MOVD(LT|LE|GT|GE|EQ|NE|GTnoinv|GEnoinv) (MOVDconst [c]) (MOVDconst [d]) _))
+  && int64(uint8(c)) == c
+  && int64(uint8(d)) == d
+  && (!x.Type.IsSigned() || x.Type.Size() > 1)
+  -> x

 // Fold boolean tests into blocks.
(NE (CMPWconst [0] (MOVDLT (MOVDconst [0]) (MOVDconst [1]) cmp)) yes no) -> (LT cmp yes no) @@ -547,12 +608,12 @@ -> (S(LD|RD|RAD|LW|RW|RAW) x (ANDWconst [c&63] y)) (S(LD|RD|RAD|LW|RW|RAW) x (ANDWconst [c] y)) && c&63 == 63 -> (S(LD|RD|RAD|LW|RW|RAW) x y) -(SLD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SLD x y) -(SRD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRD x y) -(SRAD x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRAD x y) -(SLW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SLW x y) -(SRW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRW x y) -(SRAW x (MOV(D|W|H|B|WZ|HZ|BZ)reg y)) -> (SRAW x y) +(SLD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SLD x y) +(SRD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRD x y) +(SRAD x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRAD x y) +(SLW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SLW x y) +(SRW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRW x y) +(SRAW x (MOV(W|H|B|WZ|HZ|BZ)reg y)) -> (SRAW x y) // Constant rotate generation (RLL x (MOVDconst [c])) -> (RLLconst x [c&31]) @@ -615,99 +676,8 @@ (MOVDEQ x y (InvertFlags cmp)) -> (MOVDEQ x y cmp) (MOVDNE x y (InvertFlags cmp)) -> (MOVDNE x y cmp) -// don't extend after proper load -(MOVBreg x:(MOVBload _ _)) -> (MOVDreg x) -(MOVBZreg x:(MOVBZload _ _)) -> (MOVDreg x) -(MOVHreg x:(MOVBload _ _)) -> (MOVDreg x) -(MOVHreg x:(MOVBZload _ _)) -> (MOVDreg x) -(MOVHreg x:(MOVHload _ _)) -> (MOVDreg x) -(MOVHZreg x:(MOVBZload _ _)) -> (MOVDreg x) -(MOVHZreg x:(MOVHZload _ _)) -> (MOVDreg x) -(MOVWreg x:(MOVBload _ _)) -> (MOVDreg x) -(MOVWreg x:(MOVBZload _ _)) -> (MOVDreg x) -(MOVWreg x:(MOVHload _ _)) -> (MOVDreg x) -(MOVWreg x:(MOVHZload _ _)) -> (MOVDreg x) -(MOVWreg x:(MOVWload _ _)) -> (MOVDreg x) -(MOVWZreg x:(MOVBZload _ _)) -> (MOVDreg x) -(MOVWZreg x:(MOVHZload _ _)) -> (MOVDreg x) -(MOVWZreg x:(MOVWZload _ _)) -> (MOVDreg x) - -// don't extend if argument is already extended -(MOVBreg x:(Arg )) && is8BitInt(t) && isSigned(t) -> (MOVDreg x) -(MOVBZreg x:(Arg )) && is8BitInt(t) && !isSigned(t) -> (MOVDreg x) -(MOVHreg x:(Arg )) && (is8BitInt(t) || is16BitInt(t)) 
&& isSigned(t) -> (MOVDreg x) -(MOVHZreg x:(Arg )) && (is8BitInt(t) || is16BitInt(t)) && !isSigned(t) -> (MOVDreg x) -(MOVWreg x:(Arg )) && (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t) -> (MOVDreg x) -(MOVWZreg x:(Arg )) && (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t) -> (MOVDreg x) - -// fold double extensions -(MOVBreg x:(MOVBreg _)) -> (MOVDreg x) -(MOVBZreg x:(MOVBZreg _)) -> (MOVDreg x) -(MOVHreg x:(MOVBreg _)) -> (MOVDreg x) -(MOVHreg x:(MOVBZreg _)) -> (MOVDreg x) -(MOVHreg x:(MOVHreg _)) -> (MOVDreg x) -(MOVHZreg x:(MOVBZreg _)) -> (MOVDreg x) -(MOVHZreg x:(MOVHZreg _)) -> (MOVDreg x) -(MOVWreg x:(MOVBreg _)) -> (MOVDreg x) -(MOVWreg x:(MOVBZreg _)) -> (MOVDreg x) -(MOVWreg x:(MOVHreg _)) -> (MOVDreg x) -(MOVWreg x:(MOVHZreg _)) -> (MOVDreg x) -(MOVWreg x:(MOVWreg _)) -> (MOVDreg x) -(MOVWZreg x:(MOVBZreg _)) -> (MOVDreg x) -(MOVWZreg x:(MOVHZreg _)) -> (MOVDreg x) -(MOVWZreg x:(MOVWZreg _)) -> (MOVDreg x) - -(MOVBreg (MOVBZreg x)) -> (MOVBreg x) -(MOVBZreg (MOVBreg x)) -> (MOVBZreg x) -(MOVHreg (MOVHZreg x)) -> (MOVHreg x) -(MOVHZreg (MOVHreg x)) -> (MOVHZreg x) -(MOVWreg (MOVWZreg x)) -> (MOVWreg x) -(MOVWZreg (MOVWreg x)) -> (MOVWZreg x) - -// fold extensions into constants -(MOVBreg (MOVDconst [c])) -> (MOVDconst [int64(int8(c))]) -(MOVBZreg (MOVDconst [c])) -> (MOVDconst [int64(uint8(c))]) -(MOVHreg (MOVDconst [c])) -> (MOVDconst [int64(int16(c))]) -(MOVHZreg (MOVDconst [c])) -> (MOVDconst [int64(uint16(c))]) -(MOVWreg (MOVDconst [c])) -> (MOVDconst [int64(int32(c))]) -(MOVWZreg (MOVDconst [c])) -> (MOVDconst [int64(uint32(c))]) - -// sign extended loads -// Note: The combined instruction must end up in the same block -// as the original load. If not, we end up making a value with -// memory type live in two different blocks, which can lead to -// multiple memory values alive simultaneously. -// Make sure we don't combine these ops if the load has another use. 
-// This prevents a single load from being split into multiple loads -// which then might return different values. See test/atomicload.go. -(MOVBreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem) -(MOVBreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBload [off] {sym} ptr mem) -(MOVBZreg x:(MOVBZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem) -(MOVBZreg x:(MOVBload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZload [off] {sym} ptr mem) -(MOVHreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem) -(MOVHreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHload [off] {sym} ptr mem) -(MOVHZreg x:(MOVHZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem) -(MOVHZreg x:(MOVHload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZload [off] {sym} ptr mem) -(MOVWreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem) -(MOVWreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWload [off] {sym} ptr mem) -(MOVWZreg x:(MOVWZload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem) -(MOVWZreg x:(MOVWload [off] {sym} ptr mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZload [off] {sym} ptr mem) - -(MOVBreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem) -(MOVBreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBloadidx [off] {sym} ptr idx mem) -(MOVBZreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) -(MOVBZreg x:(MOVBloadidx [off] {sym} ptr idx mem)) && 
x.Uses == 1 && clobber(x) -> @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) -(MOVHreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem) -(MOVHreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHloadidx [off] {sym} ptr idx mem) -(MOVHZreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) -(MOVHZreg x:(MOVHloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) -(MOVWreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem) -(MOVWreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWloadidx [off] {sym} ptr idx mem) -(MOVWZreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) -(MOVWZreg x:(MOVWloadidx [off] {sym} ptr idx mem)) && x.Uses == 1 && clobber(x) -> @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) - // replace load from same location as preceding store with copy -(MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVDreg x) +(MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> x (MOVWload [off] {sym} ptr1 (MOVWstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVWreg x) (MOVHload [off] {sym} ptr1 (MOVHstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVHreg x) (MOVBload [off] {sym} ptr1 (MOVBstore [off] {sym} ptr2 x _)) && isSamePtr(ptr1, ptr2) -> (MOVBreg x) @@ -755,7 +725,7 @@ // remove unnecessary FPR <-> GPR moves (LDGR (LGDR x)) -> x -(LGDR (LDGR x)) -> (MOVDreg x) +(LGDR (LDGR x)) -> x // Don't extend before storing (MOVWstore [off] {sym} ptr (MOVWreg x) mem) -> (MOVWstore [off] {sym} ptr x mem) diff --git a/src/cmd/compile/internal/ssa/gen/S390XOps.go 
b/src/cmd/compile/internal/ssa/gen/S390XOps.go index 03c8b3de06..b064e46377 100644 --- a/src/cmd/compile/internal/ssa/gen/S390XOps.go +++ b/src/cmd/compile/internal/ssa/gen/S390XOps.go @@ -373,9 +373,6 @@ func init() { {name: "MOVHZreg", argLength: 1, reg: gp11sp, asm: "MOVHZ", typ: "UInt64"}, // zero extend arg0 from int16 to int64 {name: "MOVWreg", argLength: 1, reg: gp11sp, asm: "MOVW", typ: "Int64"}, // sign extend arg0 from int32 to int64 {name: "MOVWZreg", argLength: 1, reg: gp11sp, asm: "MOVWZ", typ: "UInt64"}, // zero extend arg0 from int32 to int64 - {name: "MOVDreg", argLength: 1, reg: gp11sp, asm: "MOVD"}, // move from arg0 - - {name: "MOVDnop", argLength: 1, reg: gp11, resultInArg0: true}, // nop, return arg0 in same register {name: "MOVDconst", reg: gp01, asm: "MOVD", typ: "UInt64", aux: "Int64", rematerializeable: true}, // auxint @@ -489,7 +486,7 @@ func init() { {name: "LoweredPanicBoundsB", argLength: 3, aux: "Int64", reg: regInfo{inputs: []regMask{r1, r2}}, typ: "Mem"}, // arg0=idx, arg1=len, arg2=mem, returns memory. AuxInt contains report code (see PanicBounds in generic.go). {name: "LoweredPanicBoundsC", argLength: 3, aux: "Int64", reg: regInfo{inputs: []regMask{r0, r1}}, typ: "Mem"}, // arg0=idx, arg1=len, arg2=mem, returns memory. AuxInt contains report code (see PanicBounds in generic.go). - // Constant conditon code values. The condition code can be 0, 1, 2 or 3. + // Constant condition code values. The condition code can be 0, 1, 2 or 3. {name: "FlagEQ"}, // CC=0 (equal) {name: "FlagLT"}, // CC=1 (less than) {name: "FlagGT"}, // CC=2 (greater than) @@ -571,6 +568,19 @@ func init() { clobberFlags: true, }, + // unsigned multiplication (64x64 → 128) + // + // Multiply the two 64-bit input operands together and place the 128-bit result into + // an even-odd register pair. The second register in the target pair also contains + // one of the input operands. 
Since we don't currently have a way to specify an + // even-odd register pair we hardcode this register pair as R2:R3. + { + name: "MLGR", + argLength: 2, + reg: regInfo{inputs: []regMask{gp, r3}, outputs: []regMask{r2, r3}}, + asm: "MLGR", + }, + // pseudo operations to sum the output of the POPCNT instruction {name: "SumBytes2", argLength: 1, typ: "UInt8"}, // sum the rightmost 2 bytes in arg0 ignoring overflow {name: "SumBytes4", argLength: 1, typ: "UInt8"}, // sum the rightmost 4 bytes in arg0 ignoring overflow diff --git a/src/cmd/compile/internal/ssa/gen/Wasm.rules b/src/cmd/compile/internal/ssa/gen/Wasm.rules index 72080703f5..c9dd6e8084 100644 --- a/src/cmd/compile/internal/ssa/gen/Wasm.rules +++ b/src/cmd/compile/internal/ssa/gen/Wasm.rules @@ -103,6 +103,8 @@ // Unsigned shifts need to return 0 if shift amount is >= width of shifted value. (Lsh64x64 x y) && shiftIsBounded(v) -> (I64Shl x y) +(Lsh64x64 x (I64Const [c])) && uint64(c) < 64 -> (I64Shl x (I64Const [c])) +(Lsh64x64 x (I64Const [c])) && uint64(c) >= 64 -> (I64Const [0]) (Lsh64x64 x y) -> (Select (I64Shl x y) (I64Const [0]) (I64LtU y (I64Const [64]))) (Lsh64x(32|16|8) x y) -> (Lsh64x64 x (ZeroExt(32|16|8)to64 y)) @@ -116,6 +118,8 @@ (Lsh8x(32|16|8) x y) -> (Lsh64x64 x (ZeroExt(32|16|8)to64 y)) (Rsh64Ux64 x y) && shiftIsBounded(v) -> (I64ShrU x y) +(Rsh64Ux64 x (I64Const [c])) && uint64(c) < 64 -> (I64ShrU x (I64Const [c])) +(Rsh64Ux64 x (I64Const [c])) && uint64(c) >= 64 -> (I64Const [0]) (Rsh64Ux64 x y) -> (Select (I64ShrU x y) (I64Const [0]) (I64LtU y (I64Const [64]))) (Rsh64Ux(32|16|8) x y) -> (Rsh64Ux64 x (ZeroExt(32|16|8)to64 y)) @@ -132,6 +136,8 @@ // We implement this by setting the shift value to (width - 1) if the shift value is >= width. 
(Rsh64x64 x y) && shiftIsBounded(v) -> (I64ShrS x y) +(Rsh64x64 x (I64Const [c])) && uint64(c) < 64 -> (I64ShrS x (I64Const [c])) +(Rsh64x64 x (I64Const [c])) && uint64(c) >= 64 -> (I64ShrS x (I64Const [63])) (Rsh64x64 x y) -> (I64ShrS x (Select y (I64Const [63]) (I64LtU y (I64Const [64])))) (Rsh64x(32|16|8) x y) -> (Rsh64x64 x (ZeroExt(32|16|8)to64 y)) diff --git a/src/cmd/compile/internal/ssa/gen/genericOps.go b/src/cmd/compile/internal/ssa/gen/genericOps.go index 8933aa51ef..3ca773b595 100644 --- a/src/cmd/compile/internal/ssa/gen/genericOps.go +++ b/src/cmd/compile/internal/ssa/gen/genericOps.go @@ -484,7 +484,7 @@ var genericOps = []opData{ {name: "VarDef", argLength: 1, aux: "Sym", typ: "Mem", symEffect: "None", zeroWidth: true}, // aux is a *gc.Node of a variable that is about to be initialized. arg0=mem, returns mem {name: "VarKill", argLength: 1, aux: "Sym", symEffect: "None"}, // aux is a *gc.Node of a variable that is known to be dead. arg0=mem, returns mem - // TODO: what's the difference betweeen VarLive and KeepAlive? + // TODO: what's the difference between VarLive and KeepAlive? {name: "VarLive", argLength: 1, aux: "Sym", symEffect: "Read", zeroWidth: true}, // aux is a *gc.Node of a variable that must be kept live. arg0=mem, returns mem {name: "KeepAlive", argLength: 2, typ: "Mem", zeroWidth: true}, // arg[0] is a value that must be kept alive until this mark. arg[1]=mem, returns mem diff --git a/src/cmd/compile/internal/ssa/gen/main.go b/src/cmd/compile/internal/ssa/gen/main.go index 9c0e0904b2..ba17148fc9 100644 --- a/src/cmd/compile/internal/ssa/gen/main.go +++ b/src/cmd/compile/internal/ssa/gen/main.go @@ -74,7 +74,7 @@ type regInfo struct { // clobbers encodes the set of registers that are overwritten by // the instruction (other than the output registers). clobbers regMask - // outpus[i] encodes the set of registers allowed for the i'th output. + // outputs[i] encodes the set of registers allowed for the i'th output. 
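The constant-shift folds added to Wasm.rules above can be modeled in plain Go. This is a hedged sketch of the semantics the rules encode, not compiler code: a constant shift below the width lowers directly; an oversized unsigned shift folds to 0; and an oversized arithmetic right shift clamps to width-1 so every bit becomes a copy of the sign bit.

```go
package main

import "fmt"

// lsh64 models (Lsh64x64 x (I64Const [c])): c >= 64 folds to 0,
// otherwise the shift lowers directly to I64Shl.
func lsh64(x, c uint64) uint64 {
	if c >= 64 {
		return 0
	}
	return x << c
}

// rsh64S models (Rsh64x64 x (I64Const [c])): c >= 64 clamps to 63,
// filling the result with the sign bit, as the comment above describes.
func rsh64S(x int64, c uint64) int64 {
	if c >= 64 {
		c = 63
	}
	return x >> c
}

func main() {
	fmt.Println(lsh64(1, 3))
	fmt.Println(lsh64(1, 100))
	fmt.Println(rsh64S(-8, 2))
	fmt.Println(rsh64S(-8, 99))
}
```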
outputs []regMask } @@ -430,7 +430,7 @@ func (a arch) Name() string { // // Note that there is no limit on the concurrency at the moment. On a four-core // laptop at the time of writing, peak RSS usually reached ~230MiB, which seems -// doable by practially any machine nowadays. If that stops being the case, we +// doable by practically any machine nowadays. If that stops being the case, we // can cap this func to a fixed number of architectures being generated at once. func genLower() { var wg sync.WaitGroup diff --git a/src/cmd/compile/internal/ssa/gen/rulegen.go b/src/cmd/compile/internal/ssa/gen/rulegen.go index ad09975a6d..3a18ca252c 100644 --- a/src/cmd/compile/internal/ssa/gen/rulegen.go +++ b/src/cmd/compile/internal/ssa/gen/rulegen.go @@ -21,12 +21,13 @@ import ( "go/parser" "go/printer" "go/token" - "go/types" "io" "log" "os" + "path" "regexp" "sort" + "strconv" "strings" "golang.org/x/tools/go/ast/astutil" @@ -266,38 +267,38 @@ func genRulesSuffix(arch arch, suff string) { } tfile := fset.File(file.Pos()) - for n := 0; n < 3; n++ { - unused := make(map[token.Pos]bool) - conf := types.Config{Error: func(err error) { - if terr, ok := err.(types.Error); ok && strings.Contains(terr.Msg, "not used") { - unused[terr.Pos] = true - } - }} - _, _ = conf.Check("ssa", fset, []*ast.File{file}, nil) - if len(unused) == 0 { - break - } - pre := func(c *astutil.Cursor) bool { - if node := c.Node(); node != nil && unused[node.Pos()] { - c.Delete() - // Unused imports and declarations use exactly - // one line. Prevent leaving an empty line. - tfile.MergeLine(tfile.Position(node.Pos()).Line) - return false - } + // First, use unusedInspector to find the unused declarations by their + // start position. + u := unusedInspector{unused: make(map[token.Pos]bool)} + u.node(file) + + // Then, delete said nodes via astutil.Apply. 
+ pre := func(c *astutil.Cursor) bool { + node := c.Node() + if node == nil { return true } - post := func(c *astutil.Cursor) bool { - switch node := c.Node().(type) { - case *ast.GenDecl: - if len(node.Specs) == 0 { - c.Delete() - } + if u.unused[node.Pos()] { + c.Delete() + // Unused imports and declarations use exactly + // one line. Prevent leaving an empty line. + tfile.MergeLine(tfile.Position(node.Pos()).Line) + return false + } + return true + } + post := func(c *astutil.Cursor) bool { + switch node := c.Node().(type) { + case *ast.GenDecl: + if len(node.Specs) == 0 { + // Don't leave a broken or empty GenDecl behind, + // such as "import ()". + c.Delete() } - return true } - file = astutil.Apply(file, pre, post).(*ast.File) + return true } + file = astutil.Apply(file, pre, post).(*ast.File) // Write the well-formatted source to file f, err := os.Create("../rewrite" + arch.name + suff + ".go") @@ -319,6 +320,234 @@ func genRulesSuffix(arch arch, suff string) { } } +// unusedInspector can be used to detect unused variables and imports in an +// ast.Node via its node method. The result is available in the "unused" map. +// +// note that unusedInspector is lazy and best-effort; it only supports the node +// types and patterns used by the rulegen program. +type unusedInspector struct { + // scope is the current scope, which can never be nil when a declaration + // is encountered. That is, the unusedInspector.node entrypoint should + // generally be an entire file or block. + scope *scope + + // unused is the resulting set of unused declared names, indexed by the + // starting position of the node that declared the name. + unused map[token.Pos]bool + + // defining is the object currently being defined; this is useful so + // that if "foo := bar" is unused and removed, we can then detect if + // "bar" becomes unused as well. + defining *object +} + +// scoped opens a new scope when called, and returns a function which closes +// that same scope. 
When a scope is closed, unused variables are recorded. +func (u *unusedInspector) scoped() func() { + outer := u.scope + u.scope = &scope{outer: outer, objects: map[string]*object{}} + return func() { + for anyUnused := true; anyUnused; { + anyUnused = false + for _, obj := range u.scope.objects { + if obj.numUses > 0 { + continue + } + u.unused[obj.pos] = true + for _, used := range obj.used { + if used.numUses--; used.numUses == 0 { + anyUnused = true + } + } + // We've decremented numUses for each of the + // objects in used. Zero this slice too, to keep + // everything consistent. + obj.used = nil + } + } + u.scope = outer + } +} + +func (u *unusedInspector) exprs(list []ast.Expr) { + for _, x := range list { + u.node(x) + } +} + +func (u *unusedInspector) stmts(list []ast.Stmt) { + for _, x := range list { + u.node(x) + } +} + +func (u *unusedInspector) decls(list []ast.Decl) { + for _, x := range list { + u.node(x) + } +} + +func (u *unusedInspector) node(node ast.Node) { + switch node := node.(type) { + case *ast.File: + defer u.scoped()() + u.decls(node.Decls) + case *ast.GenDecl: + for _, spec := range node.Specs { + u.node(spec) + } + case *ast.ImportSpec: + impPath, _ := strconv.Unquote(node.Path.Value) + name := path.Base(impPath) + u.scope.objects[name] = &object{ + name: name, + pos: node.Pos(), + } + case *ast.FuncDecl: + u.node(node.Type) + if node.Body != nil { + u.node(node.Body) + } + case *ast.FuncType: + if node.Params != nil { + u.node(node.Params) + } + if node.Results != nil { + u.node(node.Results) + } + case *ast.FieldList: + for _, field := range node.List { + u.node(field) + } + case *ast.Field: + u.node(node.Type) + + // statements + + case *ast.BlockStmt: + defer u.scoped()() + u.stmts(node.List) + case *ast.IfStmt: + if node.Init != nil { + u.node(node.Init) + } + u.node(node.Cond) + u.node(node.Body) + if node.Else != nil { + u.node(node.Else) + } + case *ast.ForStmt: + if node.Init != nil { + u.node(node.Init) + } + if node.Cond != 
nil { + u.node(node.Cond) + } + if node.Post != nil { + u.node(node.Post) + } + u.node(node.Body) + case *ast.SwitchStmt: + if node.Init != nil { + u.node(node.Init) + } + if node.Tag != nil { + u.node(node.Tag) + } + u.node(node.Body) + case *ast.CaseClause: + u.exprs(node.List) + defer u.scoped()() + u.stmts(node.Body) + case *ast.BranchStmt: + case *ast.ExprStmt: + u.node(node.X) + case *ast.AssignStmt: + if node.Tok != token.DEFINE { + u.exprs(node.Rhs) + u.exprs(node.Lhs) + break + } + if len(node.Lhs) != 1 { + panic("no support for := with multiple names") + } + + name := node.Lhs[0].(*ast.Ident) + obj := &object{ + name: name.Name, + pos: name.NamePos, + } + + old := u.defining + u.defining = obj + u.exprs(node.Rhs) + u.defining = old + + u.scope.objects[name.Name] = obj + case *ast.ReturnStmt: + u.exprs(node.Results) + + // expressions + + case *ast.CallExpr: + u.node(node.Fun) + u.exprs(node.Args) + case *ast.SelectorExpr: + u.node(node.X) + case *ast.UnaryExpr: + u.node(node.X) + case *ast.BinaryExpr: + u.node(node.X) + u.node(node.Y) + case *ast.StarExpr: + u.node(node.X) + case *ast.ParenExpr: + u.node(node.X) + case *ast.IndexExpr: + u.node(node.X) + u.node(node.Index) + case *ast.TypeAssertExpr: + u.node(node.X) + u.node(node.Type) + case *ast.Ident: + if obj := u.scope.Lookup(node.Name); obj != nil { + obj.numUses++ + if u.defining != nil { + u.defining.used = append(u.defining.used, obj) + } + } + case *ast.BasicLit: + default: + panic(fmt.Sprintf("unhandled node: %T", node)) + } +} + +// scope keeps track of a certain scope and its declared names, as well as the +// outer (parent) scope. 
+type scope struct { + outer *scope // can be nil, if this is the top-level scope + objects map[string]*object // indexed by each declared name +} + +func (s *scope) Lookup(name string) *object { + if obj := s.objects[name]; obj != nil { + return obj + } + if s.outer == nil { + return nil + } + return s.outer.Lookup(name) +} + +// object keeps track of a declared name, such as a variable or import. +type object struct { + name string + pos token.Pos // start position of the node declaring the object + + numUses int // number of times this object is used + used []*object // objects that its declaration makes use of +} + func fprint(w io.Writer, n Node) { switch n := n.(type) { case *File: @@ -396,7 +625,7 @@ type Node interface{} // ast.Stmt under some limited circumstances. type Statement interface{} -// bodyBase is shared by all of our statement psuedo-node types which can +// bodyBase is shared by all of our statement pseudo-node types which can // contain other statements. type bodyBase struct { list []Statement diff --git a/src/cmd/compile/internal/ssa/loopbce.go b/src/cmd/compile/internal/ssa/loopbce.go index 5f02643ccd..092e7aa35b 100644 --- a/src/cmd/compile/internal/ssa/loopbce.go +++ b/src/cmd/compile/internal/ssa/loopbce.go @@ -186,7 +186,7 @@ func findIndVar(f *Func) []indVar { // Handle induction variables of these forms. // KNN is known-not-negative. // SIGNED ARITHMETIC ONLY. (see switch on b.Control.Op above) - // Possibilitis for KNN are len and cap; perhaps we can infer others. + // Possibilities for KNN are len and cap; perhaps we can infer others. // for i := 0; i <= KNN-k ; i += k // for i := 0; i < KNN-(k-1); i += k // Also handle decreasing. 
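The use-count cascade implemented by unusedInspector's `scoped` closure and the `object` type can be sketched in isolation. This is a simplified model, not the rulegen code itself: removing an unused declaration decrements the use counts of the objects its right-hand side referenced, which can make further declarations unused (for `a := 1; b := a`, removing the unused `b` makes `a` unused too).

```go
package main

import "fmt"

// object mirrors rulegen's bookkeeping: a use count plus the set of
// objects this declaration itself makes use of.
type object struct {
	name    string
	numUses int
	used    []*object
}

// sweep repeatedly removes zero-use objects, decrementing the counts of
// their dependencies, until a pass finds nothing new to remove.
func sweep(objs []*object) []string {
	var unused []string
	for anyUnused := true; anyUnused; {
		anyUnused = false
		for _, obj := range objs {
			if obj.numUses != 0 {
				continue
			}
			obj.numUses = -1 // mark as already removed
			unused = append(unused, obj.name)
			for _, dep := range obj.used {
				if dep.numUses--; dep.numUses == 0 {
					anyUnused = true
				}
			}
			obj.used = nil
		}
	}
	return unused
}

func main() {
	a := &object{name: "a", numUses: 1} // used once, by b
	b := &object{name: "b", used: []*object{a}}
	fmt.Println(sweep([]*object{a, b}))
}
```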
diff --git a/src/cmd/compile/internal/ssa/opGen.go b/src/cmd/compile/internal/ssa/opGen.go index 57ca6b01cc..de43cd14e8 100644 --- a/src/cmd/compile/internal/ssa/opGen.go +++ b/src/cmd/compile/internal/ssa/opGen.go @@ -2085,8 +2085,6 @@ const ( OpS390XMOVHZreg OpS390XMOVWreg OpS390XMOVWZreg - OpS390XMOVDreg - OpS390XMOVDnop OpS390XMOVDconst OpS390XLDGR OpS390XLGDR @@ -2179,6 +2177,7 @@ const ( OpS390XLoweredAtomicExchange64 OpS390XFLOGR OpS390XPOPCNT + OpS390XMLGR OpS390XSumBytes2 OpS390XSumBytes4 OpS390XSumBytes8 @@ -28087,32 +28086,6 @@ var opcodeTable = [...]opInfo{ }, }, }, - { - name: "MOVDreg", - argLen: 1, - asm: s390x.AMOVD, - reg: regInfo{ - inputs: []inputInfo{ - {0, 56319}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 SP - }, - outputs: []outputInfo{ - {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 - }, - }, - }, - { - name: "MOVDnop", - argLen: 1, - resultInArg0: true, - reg: regInfo{ - inputs: []inputInfo{ - {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 - }, - outputs: []outputInfo{ - {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 - }, - }, - }, { name: "MOVDconst", auxType: auxInt64, @@ -29388,6 +29361,21 @@ var opcodeTable = [...]opInfo{ }, }, }, + { + name: "MLGR", + argLen: 2, + asm: s390x.AMLGR, + reg: regInfo{ + inputs: []inputInfo{ + {1, 8}, // R3 + {0, 23551}, // R0 R1 R2 R3 R4 R5 R6 R7 R8 R9 R11 R12 R14 + }, + outputs: []outputInfo{ + {0, 4}, // R2 + {1, 8}, // R3 + }, + }, + }, { name: "SumBytes2", argLen: 1, diff --git a/src/cmd/compile/internal/ssa/regalloc.go b/src/cmd/compile/internal/ssa/regalloc.go index 8abbf61507..3b7049b823 100644 --- a/src/cmd/compile/internal/ssa/regalloc.go +++ b/src/cmd/compile/internal/ssa/regalloc.go @@ -411,7 +411,7 @@ func (s *regAllocState) allocReg(mask regMask, v *Value) register { if s.f.Config.ctxt.Arch.Arch == sys.ArchWasm { // TODO(neelance): In theory this should never happen, because all wasm registers are equal. 
- // So if there is still a free register, the allocation should have picked that one in the first place insead of + // So if there is still a free register, the allocation should have picked that one in the first place instead of // trying to kick some other value out. In practice, this case does happen and it breaks the stack optimization. s.freeReg(r) return r @@ -489,7 +489,7 @@ func (s *regAllocState) allocValToReg(v *Value, mask regMask, nospill bool, pos } var r register - // If nospill is set, the value is used immedately, so it can live on the WebAssembly stack. + // If nospill is set, the value is used immediately, so it can live on the WebAssembly stack. onWasmStack := nospill && s.f.Config.ctxt.Arch.Arch == sys.ArchWasm if !onWasmStack { // Allocate a register. diff --git a/src/cmd/compile/internal/ssa/rewriteS390X.go b/src/cmd/compile/internal/ssa/rewriteS390X.go index 2de5e1b83f..264bf255ce 100644 --- a/src/cmd/compile/internal/ssa/rewriteS390X.go +++ b/src/cmd/compile/internal/ssa/rewriteS390X.go @@ -335,6 +335,8 @@ func rewriteValueS390X(v *Value) bool { return rewriteValueS390X_OpMul64_0(v) case OpMul64F: return rewriteValueS390X_OpMul64F_0(v) + case OpMul64uhilo: + return rewriteValueS390X_OpMul64uhilo_0(v) case OpMul8: return rewriteValueS390X_OpMul8_0(v) case OpNeg16: @@ -560,13 +562,13 @@ func rewriteValueS390X(v *Value) bool { case OpS390XMOVBZloadidx: return rewriteValueS390X_OpS390XMOVBZloadidx_0(v) case OpS390XMOVBZreg: - return rewriteValueS390X_OpS390XMOVBZreg_0(v) || rewriteValueS390X_OpS390XMOVBZreg_10(v) + return rewriteValueS390X_OpS390XMOVBZreg_0(v) || rewriteValueS390X_OpS390XMOVBZreg_10(v) || rewriteValueS390X_OpS390XMOVBZreg_20(v) case OpS390XMOVBload: return rewriteValueS390X_OpS390XMOVBload_0(v) case OpS390XMOVBloadidx: return rewriteValueS390X_OpS390XMOVBloadidx_0(v) case OpS390XMOVBreg: - return rewriteValueS390X_OpS390XMOVBreg_0(v) + return rewriteValueS390X_OpS390XMOVBreg_0(v) || rewriteValueS390X_OpS390XMOVBreg_10(v) case 
OpS390XMOVBstore: return rewriteValueS390X_OpS390XMOVBstore_0(v) || rewriteValueS390X_OpS390XMOVBstore_10(v) case OpS390XMOVBstoreconst: @@ -591,10 +593,6 @@ func rewriteValueS390X(v *Value) bool { return rewriteValueS390X_OpS390XMOVDload_0(v) case OpS390XMOVDloadidx: return rewriteValueS390X_OpS390XMOVDloadidx_0(v) - case OpS390XMOVDnop: - return rewriteValueS390X_OpS390XMOVDnop_0(v) || rewriteValueS390X_OpS390XMOVDnop_10(v) - case OpS390XMOVDreg: - return rewriteValueS390X_OpS390XMOVDreg_0(v) || rewriteValueS390X_OpS390XMOVDreg_10(v) case OpS390XMOVDstore: return rewriteValueS390X_OpS390XMOVDstore_0(v) case OpS390XMOVDstoreconst: @@ -638,7 +636,7 @@ func rewriteValueS390X(v *Value) bool { case OpS390XMOVWloadidx: return rewriteValueS390X_OpS390XMOVWloadidx_0(v) case OpS390XMOVWreg: - return rewriteValueS390X_OpS390XMOVWreg_0(v) || rewriteValueS390X_OpS390XMOVWreg_10(v) + return rewriteValueS390X_OpS390XMOVWreg_0(v) || rewriteValueS390X_OpS390XMOVWreg_10(v) || rewriteValueS390X_OpS390XMOVWreg_20(v) case OpS390XMOVWstore: return rewriteValueS390X_OpS390XMOVWstore_0(v) || rewriteValueS390X_OpS390XMOVWstore_10(v) case OpS390XMOVWstoreconst: @@ -682,23 +680,23 @@ func rewriteValueS390X(v *Value) bool { case OpS390XRLLG: return rewriteValueS390X_OpS390XRLLG_0(v) case OpS390XSLD: - return rewriteValueS390X_OpS390XSLD_0(v) || rewriteValueS390X_OpS390XSLD_10(v) + return rewriteValueS390X_OpS390XSLD_0(v) case OpS390XSLW: - return rewriteValueS390X_OpS390XSLW_0(v) || rewriteValueS390X_OpS390XSLW_10(v) + return rewriteValueS390X_OpS390XSLW_0(v) case OpS390XSRAD: - return rewriteValueS390X_OpS390XSRAD_0(v) || rewriteValueS390X_OpS390XSRAD_10(v) + return rewriteValueS390X_OpS390XSRAD_0(v) case OpS390XSRADconst: return rewriteValueS390X_OpS390XSRADconst_0(v) case OpS390XSRAW: - return rewriteValueS390X_OpS390XSRAW_0(v) || rewriteValueS390X_OpS390XSRAW_10(v) + return rewriteValueS390X_OpS390XSRAW_0(v) case OpS390XSRAWconst: return rewriteValueS390X_OpS390XSRAWconst_0(v) case 
OpS390XSRD: - return rewriteValueS390X_OpS390XSRD_0(v) || rewriteValueS390X_OpS390XSRD_10(v) + return rewriteValueS390X_OpS390XSRD_0(v) case OpS390XSRDconst: return rewriteValueS390X_OpS390XSRDconst_0(v) case OpS390XSRW: - return rewriteValueS390X_OpS390XSRW_0(v) || rewriteValueS390X_OpS390XSRW_10(v) + return rewriteValueS390X_OpS390XSRW_0(v) case OpS390XSTM2: return rewriteValueS390X_OpS390XSTM2_0(v) case OpS390XSTMG2: @@ -4613,6 +4611,19 @@ func rewriteValueS390X_OpMul64F_0(v *Value) bool { return true } } +func rewriteValueS390X_OpMul64uhilo_0(v *Value) bool { + // match: (Mul64uhilo x y) + // cond: + // result: (MLGR x y) + for { + y := v.Args[1] + x := v.Args[0] + v.reset(OpS390XMLGR) + v.AddArg(x) + v.AddArg(y) + return true + } +} func rewriteValueS390X_OpMul8_0(v *Value) bool { // match: (Mul8 x y) // cond: @@ -10676,14 +10687,15 @@ func rewriteValueS390X_OpS390XLEDBR_0(v *Value) bool { func rewriteValueS390X_OpS390XLGDR_0(v *Value) bool { // match: (LGDR (LDGR x)) // cond: - // result: (MOVDreg x) + // result: x for { v_0 := v.Args[0] if v_0.Op != OpS390XLDGR { break } x := v_0.Args[0] - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } @@ -10953,116 +10965,248 @@ func rewriteValueS390X_OpS390XMOVBZloadidx_0(v *Value) bool { return false } func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool { - // match: (MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + b := v.Block + // match: (MOVBZreg e:(MOVBreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVDLT { + e := v.Args[0] + if e.Op != OpS390XMOVBreg { break } - _ = x.Args[2] - x_0 := x.Args[0] - if x_0.Op != OpS390XMOVDconst { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - c := x_0.AuxInt - x_1 := x.Args[1] - if x_1.Op != OpS390XMOVDconst { + v.reset(OpS390XMOVBZreg) + v.AddArg(x) + return true + } + // match: (MOVBZreg 
e:(MOVHreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVHreg { break } - d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVBZreg) v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg e:(MOVWreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVDLE { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { break } - _ = x.Args[2] - x_0 := x.Args[0] - if x_0.Op != OpS390XMOVDconst { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - c := x_0.AuxInt - x_1 := x.Args[1] - if x_1.Op != OpS390XMOVDconst { + v.reset(OpS390XMOVBZreg) + v.AddArg(x) + return true + } + // match: (MOVBZreg e:(MOVBZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVBZreg { break } - d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVBZreg) v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg e:(MOVHZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVHZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBZreg) + v.AddArg(x) + return true + } + // match: (MOVBZreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBZreg) + v.AddArg(x) + return true + } + // 
match: (MOVBZreg x:(MOVBZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDGT { + if x.Op != OpS390XMOVBZload { break } - _ = x.Args[2] - x_0 := x.Args[0] - if x_0.Op != OpS390XMOVDconst { + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - c := x_0.AuxInt - x_1 := x.Args[1] - if x_1.Op != OpS390XMOVDconst { + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVBZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBZloadidx { break } - d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDGE { + if x.Op != OpS390XMOVBZloadidx { break } _ = x.Args[2] - x_0 := x.Args[0] - if x_0.Op != OpS390XMOVDconst { + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - c := x_0.AuxInt - x_1 := x.Args[1] - if x_1.Op != OpS390XMOVDconst { + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVBZreg x:(MOVBload [o] {s} p mem)) + // cond: x.Uses == 1 && clobber(x) + // result: @x.Block (MOVBZload [o] {s} p mem) + for { + t := v.Type + x := v.Args[0] + if x.Op != OpS390XMOVBload { break } - d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + o := x.AuxInt + s := x.Aux + mem := x.Args[1] + p := x.Args[0] + if !(x.Uses == 1 && clobber(x)) { + break + } + b = x.Block + v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t) + v.reset(OpCopy) + 
v.AddArg(v0) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(mem) + return true + } + return false +} +func rewriteValueS390X_OpS390XMOVBZreg_10(v *Value) bool { + b := v.Block + // match: (MOVBZreg x:(MOVBloadidx [o] {s} p i mem)) + // cond: x.Uses == 1 && clobber(x) + // result: @x.Block (MOVBZloadidx [o] {s} p i mem) + for { + t := v.Type + x := v.Args[0] + if x.Op != OpS390XMOVBloadidx { + break + } + o := x.AuxInt + s := x.Aux + mem := x.Args[2] + p := x.Args[0] + i := x.Args[1] + if !(x.Uses == 1 && clobber(x)) { + break + } + b = x.Block + v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t) + v.reset(OpCopy) + v.AddArg(v0) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) + v0.AddArg(mem) + return true + } + // match: (MOVBZreg x:(Arg )) + // cond: !t.IsSigned() && t.Size() == 1 + // result: x + for { + x := v.Args[0] + if x.Op != OpArg { + break + } + t := x.Type + if !(!t.IsSigned() && t.Size() == 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64( uint8(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(uint8(c)) + return true + } + // match: (MOVBZreg x:(MOVDLT (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDEQ { + if x.Op != OpS390XMOVDLT { break } _ = x.Args[2] @@ -11076,19 +11220,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool { break } d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } 
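The rewrites above replace extension ops with plain copies when the input is already known to fit — for example `(MOVBZreg x:(MOVBZload _ _)) → x`. A hedged illustration in plain Go of why dropping the zero-extension is sound (illustrative only, not compiler code):

```go
package main

import "fmt"

func main() {
	// A byte load (MOVBZload) already zero-extends into the full
	// 64-bit register, so a following explicit zero-extension
	// (MOVBZreg) is a no-op: truncating to uint8 and widening again
	// yields the same value.
	var b uint8 = 0xAB
	x := uint64(b)        // models MOVBZload: upper 56 bits are zero
	y := uint64(uint8(x)) // models the redundant MOVBZreg
	fmt.Println(x == y, x) // true 171
}
```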
- v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg x:(MOVDLE (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDNE { + if x.Op != OpS390XMOVDLE { break } _ = x.Args[2] @@ -11102,19 +11247,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool { break } d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg x:(MOVDGT (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDGTnoinv { + if x.Op != OpS390XMOVDGT { break } _ = x.Args[2] @@ -11128,19 +11274,20 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool { break } d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _)) - // cond: int64(uint8(c)) == c && int64(uint8(d)) == d - // result: (MOVDreg x) + // match: (MOVBZreg x:(MOVDGE (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || 
x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVDGEnoinv { + if x.Op != OpS390XMOVDGE { break } _ = x.Args[2] @@ -11154,187 +11301,125 @@ func rewriteValueS390X_OpS390XMOVBZreg_0(v *Value) bool { break } d := x_1.AuxInt - if !(int64(uint8(c)) == c && int64(uint8(d)) == d) { + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVBZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVBZreg x:(MOVDEQ (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZload { + if x.Op != OpS390XMOVDEQ { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) - v.AddArg(x) - return true - } - // match: (MOVBZreg x:(Arg )) - // cond: is8BitInt(t) && !isSigned(t) - // result: (MOVDreg x) - for { - x := v.Args[0] - if x.Op != OpArg { + _ = x.Args[2] + x_0 := x.Args[0] + if x_0.Op != OpS390XMOVDconst { break } - t := x.Type - if !(is8BitInt(t) && !isSigned(t)) { + c := x_0.AuxInt + x_1 := x.Args[1] + if x_1.Op != OpS390XMOVDconst { break } - v.reset(OpS390XMOVDreg) - v.AddArg(x) - return true - } - return false -} -func rewriteValueS390X_OpS390XMOVBZreg_10(v *Value) bool { - b := v.Block - typ := &b.Func.Config.Types - // match: (MOVBZreg x:(MOVBZreg _)) - // cond: - // result: (MOVDreg x) - for { - x := v.Args[0] - if x.Op != OpS390XMOVBZreg { + d := x_1.AuxInt + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVBZreg (MOVBreg x)) - // cond: - // result: (MOVBZreg x) + // match: (MOVBZreg x:(MOVDNE (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d 
&& (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVBreg { + x := v.Args[0] + if x.Op != OpS390XMOVDNE { break } - x := v_0.Args[0] - v.reset(OpS390XMOVBZreg) - v.AddArg(x) - return true - } - // match: (MOVBZreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(uint8(c))]) - for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + _ = x.Args[2] + x_0 := x.Args[0] + if x_0.Op != OpS390XMOVDconst { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(uint8(c)) - return true - } - // match: (MOVBZreg x:(MOVBZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZload [off] {sym} ptr mem) - for { - x := v.Args[0] - if x.Op != OpS390XMOVBZload { + c := x_0.AuxInt + x_1 := x.Args[1] + if x_1.Op != OpS390XMOVDconst { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + d := x_1.AuxInt + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVBload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZload [off] {sym} ptr mem) + // match: (MOVBZreg x:(MOVDGTnoinv (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBload { + if x.Op != OpS390XMOVDGTnoinv { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + x_0 := x.Args[0] + if x_0.Op != OpS390XMOVDconst { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, v.Type) - v.reset(OpCopy) 
- v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVBZreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) - for { - x := v.Args[0] - if x.Op != OpS390XMOVBZloadidx { + c := x_0.AuxInt + x_1 := x.Args[1] + if x_1.Op != OpS390XMOVDconst { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + d := x_1.AuxInt + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVBZreg x:(MOVBloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) + return false +} +func rewriteValueS390X_OpS390XMOVBZreg_20(v *Value) bool { + b := v.Block + typ := &b.Func.Config.Types + // match: (MOVBZreg x:(MOVDGEnoinv (MOVDconst [c]) (MOVDconst [d]) _)) + // cond: int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBloadidx { + if x.Op != OpS390XMOVDGEnoinv { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + x_0 := x.Args[0] + if x_0.Op != OpS390XMOVDconst { + break + } + c := x_0.AuxInt + x_1 := x.Args[1] + if x_1.Op != OpS390XMOVDconst { + break + } + d := x_1.AuxInt + if !(int64(uint8(c)) == c && int64(uint8(d)) == d && (!x.Type.IsSigned() || x.Type.Size() > 1)) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = 
sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } // match: (MOVBZreg (ANDWconst [m] x)) @@ -11589,176 +11674,240 @@ func rewriteValueS390X_OpS390XMOVBloadidx_0(v *Value) bool { } func rewriteValueS390X_OpS390XMOVBreg_0(v *Value) bool { b := v.Block - typ := &b.Func.Config.Types - // match: (MOVBreg x:(MOVBload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVBreg e:(MOVBreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBload { + e := v.Args[0] + if e.Op != OpS390XMOVBreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBreg) v.AddArg(x) return true } - // match: (MOVBreg x:(Arg )) - // cond: is8BitInt(t) && isSigned(t) - // result: (MOVDreg x) + // match: (MOVBreg e:(MOVHreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) for { - x := v.Args[0] - if x.Op != OpArg { + e := v.Args[0] + if e.Op != OpS390XMOVHreg { break } - t := x.Type - if !(is8BitInt(t) && isSigned(t)) { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVBreg) v.AddArg(x) return true } - // match: (MOVBreg x:(MOVBreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVBreg e:(MOVWreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBreg { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVBreg) v.AddArg(x) return true } - // match: (MOVBreg (MOVBZreg x)) - // cond: + // match: (MOVBreg e:(MOVBZreg x)) + // cond: clobberIfDead(e) // result: (MOVBreg x) for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVBZreg { + e := v.Args[0] + if e.Op != OpS390XMOVBZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { break } - x := v_0.Args[0] v.reset(OpS390XMOVBreg) v.AddArg(x) return true } - // 
match: (MOVBreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(int8(c))]) + // match: (MOVBreg e:(MOVHZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + e := v.Args[0] + if e.Op != OpS390XMOVHZreg { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(int8(c)) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBreg) + v.AddArg(x) return true } - // match: (MOVBreg x:(MOVBZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBload [off] {sym} ptr mem) + // match: (MOVBreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBreg) + v.AddArg(x) + return true + } + // match: (MOVBreg x:(MOVBload _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZload { + if x.Op != OpS390XMOVBload { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[1] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVBreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBloadidx { + break + } + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVBreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBloadidx { + break + } 
+ _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVBreg x:(MOVBload [off] {sym} ptr mem)) + // match: (MOVBreg x:(MOVBZload [o] {s} p mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBload [off] {sym} ptr mem) + // result: @x.Block (MOVBload [o] {s} p mem) for { + t := v.Type x := v.Args[0] - if x.Op != OpS390XMOVBload { + if x.Op != OpS390XMOVBZload { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[1] - ptr := x.Args[0] + p := x.Args[0] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBload, v.Type) + v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) v0.AddArg(mem) return true } - // match: (MOVBreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) + return false +} +func rewriteValueS390X_OpS390XMOVBreg_10(v *Value) bool { + b := v.Block + typ := &b.Func.Config.Types + // match: (MOVBreg x:(MOVBZloadidx [o] {s} p i mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBloadidx [off] {sym} ptr idx mem) + // result: @x.Block (MOVBloadidx [o] {s} p i mem) for { + t := v.Type x := v.Args[0] if x.Op != OpS390XMOVBZloadidx { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] + p := x.Args[0] + i := x.Args[1] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, v.Type) + v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) v0.AddArg(mem) return true } - // match: (MOVBreg x:(MOVBloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBloadidx 
[off] {sym} ptr idx mem) + // match: (MOVBreg x:(Arg )) + // cond: t.IsSigned() && t.Size() == 1 + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBloadidx { + if x.Op != OpArg { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + t := x.Type + if !(t.IsSigned() && t.Size() == 1) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVBreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64( int8(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(int8(c)) return true } // match: (MOVBreg (ANDWconst [m] x)) @@ -14677,7 +14826,7 @@ func rewriteValueS390X_OpS390XMOVDaddridx_0(v *Value) bool { func rewriteValueS390X_OpS390XMOVDload_0(v *Value) bool { // match: (MOVDload [off] {sym} ptr1 (MOVDstore [off] {sym} ptr2 x _)) // cond: isSamePtr(ptr1, ptr2) - // result: (MOVDreg x) + // result: x for { off := v.AuxInt sym := v.Aux @@ -14699,7 +14848,8 @@ func rewriteValueS390X_OpS390XMOVDload_0(v *Value) bool { if !(isSamePtr(ptr1, ptr2)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } @@ -14934,844 +15084,6 @@ func rewriteValueS390X_OpS390XMOVDloadidx_0(v *Value) bool { } return false } -func rewriteValueS390X_OpS390XMOVDnop_0(v *Value) bool { - b := v.Block - // match: (MOVDnop x) - // cond: t.Compare(x.Type) == types.CMPeq - // result: x - for { - t := v.Type - x := v.Args[0] - if !(t.Compare(x.Type) == types.CMPeq) { - break - } - v.reset(OpCopy) - v.Type = x.Type - v.AddArg(x) - return true - } - // match: (MOVDnop (MOVDconst [c])) - // cond: - // result: (MOVDconst [c]) - for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { - 
break - } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = c - return true - } - // match: (MOVDnop x:(MOVBZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVBload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVHZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVHload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && 
clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVWZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVWload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVDload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVDload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVDload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVDload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVBZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != 
OpS390XMOVBZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - return false -} -func rewriteValueS390X_OpS390XMOVDnop_10(v *Value) bool { - b := v.Block - // match: (MOVDnop x:(MOVBloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVHZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVHloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { 
- break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVWZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVWloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDnop x:(MOVDloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVDloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVDloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - return false -} -func 
rewriteValueS390X_OpS390XMOVDreg_0(v *Value) bool { - b := v.Block - // match: (MOVDreg x) - // cond: t.Compare(x.Type) == types.CMPeq - // result: x - for { - t := v.Type - x := v.Args[0] - if !(t.Compare(x.Type) == types.CMPeq) { - break - } - v.reset(OpCopy) - v.Type = x.Type - v.AddArg(x) - return true - } - // match: (MOVDreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [c]) - for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { - break - } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = c - return true - } - // match: (MOVDreg x) - // cond: x.Uses == 1 - // result: (MOVDnop x) - for { - x := v.Args[0] - if !(x.Uses == 1) { - break - } - v.reset(OpS390XMOVDnop) - v.AddArg(x) - return true - } - // match: (MOVDreg x:(MOVBZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVBload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVBload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVHZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != 
OpS390XMOVHZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVHload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVWZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWZload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVWload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVDload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && 
clobber(x) - // result: @x.Block (MOVDload [off] {sym} ptr mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVDload { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVDload, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) - return true - } - return false -} -func rewriteValueS390X_OpS390XMOVDreg_10(v *Value) bool { - b := v.Block - // match: (MOVDreg x:(MOVBZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVBloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVBloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVBloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVBloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := 
x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVHloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVHloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWZloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVWloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVWloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - 
v0.AddArg(idx) - v0.AddArg(mem) - return true - } - // match: (MOVDreg x:(MOVDloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVDloadidx [off] {sym} ptr idx mem) - for { - t := v.Type - x := v.Args[0] - if x.Op != OpS390XMOVDloadidx { - break - } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { - break - } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, t) - v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) - return true - } - return false -} func rewriteValueS390X_OpS390XMOVDstore_0(v *Value) bool { // match: (MOVDstore [off1] {sym} (ADDconst [off2] ptr) val mem) // cond: is20Bit(off1+off2) @@ -17431,206 +16743,275 @@ func rewriteValueS390X_OpS390XMOVHZloadidx_0(v *Value) bool { return false } func rewriteValueS390X_OpS390XMOVHZreg_0(v *Value) bool { - b := v.Block - // match: (MOVHZreg x:(MOVBZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHZreg e:(MOVBZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBZload { + e := v.Args[0] + if e.Op != OpS390XMOVBZreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBZreg) v.AddArg(x) return true } - // match: (MOVHZreg x:(MOVHZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHZreg e:(MOVHreg x)) + // cond: clobberIfDead(e) + // result: (MOVHZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVHZload { + e := v.Args[0] + if e.Op != OpS390XMOVHreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHZreg) v.AddArg(x) return true } - // match: (MOVHZreg x:(Arg )) - // cond: (is8BitInt(t) || is16BitInt(t)) && !isSigned(t) - // result: (MOVDreg x) + // match: (MOVHZreg e:(MOVWreg x)) + // 
cond: clobberIfDead(e) + // result: (MOVHZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHZreg) + v.AddArg(x) + return true + } + // match: (MOVHZreg e:(MOVHZreg x)) + // cond: clobberIfDead(e) + // result: (MOVHZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVHZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHZreg) + v.AddArg(x) + return true + } + // match: (MOVHZreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVHZreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHZreg) + v.AddArg(x) + return true + } + // match: (MOVHZreg x:(MOVBZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpArg { + if x.Op != OpS390XMOVBZload { break } - t := x.Type - if !((is8BitInt(t) || is16BitInt(t)) && !isSigned(t)) { + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHZreg x:(MOVBZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZreg { + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHZreg x:(MOVHZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZreg { + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || 
x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHZreg (MOVHreg x)) - // cond: - // result: (MOVHZreg x) + // match: (MOVHZreg x:(MOVHZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVHreg { + x := v.Args[0] + if x.Op != OpS390XMOVHZload { break } - x := v_0.Args[0] - v.reset(OpS390XMOVHZreg) + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHZreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(uint16(c))]) + // match: (MOVHZreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + x := v.Args[0] + if x.Op != OpS390XMOVHZloadidx { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(uint16(c)) + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVHZreg x:(MOVHZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZload [off] {sym} ptr mem) + return false +} +func rewriteValueS390X_OpS390XMOVHZreg_10(v *Value) bool { + b := v.Block + typ := &b.Func.Config.Types + // match: (MOVHZreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZload { + if x.Op != OpS390XMOVHZloadidx { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + 
v.AddArg(x) return true } - // match: (MOVHZreg x:(MOVHload [off] {sym} ptr mem)) + // match: (MOVHZreg x:(MOVHload [o] {s} p mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZload [off] {sym} ptr mem) + // result: @x.Block (MOVHZload [o] {s} p mem) for { + t := v.Type x := v.Args[0] if x.Op != OpS390XMOVHload { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[1] - ptr := x.Args[0] + p := x.Args[0] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, v.Type) + v0 := b.NewValue0(x.Pos, OpS390XMOVHZload, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) v0.AddArg(mem) return true } - // match: (MOVHZreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) + // match: (MOVHZreg x:(MOVHloadidx [o] {s} p i mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) + // result: @x.Block (MOVHZloadidx [o] {s} p i mem) for { + t := v.Type x := v.Args[0] - if x.Op != OpS390XMOVHZloadidx { + if x.Op != OpS390XMOVHloadidx { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] + p := x.Args[0] + i := x.Args[1] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, v.Type) + v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) v0.AddArg(mem) return true } - return false -} -func rewriteValueS390X_OpS390XMOVHZreg_10(v *Value) bool { - b := v.Block - typ := &b.Func.Config.Types - // match: (MOVHZreg x:(MOVHloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHZloadidx [off] {sym} ptr idx mem) + // match: (MOVHZreg x:(Arg )) + // cond: !t.IsSigned() && t.Size() <= 2 + 
// result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHloadidx { + if x.Op != OpArg { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + t := x.Type + if !(!t.IsSigned() && t.Size() <= 2) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHZreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64(uint16(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(uint16(c)) return true } // match: (MOVHZreg (ANDWconst [m] x)) @@ -17885,147 +17266,169 @@ func rewriteValueS390X_OpS390XMOVHloadidx_0(v *Value) bool { return false } func rewriteValueS390X_OpS390XMOVHreg_0(v *Value) bool { - b := v.Block - // match: (MOVHreg x:(MOVBload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg e:(MOVBreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBload { + e := v.Args[0] + if e.Op != OpS390XMOVBreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBreg) v.AddArg(x) return true } - // match: (MOVHreg x:(MOVBZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg e:(MOVHreg x)) + // cond: clobberIfDead(e) + // result: (MOVHreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBZload { + e := v.Args[0] + if e.Op != OpS390XMOVHreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHreg) v.AddArg(x) return true } - // match: (MOVHreg x:(MOVHload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg e:(MOVWreg x)) + // cond: clobberIfDead(e) + // result: 
(MOVHreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVHload { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHreg) v.AddArg(x) return true } - // match: (MOVHreg x:(Arg )) - // cond: (is8BitInt(t) || is16BitInt(t)) && isSigned(t) - // result: (MOVDreg x) + // match: (MOVHreg e:(MOVHZreg x)) + // cond: clobberIfDead(e) + // result: (MOVHreg x) for { - x := v.Args[0] - if x.Op != OpArg { + e := v.Args[0] + if e.Op != OpS390XMOVHZreg { break } - t := x.Type - if !((is8BitInt(t) || is16BitInt(t)) && isSigned(t)) { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVHreg) v.AddArg(x) return true } - // match: (MOVHreg x:(MOVBreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVHreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBreg { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVHreg) v.AddArg(x) return true } - // match: (MOVHreg x:(MOVBZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg x:(MOVBload _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZreg { + if x.Op != OpS390XMOVBload { break } - v.reset(OpS390XMOVDreg) + _ = x.Args[1] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHreg x:(MOVHreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVHreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHreg { + if x.Op != OpS390XMOVBloadidx { break } - v.reset(OpS390XMOVDreg) + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } 
+ v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHreg (MOVHZreg x)) - // cond: - // result: (MOVHreg x) + // match: (MOVHreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVHZreg { + x := v.Args[0] + if x.Op != OpS390XMOVBloadidx { break } - x := v_0.Args[0] - v.reset(OpS390XMOVHreg) + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVHreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(int16(c))]) + // match: (MOVHreg x:(MOVHload _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + x := v.Args[0] + if x.Op != OpS390XMOVHload { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(int16(c)) + _ = x.Args[1] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVHreg x:(MOVHZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHload [off] {sym} ptr mem) + // match: (MOVHreg x:(MOVHloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZload { + if x.Op != OpS390XMOVHloadidx { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } return false @@ -18033,83 +17436,156 @@ func rewriteValueS390X_OpS390XMOVHreg_0(v *Value) bool { func rewriteValueS390X_OpS390XMOVHreg_10(v *Value) bool { b := v.Block typ := 
&b.Func.Config.Types - // match: (MOVHreg x:(MOVHload [off] {sym} ptr mem)) + // match: (MOVHreg x:(MOVHloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVHloadidx { + break + } + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHreg x:(MOVBZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBZload { + break + } + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHreg x:(MOVHZload [o] {s} p mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHload [off] {sym} ptr mem) + // result: @x.Block (MOVHload [o] {s} p mem) for { + t := v.Type x := v.Args[0] - if x.Op != OpS390XMOVHload { + if x.Op != OpS390XMOVHZload { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[1] - ptr := x.Args[0] + p := x.Args[0] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVHload, v.Type) + v0 := b.NewValue0(x.Pos, OpS390XMOVHload, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - 
v0.Aux = sym - v0.AddArg(ptr) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) v0.AddArg(mem) return true } - // match: (MOVHreg x:(MOVHZloadidx [off] {sym} ptr idx mem)) + // match: (MOVHreg x:(MOVHZloadidx [o] {s} p i mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem) + // result: @x.Block (MOVHloadidx [o] {s} p i mem) for { + t := v.Type x := v.Args[0] if x.Op != OpS390XMOVHZloadidx { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] + p := x.Args[0] + i := x.Args[1] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, v.Type) + v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) v0.AddArg(mem) return true } - // match: (MOVHreg x:(MOVHloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVHloadidx [off] {sym} ptr idx mem) + // match: (MOVHreg x:(Arg )) + // cond: t.IsSigned() && t.Size() <= 2 + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHloadidx { + if x.Op != OpArg { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + t := x.Type + if !(t.IsSigned() && t.Size() <= 2) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVHloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVHreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64(int16(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(int16(c)) return true } // match: (MOVHreg (ANDWconst [m] x)) @@ -20264,230 
+19740,309 @@ func rewriteValueS390X_OpS390XMOVWZloadidx_0(v *Value) bool { return false } func rewriteValueS390X_OpS390XMOVWZreg_0(v *Value) bool { - b := v.Block - // match: (MOVWZreg x:(MOVBZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg e:(MOVBZreg x)) + // cond: clobberIfDead(e) + // result: (MOVBZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVBZload { + e := v.Args[0] + if e.Op != OpS390XMOVBZreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBZreg) v.AddArg(x) return true } - // match: (MOVWZreg x:(MOVHZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg e:(MOVHZreg x)) + // cond: clobberIfDead(e) + // result: (MOVHZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVHZload { + e := v.Args[0] + if e.Op != OpS390XMOVHZreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHZreg) v.AddArg(x) return true } - // match: (MOVWZreg x:(MOVWZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg e:(MOVWreg x)) + // cond: clobberIfDead(e) + // result: (MOVWZreg x) for { - x := v.Args[0] - if x.Op != OpS390XMOVWZload { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVWZreg) v.AddArg(x) return true } - // match: (MOVWZreg x:(Arg )) - // cond: (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t) - // result: (MOVDreg x) + // match: (MOVWZreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVWZreg x) for { - x := v.Args[0] - if x.Op != OpArg { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { break } - t := x.Type - if !((is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && !isSigned(t)) { + x := e.Args[0] + if !(clobberIfDead(e)) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpS390XMOVWZreg) v.AddArg(x) return true } 
- // match: (MOVWZreg x:(MOVBZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg x:(MOVBZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZreg { + if x.Op != OpS390XMOVBZload { break } - v.reset(OpS390XMOVDreg) + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWZreg x:(MOVHZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZreg { + if x.Op != OpS390XMOVBZloadidx { break } - v.reset(OpS390XMOVDreg) + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWZreg x:(MOVWZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWZreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWZreg { + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWZreg (MOVWreg x)) - // cond: - // result: (MOVWZreg x) + // match: (MOVWZreg x:(MOVHZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVWreg { + x := v.Args[0] + if x.Op != OpS390XMOVHZload { break } - x := v_0.Args[0] - v.reset(OpS390XMOVWZreg) + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWZreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(uint32(c))]) + // match: (MOVWZreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || 
x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + x := v.Args[0] + if x.Op != OpS390XMOVHZloadidx { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(uint32(c)) + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVWZreg x:(MOVWZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZload [off] {sym} ptr mem) + // match: (MOVWZreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWZload { + if x.Op != OpS390XMOVHZloadidx { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } return false } func rewriteValueS390X_OpS390XMOVWZreg_10(v *Value) bool { b := v.Block - // match: (MOVWZreg x:(MOVWload [off] {sym} ptr mem)) + // match: (MOVWZreg x:(MOVWZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 4) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVWZload { + break + } + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 4) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWZreg x:(MOVWZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 4) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVWZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 4) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWZreg x:(MOVWZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || 
x.Type.Size() > 4) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVWZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 4) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWZreg x:(MOVWload [o] {s} p mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZload [off] {sym} ptr mem) + // result: @x.Block (MOVWZload [o] {s} p mem) for { + t := v.Type x := v.Args[0] if x.Op != OpS390XMOVWload { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[1] - ptr := x.Args[0] + p := x.Args[0] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, v.Type) + v0 := b.NewValue0(x.Pos, OpS390XMOVWZload, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) v0.AddArg(mem) return true } - // match: (MOVWZreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) + // match: (MOVWZreg x:(MOVWloadidx [o] {s} p i mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) + // result: @x.Block (MOVWZloadidx [o] {s} p i mem) for { + t := v.Type x := v.Args[0] - if x.Op != OpS390XMOVWZloadidx { + if x.Op != OpS390XMOVWloadidx { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] + p := x.Args[0] + i := x.Args[1] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, v.Type) + v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) v0.AddArg(mem) return true } - // match: (MOVWZreg x:(MOVWloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWZloadidx [off] {sym} ptr idx mem) + // match: (MOVWZreg x:(Arg 
)) + // cond: !t.IsSigned() && t.Size() <= 4 + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWloadidx { + if x.Op != OpArg { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + t := x.Type + if !(!t.IsSigned() && t.Size() <= 4) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWZreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64(uint32(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(uint32(c)) return true } return false @@ -20725,279 +20280,415 @@ func rewriteValueS390X_OpS390XMOVWloadidx_0(v *Value) bool { return false } func rewriteValueS390X_OpS390XMOVWreg_0(v *Value) bool { + // match: (MOVWreg e:(MOVBreg x)) + // cond: clobberIfDead(e) + // result: (MOVBreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVBreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVBreg) + v.AddArg(x) + return true + } + // match: (MOVWreg e:(MOVHreg x)) + // cond: clobberIfDead(e) + // result: (MOVHreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVHreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVHreg) + v.AddArg(x) + return true + } + // match: (MOVWreg e:(MOVWreg x)) + // cond: clobberIfDead(e) + // result: (MOVWreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWreg { + break + } + x := e.Args[0] + if !(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVWreg) + v.AddArg(x) + return true + } + // match: (MOVWreg e:(MOVWZreg x)) + // cond: clobberIfDead(e) + // result: (MOVWreg x) + for { + e := v.Args[0] + if e.Op != OpS390XMOVWZreg { + break + } + x := e.Args[0] + if 
!(clobberIfDead(e)) { + break + } + v.reset(OpS390XMOVWreg) + v.AddArg(x) + return true + } // match: (MOVWreg x:(MOVBload _ _)) - // cond: - // result: (MOVDreg x) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] if x.Op != OpS390XMOVBload { break } _ = x.Args[1] - v.reset(OpS390XMOVDreg) + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVBZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZload { + if x.Op != OpS390XMOVBloadidx { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWreg x:(MOVBloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x + for { + x := v.Args[0] + if x.Op != OpS390XMOVBloadidx { + break + } + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } // match: (MOVWreg x:(MOVHload _ _)) - // cond: - // result: (MOVDreg x) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] if x.Op != OpS390XMOVHload { break } _ = x.Args[1] - v.reset(OpS390XMOVDreg) + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVHZload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVHloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZload { + if x.Op != OpS390XMOVHloadidx { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 
8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVWload _ _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVHloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWload { + if x.Op != OpS390XMOVHloadidx { break } - _ = x.Args[1] - v.reset(OpS390XMOVDreg) + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(Arg )) - // cond: (is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t) - // result: (MOVDreg x) + return false +} +func rewriteValueS390X_OpS390XMOVWreg_10(v *Value) bool { + b := v.Block + // match: (MOVWreg x:(MOVWload _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpArg { + if x.Op != OpS390XMOVWload { break } - t := x.Type - if !((is8BitInt(t) || is16BitInt(t) || is32BitInt(t)) && isSigned(t)) { + _ = x.Args[1] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVBreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVWloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBreg { + if x.Op != OpS390XMOVWloadidx { + break + } + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVBZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVWloadidx _ _ _)) + // cond: (x.Type.IsSigned() || x.Type.Size() == 8) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVBZreg { + if x.Op != OpS390XMOVWloadidx { + break + } + _ = x.Args[2] + if !(x.Type.IsSigned() || x.Type.Size() == 8) { 
break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVHreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVBZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHreg { + if x.Op != OpS390XMOVBZload { break } - v.reset(OpS390XMOVDreg) + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg x:(MOVHZreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVHZreg { + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - return false -} -func rewriteValueS390X_OpS390XMOVWreg_10(v *Value) bool { - b := v.Block - // match: (MOVWreg x:(MOVWreg _)) - // cond: - // result: (MOVDreg x) + // match: (MOVWreg x:(MOVBZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 1) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWreg { + if x.Op != OpS390XMOVBZloadidx { + break + } + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 1) { break } - v.reset(OpS390XMOVDreg) + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - // match: (MOVWreg (MOVWZreg x)) - // cond: - // result: (MOVWreg x) + // match: (MOVWreg x:(MOVHZload _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVWZreg { + x := v.Args[0] + if x.Op != OpS390XMOVHZload { break } - x := v_0.Args[0] - v.reset(OpS390XMOVWreg) + _ = x.Args[1] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type v.AddArg(x) return true } - 
// match: (MOVWreg (MOVDconst [c])) - // cond: - // result: (MOVDconst [int64(int32(c))]) + // match: (MOVWreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { - v_0 := v.Args[0] - if v_0.Op != OpS390XMOVDconst { + x := v.Args[0] + if x.Op != OpS390XMOVHZloadidx { break } - c := v_0.AuxInt - v.reset(OpS390XMOVDconst) - v.AuxInt = int64(int32(c)) + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { + break + } + v.reset(OpCopy) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVWreg x:(MOVWZload [off] {sym} ptr mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWload [off] {sym} ptr mem) + // match: (MOVWreg x:(MOVHZloadidx _ _ _)) + // cond: (!x.Type.IsSigned() || x.Type.Size() > 2) + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWZload { + if x.Op != OpS390XMOVHZloadidx { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[1] - ptr := x.Args[0] - if !(x.Uses == 1 && clobber(x)) { + _ = x.Args[2] + if !(!x.Type.IsSigned() || x.Type.Size() > 2) { break } - b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWload, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) return true } - // match: (MOVWreg x:(MOVWload [off] {sym} ptr mem)) + // match: (MOVWreg x:(MOVWZload [o] {s} p mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWload [off] {sym} ptr mem) + // result: @x.Block (MOVWload [o] {s} p mem) for { + t := v.Type x := v.Args[0] - if x.Op != OpS390XMOVWload { + if x.Op != OpS390XMOVWZload { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[1] - ptr := x.Args[0] + p := x.Args[0] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(x.Pos, OpS390XMOVWload, v.Type) + v0 := b.NewValue0(x.Pos, OpS390XMOVWload, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) + v0.AuxInt = o + 
v0.Aux = s + v0.AddArg(p) v0.AddArg(mem) return true } - // match: (MOVWreg x:(MOVWZloadidx [off] {sym} ptr idx mem)) + return false +} +func rewriteValueS390X_OpS390XMOVWreg_20(v *Value) bool { + b := v.Block + // match: (MOVWreg x:(MOVWZloadidx [o] {s} p i mem)) // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem) + // result: @x.Block (MOVWloadidx [o] {s} p i mem) for { + t := v.Type x := v.Args[0] if x.Op != OpS390XMOVWZloadidx { break } - off := x.AuxInt - sym := x.Aux + o := x.AuxInt + s := x.Aux mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] + p := x.Args[0] + i := x.Args[1] if !(x.Uses == 1 && clobber(x)) { break } b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, v.Type) + v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, t) v.reset(OpCopy) v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) + v0.AuxInt = o + v0.Aux = s + v0.AddArg(p) + v0.AddArg(i) v0.AddArg(mem) return true } - // match: (MOVWreg x:(MOVWloadidx [off] {sym} ptr idx mem)) - // cond: x.Uses == 1 && clobber(x) - // result: @x.Block (MOVWloadidx [off] {sym} ptr idx mem) + // match: (MOVWreg x:(Arg )) + // cond: t.IsSigned() && t.Size() <= 4 + // result: x for { x := v.Args[0] - if x.Op != OpS390XMOVWloadidx { + if x.Op != OpArg { break } - off := x.AuxInt - sym := x.Aux - mem := x.Args[2] - ptr := x.Args[0] - idx := x.Args[1] - if !(x.Uses == 1 && clobber(x)) { + t := x.Type + if !(t.IsSigned() && t.Size() <= 4) { break } - b = x.Block - v0 := b.NewValue0(v.Pos, OpS390XMOVWloadidx, v.Type) v.reset(OpCopy) - v.AddArg(v0) - v0.AuxInt = off - v0.Aux = sym - v0.AddArg(ptr) - v0.AddArg(idx) - v0.AddArg(mem) + v.Type = x.Type + v.AddArg(x) + return true + } + // match: (MOVWreg (MOVDconst [c])) + // cond: + // result: (MOVDconst [int64(int32(c))]) + for { + v_0 := v.Args[0] + if v_0.Op != OpS390XMOVDconst { + break + } + c := v_0.AuxInt + v.reset(OpS390XMOVDconst) + v.AuxInt = int64(int32(c)) return true } return 
false @@ -37836,22 +37527,6 @@ func rewriteValueS390X_OpS390XSLD_0(v *Value) bool { v.AddArg(y) return true } - // match: (SLD x (MOVDreg y)) - // cond: - // result: (SLD x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSLD) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SLD x (MOVWreg y)) // cond: // result: (SLD x y) @@ -37932,9 +37607,6 @@ func rewriteValueS390X_OpS390XSLD_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSLD_10(v *Value) bool { // match: (SLD x (MOVBZreg y)) // cond: // result: (SLD x y) @@ -38041,22 +37713,6 @@ func rewriteValueS390X_OpS390XSLW_0(v *Value) bool { v.AddArg(y) return true } - // match: (SLW x (MOVDreg y)) - // cond: - // result: (SLW x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSLW) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SLW x (MOVWreg y)) // cond: // result: (SLW x y) @@ -38137,9 +37793,6 @@ func rewriteValueS390X_OpS390XSLW_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSLW_10(v *Value) bool { // match: (SLW x (MOVBZreg y)) // cond: // result: (SLW x y) @@ -38246,22 +37899,6 @@ func rewriteValueS390X_OpS390XSRAD_0(v *Value) bool { v.AddArg(y) return true } - // match: (SRAD x (MOVDreg y)) - // cond: - // result: (SRAD x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSRAD) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SRAD x (MOVWreg y)) // cond: // result: (SRAD x y) @@ -38342,9 +37979,6 @@ func rewriteValueS390X_OpS390XSRAD_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSRAD_10(v *Value) bool { // match: (SRAD x (MOVBZreg y)) // cond: // result: (SRAD x y) @@ 
-38468,22 +38102,6 @@ func rewriteValueS390X_OpS390XSRAW_0(v *Value) bool { v.AddArg(y) return true } - // match: (SRAW x (MOVDreg y)) - // cond: - // result: (SRAW x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSRAW) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SRAW x (MOVWreg y)) // cond: // result: (SRAW x y) @@ -38564,9 +38182,6 @@ func rewriteValueS390X_OpS390XSRAW_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSRAW_10(v *Value) bool { // match: (SRAW x (MOVBZreg y)) // cond: // result: (SRAW x y) @@ -38690,22 +38305,6 @@ func rewriteValueS390X_OpS390XSRD_0(v *Value) bool { v.AddArg(y) return true } - // match: (SRD x (MOVDreg y)) - // cond: - // result: (SRD x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSRD) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SRD x (MOVWreg y)) // cond: // result: (SRD x y) @@ -38786,9 +38385,6 @@ func rewriteValueS390X_OpS390XSRD_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSRD_10(v *Value) bool { // match: (SRD x (MOVBZreg y)) // cond: // result: (SRD x y) @@ -38926,22 +38522,6 @@ func rewriteValueS390X_OpS390XSRW_0(v *Value) bool { v.AddArg(y) return true } - // match: (SRW x (MOVDreg y)) - // cond: - // result: (SRW x y) - for { - _ = v.Args[1] - x := v.Args[0] - v_1 := v.Args[1] - if v_1.Op != OpS390XMOVDreg { - break - } - y := v_1.Args[0] - v.reset(OpS390XSRW) - v.AddArg(x) - v.AddArg(y) - return true - } // match: (SRW x (MOVWreg y)) // cond: // result: (SRW x y) @@ -39022,9 +38602,6 @@ func rewriteValueS390X_OpS390XSRW_0(v *Value) bool { v.AddArg(y) return true } - return false -} -func rewriteValueS390X_OpS390XSRW_10(v *Value) bool { // match: (SRW x (MOVBZreg y)) // cond: // result: (SRW x y) diff --git 
a/src/cmd/compile/internal/ssa/rewriteWasm.go b/src/cmd/compile/internal/ssa/rewriteWasm.go index 45b855027d..c9384af16a 100644 --- a/src/cmd/compile/internal/ssa/rewriteWasm.go +++ b/src/cmd/compile/internal/ssa/rewriteWasm.go @@ -2742,6 +2742,44 @@ func rewriteValueWasm_OpLsh64x64_0(v *Value) bool { v.AddArg(y) return true } + // match: (Lsh64x64 x (I64Const [c])) + // cond: uint64(c) < 64 + // result: (I64Shl x (I64Const [c])) + for { + _ = v.Args[1] + x := v.Args[0] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) < 64) { + break + } + v.reset(OpWasmI64Shl) + v.AddArg(x) + v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64) + v0.AuxInt = c + v.AddArg(v0) + return true + } + // match: (Lsh64x64 x (I64Const [c])) + // cond: uint64(c) >= 64 + // result: (I64Const [0]) + for { + _ = v.Args[1] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) >= 64) { + break + } + v.reset(OpWasmI64Const) + v.AuxInt = 0 + return true + } // match: (Lsh64x64 x y) // cond: // result: (Select (I64Shl x y) (I64Const [0]) (I64LtU y (I64Const [64]))) @@ -4264,6 +4302,44 @@ func rewriteValueWasm_OpRsh64Ux64_0(v *Value) bool { v.AddArg(y) return true } + // match: (Rsh64Ux64 x (I64Const [c])) + // cond: uint64(c) < 64 + // result: (I64ShrU x (I64Const [c])) + for { + _ = v.Args[1] + x := v.Args[0] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) < 64) { + break + } + v.reset(OpWasmI64ShrU) + v.AddArg(x) + v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64) + v0.AuxInt = c + v.AddArg(v0) + return true + } + // match: (Rsh64Ux64 x (I64Const [c])) + // cond: uint64(c) >= 64 + // result: (I64Const [0]) + for { + _ = v.Args[1] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) >= 64) { + break + } + v.reset(OpWasmI64Const) + v.AuxInt = 0 + return true + } // match: (Rsh64Ux64 x y) // cond: // result: 
(Select (I64ShrU x y) (I64Const [0]) (I64LtU y (I64Const [64]))) @@ -4355,6 +4431,48 @@ func rewriteValueWasm_OpRsh64x64_0(v *Value) bool { v.AddArg(y) return true } + // match: (Rsh64x64 x (I64Const [c])) + // cond: uint64(c) < 64 + // result: (I64ShrS x (I64Const [c])) + for { + _ = v.Args[1] + x := v.Args[0] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) < 64) { + break + } + v.reset(OpWasmI64ShrS) + v.AddArg(x) + v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64) + v0.AuxInt = c + v.AddArg(v0) + return true + } + // match: (Rsh64x64 x (I64Const [c])) + // cond: uint64(c) >= 64 + // result: (I64ShrS x (I64Const [63])) + for { + _ = v.Args[1] + x := v.Args[0] + v_1 := v.Args[1] + if v_1.Op != OpWasmI64Const { + break + } + c := v_1.AuxInt + if !(uint64(c) >= 64) { + break + } + v.reset(OpWasmI64ShrS) + v.AddArg(x) + v0 := b.NewValue0(v.Pos, OpWasmI64Const, typ.Int64) + v0.AuxInt = 63 + v.AddArg(v0) + return true + } // match: (Rsh64x64 x y) // cond: // result: (I64ShrS x (Select y (I64Const [63]) (I64LtU y (I64Const [64])))) diff --git a/src/cmd/compile/internal/ssa/sparsetree.go b/src/cmd/compile/internal/ssa/sparsetree.go index 546da8348d..fe96912c00 100644 --- a/src/cmd/compile/internal/ssa/sparsetree.go +++ b/src/cmd/compile/internal/ssa/sparsetree.go @@ -223,7 +223,7 @@ func (t SparseTree) domorder(x *Block) int32 { // entry(x) < entry(y) allows cases x-dom-y and x-then-y. // But by supposition, x does not dominate y. So we have x-then-y. // - // For contractidion, assume x dominates z. + // For contradiction, assume x dominates z. // Then entry(x) < entry(z) < exit(z) < exit(x). // But we know x-then-y, so entry(x) < exit(x) < entry(y) < exit(y). // Combining those, entry(x) < entry(z) < exit(z) < exit(x) < entry(y) < exit(y). 
diff --git a/src/cmd/compile/internal/syntax/scanner.go b/src/cmd/compile/internal/syntax/scanner.go index 30ee6c0e5f..fef87171bc 100644 --- a/src/cmd/compile/internal/syntax/scanner.go +++ b/src/cmd/compile/internal/syntax/scanner.go @@ -35,8 +35,8 @@ type scanner struct { // current token, valid after calling next() line, col uint tok token - lit string // valid if tok is _Name, _Literal, or _Semi ("semicolon", "newline", or "EOF") - bad bool // valid if tok is _Literal, true if a syntax error occurred, lit may be incorrect + lit string // valid if tok is _Name, _Literal, or _Semi ("semicolon", "newline", or "EOF"); may be malformed if bad is true + bad bool // valid if tok is _Literal, true if a syntax error occurred, lit may be malformed kind LitKind // valid if tok is _Literal op Operator // valid if tok is _Operator, _AssignOp, or _IncOp prec int // valid if tok is _Operator, _AssignOp, or _IncOp @@ -50,8 +50,6 @@ func (s *scanner) init(src io.Reader, errh func(line, col uint, msg string), mod // errorf reports an error at the most recently read character position. func (s *scanner) errorf(format string, args ...interface{}) { - // TODO(gri) Consider using s.bad to consistently suppress multiple errors - // per token, here and below. 
s.bad = true s.error(fmt.Sprintf(format, args...)) } @@ -495,17 +493,19 @@ func (s *scanner) number(c rune) { digsep |= ds } - if digsep&1 == 0 { + if digsep&1 == 0 && !s.bad { s.errorf("%s has no digits", litname(prefix)) } // exponent if e := lower(c); e == 'e' || e == 'p' { - switch { - case e == 'e' && prefix != 0 && prefix != '0': - s.errorf("%q exponent requires decimal mantissa", c) - case e == 'p' && prefix != 'x': - s.errorf("%q exponent requires hexadecimal mantissa", c) + if !s.bad { + switch { + case e == 'e' && prefix != 0 && prefix != '0': + s.errorf("%q exponent requires decimal mantissa", c) + case e == 'p' && prefix != 'x': + s.errorf("%q exponent requires hexadecimal mantissa", c) + } } c = s.getr() s.kind = FloatLit @@ -514,10 +514,10 @@ func (s *scanner) number(c rune) { } c, ds = s.digits(c, 10, nil) digsep |= ds - if ds&1 == 0 { + if ds&1 == 0 && !s.bad { s.errorf("exponent has no digits") } - } else if prefix == 'x' && s.kind == FloatLit { + } else if prefix == 'x' && s.kind == FloatLit && !s.bad { s.errorf("hexadecimal mantissa requires a 'p' exponent") } @@ -532,11 +532,11 @@ func (s *scanner) number(c rune) { s.lit = string(s.stopLit()) s.tok = _Literal - if s.kind == IntLit && invalid >= 0 { + if s.kind == IntLit && invalid >= 0 && !s.bad { s.errorAtf(invalid, "invalid digit %q in %s", s.lit[invalid], litname(prefix)) } - if digsep&2 != 0 { + if digsep&2 != 0 && !s.bad { if i := invalidSep(s.lit); i >= 0 { s.errorAtf(i, "'_' must separate successive digits") } diff --git a/src/cmd/compile/internal/syntax/scanner_test.go b/src/cmd/compile/internal/syntax/scanner_test.go index 3030bfd4c0..717deb9073 100644 --- a/src/cmd/compile/internal/syntax/scanner_test.go +++ b/src/cmd/compile/internal/syntax/scanner_test.go @@ -652,3 +652,25 @@ func TestIssue21938(t *testing.T) { t.Errorf("got %s %q; want %s %q", got.tok, got.lit, _Literal, ".5") } } + +func TestIssue33961(t *testing.T) { + literals := `08__ 0b.p 0b_._p 0x.e 0x.p` + for _, lit := range 
strings.Split(literals, " ") { + n := 0 + var got scanner + got.init(strings.NewReader(lit), func(_, _ uint, msg string) { + // fmt.Printf("%s: %s\n", lit, msg) // uncomment for debugging + n++ + }, 0) + got.next() + + if n != 1 { + t.Errorf("%q: got %d errors; want 1", lit, n) + continue + } + + if !got.bad { + t.Errorf("%q: got error but bad not set", lit) + } + } +} diff --git a/src/cmd/compile/internal/types/etype_string.go b/src/cmd/compile/internal/types/etype_string.go index f234a31fd0..0ff05a8c2a 100644 --- a/src/cmd/compile/internal/types/etype_string.go +++ b/src/cmd/compile/internal/types/etype_string.go @@ -4,9 +4,52 @@ package types import "strconv" -const _EType_name = "xxxINT8UINT8INT16UINT16INT32UINT32INT64UINT64INTUINTUINTPTRCOMPLEX64COMPLEX128FLOAT32FLOAT64BOOLPTRFUNCSLICEARRAYSTRUCTCHANMAPINTERFORWANYSTRINGUNSAFEPTRIDEALNILBLANKFUNCARGSCHANARGSDDDFIELDSSATUPLENTYPE" +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. 
+ var x [1]struct{} + _ = x[Txxx-0] + _ = x[TINT8-1] + _ = x[TUINT8-2] + _ = x[TINT16-3] + _ = x[TUINT16-4] + _ = x[TINT32-5] + _ = x[TUINT32-6] + _ = x[TINT64-7] + _ = x[TUINT64-8] + _ = x[TINT-9] + _ = x[TUINT-10] + _ = x[TUINTPTR-11] + _ = x[TCOMPLEX64-12] + _ = x[TCOMPLEX128-13] + _ = x[TFLOAT32-14] + _ = x[TFLOAT64-15] + _ = x[TBOOL-16] + _ = x[TPTR-17] + _ = x[TFUNC-18] + _ = x[TSLICE-19] + _ = x[TARRAY-20] + _ = x[TSTRUCT-21] + _ = x[TCHAN-22] + _ = x[TMAP-23] + _ = x[TINTER-24] + _ = x[TFORW-25] + _ = x[TANY-26] + _ = x[TSTRING-27] + _ = x[TUNSAFEPTR-28] + _ = x[TIDEAL-29] + _ = x[TNIL-30] + _ = x[TBLANK-31] + _ = x[TFUNCARGS-32] + _ = x[TCHANARGS-33] + _ = x[TSSA-34] + _ = x[TTUPLE-35] + _ = x[NTYPE-36] +} + +const _EType_name = "xxxINT8UINT8INT16UINT16INT32UINT32INT64UINT64INTUINTUINTPTRCOMPLEX64COMPLEX128FLOAT32FLOAT64BOOLPTRFUNCSLICEARRAYSTRUCTCHANMAPINTERFORWANYSTRINGUNSAFEPTRIDEALNILBLANKFUNCARGSCHANARGSSSATUPLENTYPE" -var _EType_index = [...]uint8{0, 3, 7, 12, 17, 23, 28, 34, 39, 45, 48, 52, 59, 68, 78, 85, 92, 96, 99, 103, 108, 113, 119, 123, 126, 131, 135, 138, 144, 153, 158, 161, 166, 174, 182, 190, 193, 198, 203} +var _EType_index = [...]uint8{0, 3, 7, 12, 17, 23, 28, 34, 39, 45, 48, 52, 59, 68, 78, 85, 92, 96, 99, 103, 108, 113, 119, 123, 126, 131, 135, 138, 144, 153, 158, 161, 166, 174, 182, 185, 190, 195} func (i EType) String() string { if i >= EType(len(_EType_index)-1) { diff --git a/src/cmd/compile/internal/types/identity.go b/src/cmd/compile/internal/types/identity.go index 7c14a03ba1..a77f514df9 100644 --- a/src/cmd/compile/internal/types/identity.go +++ b/src/cmd/compile/internal/types/identity.go @@ -53,6 +53,13 @@ func identical(t1, t2 *Type, cmpTags bool, assumedEqual map[typePair]struct{}) b assumedEqual[typePair{t1, t2}] = struct{}{} switch t1.Etype { + case TIDEAL: + // Historically, cmd/compile used a single "untyped + // number" type, so all untyped number types were + // identical. Match this behavior. 
+ // TODO(mdempsky): Revisit this. + return true + case TINTER: if t1.NumFields() != t2.NumFields() { return false diff --git a/src/cmd/compile/internal/types/sizeof_test.go b/src/cmd/compile/internal/types/sizeof_test.go index 2633ef2ddd..09b852f343 100644 --- a/src/cmd/compile/internal/types/sizeof_test.go +++ b/src/cmd/compile/internal/types/sizeof_test.go @@ -31,7 +31,6 @@ func TestSizeof(t *testing.T) { {Interface{}, 8, 16}, {Chan{}, 8, 16}, {Array{}, 12, 16}, - {DDDField{}, 4, 8}, {FuncArgs{}, 4, 8}, {ChanArgs{}, 4, 8}, {Ptr{}, 4, 8}, diff --git a/src/cmd/compile/internal/types/type.go b/src/cmd/compile/internal/types/type.go index 2fcd6057f3..e61a5573dd 100644 --- a/src/cmd/compile/internal/types/type.go +++ b/src/cmd/compile/internal/types/type.go @@ -57,7 +57,7 @@ const ( TUNSAFEPTR // pseudo-types for literals - TIDEAL + TIDEAL // untyped numeric constants TNIL TBLANK @@ -65,9 +65,6 @@ const ( TFUNCARGS TCHANARGS - // pseudo-types for import/export - TDDDFIELD // wrapper: contained type is a ... field - // SSA backend types TSSA // internal types used by SSA backend (flags, memory, etc.) TTUPLE // a pair of types, used by SSA backend @@ -94,7 +91,6 @@ const ( // It also stores pointers to several special types: // - Types[TANY] is the placeholder "any" type recognized by substArgTypes. // - Types[TBLANK] represents the blank variable's type. -// - Types[TIDEAL] represents untyped numeric constants. // - Types[TNIL] represents the predeclared "nil" value's type. // - Types[TUNSAFEPTR] is package unsafe's Pointer type. var Types [NTYPE]*Type @@ -112,8 +108,6 @@ var ( Idealbool *Type // Types to represent untyped numeric constants. - // Note: Currently these are only used within the binary export - // data format. The rest of the compiler only uses Types[TIDEAL]. 
Idealint = New(TIDEAL) Idealrune = New(TIDEAL) Idealfloat = New(TIDEAL) @@ -131,7 +125,6 @@ type Type struct { // TFUNC: *Func // TSTRUCT: *Struct // TINTER: *Interface - // TDDDFIELD: DDDField // TFUNCARGS: FuncArgs // TCHANARGS: ChanArgs // TCHAN: *Chan @@ -308,11 +301,6 @@ type Ptr struct { Elem *Type // element type } -// DDDField contains Type fields specific to TDDDFIELD types. -type DDDField struct { - T *Type // reference to a slice type for ... args -} - // ChanArgs contains Type fields specific to TCHANARGS types. type ChanArgs struct { T *Type // reference to a chan type whose elements need a width check @@ -473,8 +461,6 @@ func New(et EType) *Type { t.Extra = ChanArgs{} case TFUNCARGS: t.Extra = FuncArgs{} - case TDDDFIELD: - t.Extra = DDDField{} case TCHAN: t.Extra = new(Chan) case TTUPLE: @@ -576,13 +562,6 @@ func NewPtr(elem *Type) *Type { return t } -// NewDDDField returns a new TDDDFIELD type for slice type s. -func NewDDDField(s *Type) *Type { - t := New(TDDDFIELD) - t.Extra = DDDField{T: s} - return t -} - // NewChanArgs returns a new TCHANARGS type for channel type c. func NewChanArgs(c *Type) *Type { t := New(TCHANARGS) @@ -802,12 +781,6 @@ func (t *Type) Elem() *Type { return nil } -// DDDField returns the slice ... type for TDDDFIELD type t. -func (t *Type) DDDField() *Type { - t.wantEtype(TDDDFIELD) - return t.Extra.(DDDField).T -} - // ChanArgs returns the channel type for TCHANARGS type t. func (t *Type) ChanArgs() *Type { t.wantEtype(TCHANARGS) diff --git a/src/cmd/go/alldocs.go b/src/cmd/go/alldocs.go index ebbead5d31..36fa528a90 100644 --- a/src/cmd/go/alldocs.go +++ b/src/cmd/go/alldocs.go @@ -110,11 +110,13 @@ // The default is the number of CPUs available. // -race // enable data race detection. -// Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64. +// Supported only on linux/amd64, freebsd/amd64, darwin/amd64, windows/amd64, +// linux/ppc64le and linux/arm64 (only for 48-bit VMA). 
// -msan // enable interoperation with memory sanitizer. // Supported only on linux/amd64, linux/arm64 // and only with Clang/LLVM as the host C compiler. +// On linux/arm64, pie build mode will be used. // -v // print the names of packages as they are compiled. // -work @@ -2060,8 +2062,8 @@ // // The GET requests sent to a Go module proxy are: // -// GET $GOPROXY/<module>/@v/list returns a list of all known versions of the -// given module, one per line. +// GET $GOPROXY/<module>/@v/list returns a list of known versions of the given +// module, one per line. // // GET $GOPROXY/<module>/@v/<version>.info returns JSON-formatted metadata // about that version of the given module. @@ -2072,6 +2074,21 @@ // GET $GOPROXY/<module>/@v/<version>.zip returns the zip archive // for that version of the given module. // +// GET $GOPROXY/<module>/@latest returns JSON-formatted metadata about the +// latest known version of the given module in the same format as +// <module>/@v/<version>.info. The latest version should be the version of +// the module the go command may use if <module>/@v/list is empty or no +// listed version is suitable. <module>/@latest is optional and may not +// be implemented by a module proxy. +// +// When resolving the latest version of a module, the go command will request +// <module>/@v/list, then, if no suitable versions are found, <module>/@latest. +// The go command prefers, in order: the semantically highest release version, +// the semantically highest pre-release version, and the chronologically +// most recent pseudo-version. In Go 1.12 and earlier, the go command considered +// pseudo-versions in <module>/@v/list to be pre-release versions, but this is +// no longer true since Go 1.13.
+// To avoid problems when serving from case-sensitive file systems, // the <module> and <version> elements are case-encoded, replacing every // uppercase letter with an exclamation mark followed by the corresponding diff --git a/src/cmd/go/go_test.go b/src/cmd/go/go_test.go index f6caa01fd2..71a2e01fa3 100644 --- a/src/cmd/go/go_test.go +++ b/src/cmd/go/go_test.go @@ -206,7 +206,7 @@ func TestMain(m *testing.M) { // (installed in GOROOT/pkg/tool/GOOS_GOARCH). // If these are not the same toolchain, then the entire standard library // will look out of date (the compilers in those two different tool directories - // are built for different architectures and have different buid IDs), + // are built for different architectures and have different build IDs), // which will cause many tests to do unnecessary rebuilds and some // tests to attempt to overwrite the installed standard library. // Bail out entirely in this case. diff --git a/src/cmd/go/internal/cache/cache.go b/src/cmd/go/internal/cache/cache.go index a05a08f75f..8797398765 100644 --- a/src/cmd/go/internal/cache/cache.go +++ b/src/cmd/go/internal/cache/cache.go @@ -74,7 +74,22 @@ func (c *Cache) fileName(id [HashSize]byte, key string) string { return filepath.Join(c.dir, fmt.Sprintf("%02x", id[0]), fmt.Sprintf("%x", id)+"-"+key) } -var errMissing = errors.New("cache entry not found") +// An entryNotFoundError indicates that a cache entry was not found, with an +// optional underlying reason. +type entryNotFoundError struct { + Err error +} + +func (e *entryNotFoundError) Error() string { + if e.Err == nil { + return "cache entry not found" + } + return fmt.Sprintf("cache entry not found: %v", e.Err) +} + +func (e *entryNotFoundError) Unwrap() error { + return e.Err +} const ( // action entry file is "v1 <hex id> <hex out> <decimal size space-padded> <unix timestamp space-padded>\n" @@ -93,6 +108,8 @@ const ( // GODEBUG=gocacheverify=1. var verify = false +var errVerifyMode = errors.New("gocacheverify=1") + // DebugTest is set when GODEBUG=gocachetest=1 is in the environment.
var DebugTest = false @@ -121,7 +138,7 @@ func initEnv() { // saved file for that output ID is still available. func (c *Cache) Get(id ActionID) (Entry, error) { if verify { - return Entry{}, errMissing + return Entry{}, &entryNotFoundError{Err: errVerifyMode} } return c.get(id) } @@ -134,47 +151,60 @@ type Entry struct { // get is Get but does not respect verify mode, so that Put can use it. func (c *Cache) get(id ActionID) (Entry, error) { - missing := func() (Entry, error) { - return Entry{}, errMissing + missing := func(reason error) (Entry, error) { + return Entry{}, &entryNotFoundError{Err: reason} } f, err := os.Open(c.fileName(id, "a")) if err != nil { - return missing() + return missing(err) } defer f.Close() entry := make([]byte, entrySize+1) // +1 to detect whether f is too long - if n, err := io.ReadFull(f, entry); n != entrySize || err != io.ErrUnexpectedEOF { - return missing() + if n, err := io.ReadFull(f, entry); n > entrySize { + return missing(errors.New("too long")) + } else if err != io.ErrUnexpectedEOF { + if err == io.EOF { + return missing(errors.New("file is empty")) + } + return missing(err) + } else if n < entrySize { + return missing(errors.New("entry file incomplete")) } if entry[0] != 'v' || entry[1] != '1' || entry[2] != ' ' || entry[3+hexSize] != ' ' || entry[3+hexSize+1+hexSize] != ' ' || entry[3+hexSize+1+hexSize+1+20] != ' ' || entry[entrySize-1] != '\n' { - return missing() + return missing(errors.New("invalid header")) } eid, entry := entry[3:3+hexSize], entry[3+hexSize:] eout, entry := entry[1:1+hexSize], entry[1+hexSize:] esize, entry := entry[1:1+20], entry[1+20:] etime, entry := entry[1:1+20], entry[1+20:] var buf [HashSize]byte - if _, err := hex.Decode(buf[:], eid); err != nil || buf != id { - return missing() + if _, err := hex.Decode(buf[:], eid); err != nil { + return missing(fmt.Errorf("decoding ID: %v", err)) + } else if buf != id { + return missing(errors.New("mismatched ID")) } if _, err := hex.Decode(buf[:], eout); 
err != nil { - return missing() + return missing(fmt.Errorf("decoding output ID: %v", err)) } i := 0 for i < len(esize) && esize[i] == ' ' { i++ } size, err := strconv.ParseInt(string(esize[i:]), 10, 64) - if err != nil || size < 0 { - return missing() + if err != nil { + return missing(fmt.Errorf("parsing size: %v", err)) + } else if size < 0 { + return missing(errors.New("negative size")) } i = 0 for i < len(etime) && etime[i] == ' ' { i++ } tm, err := strconv.ParseInt(string(etime[i:]), 10, 64) - if err != nil || tm < 0 { - return missing() + if err != nil { + return missing(fmt.Errorf("parsing timestamp: %v", err)) + } else if tm < 0 { + return missing(errors.New("negative timestamp")) } c.used(c.fileName(id, "a")) @@ -191,8 +221,11 @@ func (c *Cache) GetFile(id ActionID) (file string, entry Entry, err error) { } file = c.OutputFile(entry.OutputID) info, err := os.Stat(file) - if err != nil || info.Size() != entry.Size { - return "", Entry{}, errMissing + if err != nil { + return "", Entry{}, &entryNotFoundError{Err: err} + } + if info.Size() != entry.Size { + return "", Entry{}, &entryNotFoundError{Err: errors.New("file incomplete")} } return file, entry, nil } @@ -207,7 +240,7 @@ func (c *Cache) GetBytes(id ActionID) ([]byte, Entry, error) { } data, _ := ioutil.ReadFile(c.OutputFile(entry.OutputID)) if sha256.Sum256(data) != entry.OutputID { - return nil, entry, errMissing + return nil, entry, &entryNotFoundError{Err: errors.New("bad checksum")} } return data, entry, nil } diff --git a/src/cmd/go/internal/get/discovery.go b/src/cmd/go/internal/get/discovery.go index 6ba5c091e3..afa6ef455f 100644 --- a/src/cmd/go/internal/get/discovery.go +++ b/src/cmd/go/internal/get/discovery.go @@ -11,15 +11,15 @@ import ( "strings" ) -// charsetReader returns a reader for the given charset. Currently -// it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful +// charsetReader returns a reader that converts from the given charset to UTF-8. 
+// Currently it only supports UTF-8 and ASCII. Otherwise, it returns a meaningful // error which is printed by go get, so the user can find why the package // wasn't downloaded if the encoding is not supported. Note that, in // order to reduce potential errors, ASCII is treated as UTF-8 (i.e. characters // greater than 0x7f are not rejected). func charsetReader(charset string, input io.Reader) (io.Reader, error) { switch strings.ToLower(charset) { - case "ascii": + case "utf-8", "ascii": return input, nil default: return nil, fmt.Errorf("can't decode XML document using charset %q", charset) @@ -28,16 +28,16 @@ func charsetReader(charset string, input io.Reader) (io.Reader, error) { // parseMetaGoImports returns meta imports from the HTML in r. // Parsing ends at the end of the section or the beginning of the . -func parseMetaGoImports(r io.Reader, mod ModuleMode) (imports []metaImport, err error) { +func parseMetaGoImports(r io.Reader, mod ModuleMode) ([]metaImport, error) { d := xml.NewDecoder(r) d.CharsetReader = charsetReader d.Strict = false - var t xml.Token + var imports []metaImport for { - t, err = d.RawToken() + t, err := d.RawToken() if err != nil { - if err == io.EOF || len(imports) > 0 { - err = nil + if err != io.EOF && len(imports) == 0 { + return nil, err } break } diff --git a/src/cmd/go/internal/get/vcs.go b/src/cmd/go/internal/get/vcs.go index 6ae3cffd93..3ccfbb8837 100644 --- a/src/cmd/go/internal/get/vcs.go +++ b/src/cmd/go/internal/get/vcs.go @@ -661,7 +661,7 @@ func RepoRootForImportPath(importPath string, mod ModuleMode, security web.Secur if err == errUnknownSite { rr, err = repoRootForImportDynamic(importPath, mod, security) if err != nil { - err = fmt.Errorf("unrecognized import path %q (%v)", importPath, err) + err = fmt.Errorf("unrecognized import path %q: %v", importPath, err) } } if err != nil { @@ -799,6 +799,13 @@ func repoRootForImportDynamic(importPath string, mod ModuleMode, security web.Se body := resp.Body defer body.Close() 
imports, err := parseMetaGoImports(body, mod) + if len(imports) == 0 { + if respErr := resp.Err(); respErr != nil { + // If the server's status was not OK, prefer to report that instead of + // an XML parse error. + return nil, respErr + } + } if err != nil { return nil, fmt.Errorf("parsing %s: %v", importPath, err) } @@ -909,6 +916,13 @@ func metaImportsForPrefix(importPrefix string, mod ModuleMode, security web.Secu body := resp.Body defer body.Close() imports, err := parseMetaGoImports(body, mod) + if len(imports) == 0 { + if respErr := resp.Err(); respErr != nil { + // If the server's status was not OK, prefer to report that instead of + // an XML parse error. + return setCache(fetchResult{url: url, err: respErr}) + } + } if err != nil { return setCache(fetchResult{url: url, err: fmt.Errorf("parsing %s: %v", resp.URL, err)}) } @@ -962,7 +976,7 @@ func (m ImportMismatchError) Error() string { // matchGoImport returns the metaImport from imports matching importPath. // An error is returned if there are multiple matches. -// errNoMatch is returned if none match. +// An ImportMismatchError is returned if none match. func matchGoImport(imports []metaImport, importPath string) (metaImport, error) { match := -1 diff --git a/src/cmd/go/internal/load/test.go b/src/cmd/go/internal/load/test.go index afff5deaaa..2864fb5ebb 100644 --- a/src/cmd/go/internal/load/test.go +++ b/src/cmd/go/internal/load/test.go @@ -399,10 +399,13 @@ func recompileForTest(pmain, preal, ptest, pxtest *Package) { } } - // Don't compile build info from a main package. This can happen - // if -coverpkg patterns include main packages, since those packages - // are imported by pmain. See golang.org/issue/30907. - if p.Internal.BuildInfo != "" && p != pmain { + // Force main packages the test imports to be built as libraries. 
+ // Normal imports of main packages are forbidden by the package loader, + // but this can still happen if -coverpkg patterns include main packages: + // covered packages are imported by pmain. Linking multiple packages + // compiled with '-p main' causes duplicate symbol errors. + // See golang.org/issue/30907, golang.org/issue/34114. + if p.Name == "main" && p != pmain { split() } } diff --git a/src/cmd/go/internal/modfetch/codehost/git.go b/src/cmd/go/internal/modfetch/codehost/git.go index d382e8ac9a..df895ec91b 100644 --- a/src/cmd/go/internal/modfetch/codehost/git.go +++ b/src/cmd/go/internal/modfetch/codehost/git.go @@ -6,9 +6,11 @@ package codehost import ( "bytes" + "errors" "fmt" "io" "io/ioutil" + "net/url" "os" "os/exec" "path/filepath" @@ -21,6 +23,7 @@ import ( "cmd/go/internal/lockedfile" "cmd/go/internal/par" "cmd/go/internal/semver" + "cmd/go/internal/web" ) // GitRepo returns the code repository at the given Git remote reference. @@ -34,6 +37,15 @@ func LocalGitRepo(remote string) (Repo, error) { return newGitRepoCached(remote, true) } +// A notExistError wraps another error to retain its original text +// but makes it opaquely equivalent to os.ErrNotExist. +type notExistError struct { + err error +} + +func (e notExistError) Error() string { return e.err.Error() } +func (notExistError) Is(err error) bool { return err == os.ErrNotExist } + const gitWorkDirType = "git3" var gitRepoCache par.Cache @@ -85,8 +97,9 @@ func newGitRepo(remote string, localOK bool) (Repo, error) { os.RemoveAll(r.dir) return nil, err } - r.remote = "origin" } + r.remoteURL = r.remote + r.remote = "origin" } else { // Local path. 
// Disallow colon (not in ://) because sometimes @@ -113,9 +126,9 @@ func newGitRepo(remote string, localOK bool) (Repo, error) { } type gitRepo struct { - remote string - local bool - dir string + remote, remoteURL string + local bool + dir string mu lockedfile.Mutex // protects fetchLevel and git repo state @@ -166,14 +179,25 @@ func (r *gitRepo) loadRefs() { // The git protocol sends all known refs and ls-remote filters them on the client side, // so we might as well record both heads and tags in one shot. // Most of the time we only care about tags but sometimes we care about heads too. - out, err := Run(r.dir, "git", "ls-remote", "-q", r.remote) - if err != nil { - if rerr, ok := err.(*RunError); ok { + out, gitErr := Run(r.dir, "git", "ls-remote", "-q", r.remote) + if gitErr != nil { + if rerr, ok := gitErr.(*RunError); ok { if bytes.Contains(rerr.Stderr, []byte("fatal: could not read Username")) { rerr.HelpText = "Confirm the import path was entered correctly.\nIf this is a private repository, see https://golang.org/doc/faq#git_https for additional information." } } - r.refsErr = err + + // If the remote URL doesn't exist at all, ideally we should treat the whole + // repository as nonexistent by wrapping the error in a notExistError. + // For HTTP and HTTPS, that's easy to detect: we'll try to fetch the URL + // ourselves and see what code it serves. 
+ if u, err := url.Parse(r.remoteURL); err == nil && (u.Scheme == "http" || u.Scheme == "https") { + if _, err := web.GetBytes(u); errors.Is(err, os.ErrNotExist) { + gitErr = notExistError{gitErr} + } + } + + r.refsErr = gitErr return } diff --git a/src/cmd/go/internal/modfetch/coderepo.go b/src/cmd/go/internal/modfetch/coderepo.go index f15ce67d46..541d856b28 100644 --- a/src/cmd/go/internal/modfetch/coderepo.go +++ b/src/cmd/go/internal/modfetch/coderepo.go @@ -140,7 +140,10 @@ func (r *codeRepo) Versions(prefix string) ([]string, error) { } tags, err := r.code.Tags(p) if err != nil { - return nil, err + return nil, &module.ModuleError{ + Path: r.modPath, + Err: err, + } } list := []string{} @@ -171,7 +174,10 @@ func (r *codeRepo) Versions(prefix string) ([]string, error) { // by referring to them with a +incompatible suffix, as in v17.0.0+incompatible. files, err := r.code.ReadFileRevs(incompatible, "go.mod", codehost.MaxGoMod) if err != nil { - return nil, err + return nil, &module.ModuleError{ + Path: r.modPath, + Err: err, + } } for _, rev := range incompatible { f := files[rev] @@ -632,7 +638,7 @@ func (r *codeRepo) findDir(version string) (rev, dir string, gomod []byte, err e // the real module, found at a different path, usable only in // a replace directive. // - // TODO(bcmills): This doesn't seem right. Investigate futher. + // TODO(bcmills): This doesn't seem right. Investigate further. // (Notably: why can't we replace foo/v2 with fork-of-foo/v3?) dir2 := path.Join(r.codeDir, r.pathMajor[1:]) file2 = path.Join(dir2, "go.mod") diff --git a/src/cmd/go/internal/modfetch/fetch.go b/src/cmd/go/internal/modfetch/fetch.go index 51a56028c4..2eead5f746 100644 --- a/src/cmd/go/internal/modfetch/fetch.go +++ b/src/cmd/go/internal/modfetch/fetch.go @@ -7,6 +7,7 @@ package modfetch import ( "archive/zip" "bytes" + "errors" "fmt" "io" "io/ioutil" @@ -361,19 +362,19 @@ func checkMod(mod module.Version) { // Do the file I/O before acquiring the go.sum lock. 
ziphash, err := CachePath(mod, "ziphash") if err != nil { - base.Fatalf("verifying %s@%s: %v", mod.Path, mod.Version, err) + base.Fatalf("verifying %v", module.VersionError(mod, err)) } data, err := renameio.ReadFile(ziphash) if err != nil { - if os.IsNotExist(err) { + if errors.Is(err, os.ErrNotExist) { // This can happen if someone does rm -rf GOPATH/src/cache/download. So it goes. return } - base.Fatalf("verifying %s@%s: %v", mod.Path, mod.Version, err) + base.Fatalf("verifying %v", module.VersionError(mod, err)) } h := strings.TrimSpace(string(data)) if !strings.HasPrefix(h, "h1:") { - base.Fatalf("verifying %s@%s: unexpected ziphash: %q", mod.Path, mod.Version, h) + base.Fatalf("verifying %v", module.VersionError(mod, fmt.Errorf("unexpected ziphash: %q", h))) } checkModSum(mod, h) diff --git a/src/cmd/go/internal/modfetch/proxy.go b/src/cmd/go/internal/modfetch/proxy.go index 569ef3a57a..8f75ad92a8 100644 --- a/src/cmd/go/internal/modfetch/proxy.go +++ b/src/cmd/go/internal/modfetch/proxy.go @@ -38,8 +38,8 @@ can be a module proxy. The GET requests sent to a Go module proxy are: -GET $GOPROXY//@v/list returns a list of all known versions of the -given module, one per line. +GET $GOPROXY//@v/list returns a list of known versions of the given +module, one per line. GET $GOPROXY//@v/.info returns JSON-formatted metadata about that version of the given module. @@ -50,6 +50,21 @@ for that version of the given module. GET $GOPROXY//@v/.zip returns the zip archive for that version of the given module. +GET $GOPROXY//@latest returns JSON-formatted metadata about the +latest known version of the given module in the same format as +/@v/.info. The latest version should be the version of +the module the go command may use if /@v/list is empty or no +listed version is suitable. /@latest is optional and may not +be implemented by a module proxy. 
+ +When resolving the latest version of a module, the go command will request +/@v/list, then, if no suitable versions are found, /@latest. +The go command prefers, in order: the semantically highest release version, +the semantically highest pre-release version, and the chronologically +most recent pseudo-version. In Go 1.12 and earlier, the go command considered +pseudo-versions in /@v/list to be pre-release versions, but this is +no longer true since Go 1.13. + To avoid problems when serving from case-sensitive file systems, the and elements are case-encoded, replacing every uppercase letter with an exclamation mark followed by the corresponding @@ -150,13 +165,28 @@ func TryProxies(f func(proxy string) error) error { return f("off") } + var lastAttemptErr error for _, proxy := range proxies { err = f(proxy) if !errors.Is(err, os.ErrNotExist) { + lastAttemptErr = err break } + + // The error indicates that the module does not exist. + // In general we prefer to report the last such error, + // because it indicates the error that occurs after all other + // options have been exhausted. + // + // However, for modules in the NOPROXY list, the most useful error occurs + // first (with proxy set to "noproxy"), and the subsequent errors are all + // errNoProxy (which is not particularly helpful). Do not overwrite a more + // useful error with errNoproxy. + if lastAttemptErr == nil || !errors.Is(err, errNoproxy) { + lastAttemptErr = err + } } - return err + return lastAttemptErr } type proxyRepo struct { diff --git a/src/cmd/go/internal/modget/get.go b/src/cmd/go/internal/modget/get.go index 1cae311c4c..3fcd2d412a 100644 --- a/src/cmd/go/internal/modget/get.go +++ b/src/cmd/go/internal/modget/get.go @@ -678,6 +678,15 @@ func runGet(cmd *base.Command, args []string) { if *getD || len(pkgPatterns) == 0 { return } + // TODO(golang.org/issue/32483): handle paths ending with ".go" consistently + // with 'go build'. 
When we load packages above, we interpret arguments as + // package patterns, not source files. To preserve that interpretation here, + // we add a trailing slash to any patterns ending with ".go". + for i := range pkgPatterns { + if strings.HasSuffix(pkgPatterns[i], ".go") { + pkgPatterns[i] += "/" + } + } work.BuildInit() pkgs := load.PackagesForBuild(pkgPatterns) work.InstallPackages(pkgPatterns, pkgs) diff --git a/src/cmd/go/internal/modload/import.go b/src/cmd/go/internal/modload/import.go index 70add3507a..f0777089d4 100644 --- a/src/cmd/go/internal/modload/import.go +++ b/src/cmd/go/internal/modload/import.go @@ -28,6 +28,7 @@ import ( type ImportMissingError struct { ImportPath string Module module.Version + QueryErr error // newMissingVersion is set to a newer version of Module if one is present // in the build list. When set, we can't automatically upgrade. @@ -39,9 +40,16 @@ func (e *ImportMissingError) Error() string { if str.HasPathPrefix(e.ImportPath, "cmd") { return fmt.Sprintf("package %s is not in GOROOT (%s)", e.ImportPath, filepath.Join(cfg.GOROOT, "src", e.ImportPath)) } + if e.QueryErr != nil { + return fmt.Sprintf("cannot find module providing package %s: %v", e.ImportPath, e.QueryErr) + } return "cannot find module providing package " + e.ImportPath } - return "missing module for import: " + e.Module.Path + "@" + e.Module.Version + " provides " + e.ImportPath + return fmt.Sprintf("missing module for import: %s@%s provides %s", e.Module.Path, e.Module.Version, e.ImportPath) +} + +func (e *ImportMissingError) Unwrap() error { + return e.QueryErr } // Import finds the module and directory in the build list @@ -197,7 +205,7 @@ func Import(path string) (m module.Version, dir string, err error) { if errors.Is(err, os.ErrNotExist) { // Return "cannot find module providing package […]" instead of whatever // low-level error QueryPackage produced. 
- return module.Version{}, "", &ImportMissingError{ImportPath: path} + return module.Version{}, "", &ImportMissingError{ImportPath: path, QueryErr: err} } else { return module.Version{}, "", err } diff --git a/src/cmd/go/internal/module/module.go b/src/cmd/go/internal/module/module.go index 3e0baba15b..3d1ad27628 100644 --- a/src/cmd/go/internal/module/module.go +++ b/src/cmd/go/internal/module/module.go @@ -33,11 +33,13 @@ type Version struct { Path string // Version is usually a semantic version in canonical form. - // There are two exceptions to this general rule. + // There are three exceptions to this general rule. // First, the top-level target of a build has no specific version // and uses Version = "". // Second, during MVS calculations the version "none" is used // to represent the decision to take no version of a given module. + // Third, filesystem paths found in "replace" directives are + // represented by a path with an empty version. Version string `json:",omitempty"` } @@ -48,8 +50,13 @@ type ModuleError struct { Err error } -// VersionError returns a ModuleError derived from a Version and error. +// VersionError returns a ModuleError derived from a Version and error, +// or err itself if it is already such an error. 
func VersionError(v Version, err error) error { + var mErr *ModuleError + if errors.As(err, &mErr) && mErr.Path == v.Path && mErr.Version == v.Version { + return err + } return &ModuleError{ Path: v.Path, Version: v.Version, diff --git a/src/cmd/go/internal/renameio/umask_test.go b/src/cmd/go/internal/renameio/umask_test.go index 031fe46e09..1d5e594e7e 100644 --- a/src/cmd/go/internal/renameio/umask_test.go +++ b/src/cmd/go/internal/renameio/umask_test.go @@ -19,6 +19,7 @@ func TestWriteFileModeAppliesUmask(t *testing.T) { if err != nil { t.Fatalf("Failed to create temporary directory: %v", err) } + defer os.RemoveAll(dir) const mode = 0644 const umask = 0007 @@ -29,7 +30,6 @@ func TestWriteFileModeAppliesUmask(t *testing.T) { if err != nil { t.Fatalf("Failed to write file: %v", err) } - defer os.RemoveAll(dir) fi, err := os.Stat(file) if err != nil { diff --git a/src/cmd/go/internal/search/search.go b/src/cmd/go/internal/search/search.go index 0167c8d755..33ab4ae36e 100644 --- a/src/cmd/go/internal/search/search.go +++ b/src/cmd/go/internal/search/search.go @@ -241,7 +241,7 @@ func TreeCanMatchPattern(pattern string) func(name string) bool { // // First, /... at the end of the pattern can match an empty string, // so that net/... matches both net and packages in its subdirectories, like net/http. -// Second, any slash-separted pattern element containing a wildcard never +// Second, any slash-separated pattern element containing a wildcard never // participates in a match of the "vendor" element in the path of a vendored // package, so that ./... does not match packages in subdirectories of // ./vendor or ./mycode/vendor, but ./vendor/... and ./mycode/vendor/... do. @@ -363,30 +363,40 @@ func ImportPathsQuiet(patterns []string) []*Match { // CleanPatterns returns the patterns to use for the given // command line. It canonicalizes the patterns but does not -// evaluate any matches. +// evaluate any matches. 
It preserves text after '@' for commands +// that accept versions. func CleanPatterns(patterns []string) []string { if len(patterns) == 0 { return []string{"."} } var out []string for _, a := range patterns { + var p, v string + if i := strings.IndexByte(a, '@'); i < 0 { + p = a + } else { + p = a[:i] + v = a[i:] + } + // Arguments are supposed to be import paths, but // as a courtesy to Windows developers, rewrite \ to / // in command-line arguments. Handles .\... and so on. if filepath.Separator == '\\' { - a = strings.ReplaceAll(a, `\`, `/`) + p = strings.ReplaceAll(p, `\`, `/`) } // Put argument in canonical form, but preserve leading ./. - if strings.HasPrefix(a, "./") { - a = "./" + path.Clean(a) - if a == "./." { - a = "." + if strings.HasPrefix(p, "./") { + p = "./" + path.Clean(p) + if p == "./." { + p = "." } } else { - a = path.Clean(a) + p = path.Clean(p) } - out = append(out, a) + + out = append(out, p+v) } return out } diff --git a/src/cmd/go/internal/search/search_test.go b/src/cmd/go/internal/search/search_test.go index 0bef765fa4..5f27daf3fb 100644 --- a/src/cmd/go/internal/search/search_test.go +++ b/src/cmd/go/internal/search/search_test.go @@ -33,7 +33,7 @@ var matchPatternTests = ` match net net/http not not/http not/net/http netchan - # Second, any slash-separted pattern element containing a wildcard never + # Second, any slash-separated pattern element containing a wildcard never # participates in a match of the "vendor" element in the path of a vendored # package, so that ./... does not match packages in subdirectories of # ./vendor or ./mycode/vendor, but ./vendor/... and ./mycode/vendor/... do. 
diff --git a/src/cmd/go/internal/sumweb/client.go b/src/cmd/go/internal/sumweb/client.go index 6973e5ac17..a35dbb39be 100644 --- a/src/cmd/go/internal/sumweb/client.go +++ b/src/cmd/go/internal/sumweb/client.go @@ -107,7 +107,7 @@ func NewConn(client Client) *Conn { } } -// init initiailzes the conn (if not already initialized) +// init initializes the conn (if not already initialized) // and returns any initialization error. func (c *Conn) init() error { c.initOnce.Do(c.initWork) diff --git a/src/cmd/go/internal/tlog/tlog.go b/src/cmd/go/internal/tlog/tlog.go index 6703656b19..1746ed9157 100644 --- a/src/cmd/go/internal/tlog/tlog.go +++ b/src/cmd/go/internal/tlog/tlog.go @@ -121,7 +121,7 @@ func NodeHash(left, right Hash) Hash { func StoredHashIndex(level int, n int64) int64 { // Level L's n'th hash is written right after level L+1's 2n+1'th hash. // Work our way down to the level 0 ordering. - // We'll add back the orignal level count at the end. + // We'll add back the original level count at the end. for l := level; l > 0; l-- { n = 2*n + 1 } @@ -155,7 +155,7 @@ func SplitStoredHashIndex(index int64) (level int, n int64) { n++ indexN = x } - // The hash we want was commited with record n, + // The hash we want was committed with record n, // meaning it is one of (0, n), (1, n/2), (2, n/4), ... 
level = int(index - indexN) return level, n >> uint(level) diff --git a/src/cmd/go/internal/web/api.go b/src/cmd/go/internal/web/api.go index cd0e19d3ff..ad99eb2f8c 100644 --- a/src/cmd/go/internal/web/api.go +++ b/src/cmd/go/internal/web/api.go @@ -10,12 +10,15 @@ package web import ( + "bytes" "fmt" "io" "io/ioutil" "net/url" "os" "strings" + "unicode" + "unicode/utf8" ) // SecurityMode specifies whether a function should make network @@ -34,9 +37,32 @@ type HTTPError struct { URL string // redacted Status string StatusCode int + Err error // underlying error, if known + Detail string // limited to maxErrorDetailLines and maxErrorDetailBytes } +const ( + maxErrorDetailLines = 8 + maxErrorDetailBytes = maxErrorDetailLines * 81 +) + func (e *HTTPError) Error() string { + if e.Detail != "" { + detailSep := " " + if strings.ContainsRune(e.Detail, '\n') { + detailSep = "\n\t" + } + return fmt.Sprintf("reading %s: %v\n\tserver response:%s%s", e.URL, e.Status, detailSep, e.Detail) + } + + if err := e.Err; err != nil { + if pErr, ok := e.Err.(*os.PathError); ok && strings.HasSuffix(e.URL, pErr.Path) { + // Remove the redundant copy of the path. + err = pErr.Err + } + return fmt.Sprintf("reading %s: %v", e.URL, err) + } + return fmt.Sprintf("reading %s: %v", e.URL, e.Status) } @@ -44,6 +70,10 @@ func (e *HTTPError) Is(target error) bool { return target == os.ErrNotExist && (e.StatusCode == 404 || e.StatusCode == 410) } +func (e *HTTPError) Unwrap() error { + return e.Err +} + // GetBytes returns the body of the requested resource, or an error if the // response status was not http.StatusOK. // @@ -69,16 +99,69 @@ type Response struct { Status string StatusCode int Header map[string][]string - Body io.ReadCloser + Body io.ReadCloser // Either the original body or &errorDetail. + + fileErr error + errorDetail errorDetailBuffer } // Err returns an *HTTPError corresponding to the response r. -// It returns nil if the response r has StatusCode 200 or 0 (unset). 
+// If the response r has StatusCode 200 or 0 (unset), Err returns nil. +// Otherwise, Err may read from r.Body in order to extract relevant error detail. func (r *Response) Err() error { if r.StatusCode == 200 || r.StatusCode == 0 { return nil } - return &HTTPError{URL: r.URL, Status: r.Status, StatusCode: r.StatusCode} + + return &HTTPError{ + URL: r.URL, + Status: r.Status, + StatusCode: r.StatusCode, + Err: r.fileErr, + Detail: r.formatErrorDetail(), + } +} + +// formatErrorDetail converts r.errorDetail (a prefix of the output of r.Body) +// into a short, tab-indented summary. +func (r *Response) formatErrorDetail() string { + if r.Body != &r.errorDetail { + return "" // Error detail collection not enabled. + } + + // Ensure that r.errorDetail has been populated. + _, _ = io.Copy(ioutil.Discard, r.Body) + + s := r.errorDetail.buf.String() + if !utf8.ValidString(s) { + return "" // Don't try to recover non-UTF-8 error messages. + } + for _, r := range s { + if !unicode.IsGraphic(r) && !unicode.IsSpace(r) { + return "" // Don't let the server do any funny business with the user's terminal. + } + } + + var detail strings.Builder + for i, line := range strings.Split(s, "\n") { + if strings.TrimSpace(line) == "" { + break // Stop at the first blank line. + } + if i > 0 { + detail.WriteString("\n\t") + } + if i >= maxErrorDetailLines { + detail.WriteString("[Truncated: too many lines.]") + break + } + if detail.Len()+len(line) > maxErrorDetailBytes { + detail.WriteString("[Truncated: too long.]") + break + } + detail.WriteString(line) + } + + return detail.String() } // Get returns the body of the HTTP or HTTPS resource specified at the given URL. @@ -131,3 +214,39 @@ func Join(u *url.URL, path string) *url.URL { j.RawPath = strings.TrimSuffix(u.RawPath, "/") + "/" + strings.TrimPrefix(path, "/") return &j } + +// An errorDetailBuffer is an io.ReadCloser that copies up to +// maxErrorDetailLines into a buffer for later inspection. 
+type errorDetailBuffer struct { + r io.ReadCloser + buf strings.Builder + bufLines int +} + +func (b *errorDetailBuffer) Close() error { + return b.r.Close() +} + +func (b *errorDetailBuffer) Read(p []byte) (n int, err error) { + n, err = b.r.Read(p) + + // Copy the first maxErrorDetailLines+1 lines into b.buf, + // discarding any further lines. + // + // Note that the read may begin or end in the middle of a UTF-8 character, + // so don't try to do anything fancy with characters that encode to larger + // than one byte. + if b.bufLines <= maxErrorDetailLines { + for _, line := range bytes.SplitAfterN(p[:n], []byte("\n"), maxErrorDetailLines-b.bufLines) { + b.buf.Write(line) + if len(line) > 0 && line[len(line)-1] == '\n' { + b.bufLines++ + if b.bufLines > maxErrorDetailLines { + break + } + } + } + } + + return n, err +} diff --git a/src/cmd/go/internal/web/http.go b/src/cmd/go/internal/web/http.go index b790fe9916..5e4319b00e 100644 --- a/src/cmd/go/internal/web/http.go +++ b/src/cmd/go/internal/web/http.go @@ -14,7 +14,7 @@ package web import ( "crypto/tls" "fmt" - "io/ioutil" + "mime" "net/http" urlpkg "net/url" "os" @@ -64,7 +64,7 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { Status: "404 testing", StatusCode: 404, Header: make(map[string][]string), - Body: ioutil.NopCloser(strings.NewReader("")), + Body: http.NoBody, } if cfg.BuildX { fmt.Fprintf(os.Stderr, "# get %s: %v (%.3fs)\n", Redacted(url), res.Status, time.Since(start).Seconds()) @@ -111,7 +111,7 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { fetched, res, err = fetch(secure) if err != nil { if cfg.BuildX { - fmt.Fprintf(os.Stderr, "# get %s: %v\n", Redacted(url), err) + fmt.Fprintf(os.Stderr, "# get %s: %v\n", Redacted(secure), err) } if security != Insecure || url.Scheme == "https" { // HTTPS failed, and we can't fall back to plain HTTP. 
@@ -146,7 +146,7 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { insecure.Scheme = "http" if insecure.User != nil && security != Insecure { if cfg.BuildX { - fmt.Fprintf(os.Stderr, "# get %s: insecure credentials\n", Redacted(url)) + fmt.Fprintf(os.Stderr, "# get %s: insecure credentials\n", Redacted(insecure)) } return nil, fmt.Errorf("refusing to pass credentials to insecure URL: %s", Redacted(insecure)) } @@ -154,7 +154,7 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { fetched, res, err = fetch(insecure) if err != nil { if cfg.BuildX { - fmt.Fprintf(os.Stderr, "# get %s: %v\n", Redacted(url), err) + fmt.Fprintf(os.Stderr, "# get %s: %v\n", Redacted(insecure), err) } // HTTP failed, and we already tried HTTPS if applicable. // Report the error from the HTTP attempt. @@ -165,8 +165,9 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { // Note: accepting a non-200 OK here, so people can serve a // meta import in their http 404 page. if cfg.BuildX { - fmt.Fprintf(os.Stderr, "# get %s: %v (%.3fs)\n", Redacted(url), res.Status, time.Since(start).Seconds()) + fmt.Fprintf(os.Stderr, "# get %s: %v (%.3fs)\n", Redacted(fetched), res.Status, time.Since(start).Seconds()) } + r := &Response{ URL: Redacted(fetched), Status: res.Status, @@ -174,6 +175,20 @@ func get(security SecurityMode, url *urlpkg.URL) (*Response, error) { Header: map[string][]string(res.Header), Body: res.Body, } + + if res.StatusCode != http.StatusOK { + contentType := res.Header.Get("Content-Type") + if mediaType, params, _ := mime.ParseMediaType(contentType); mediaType == "text/plain" { + switch charset := strings.ToLower(params["charset"]); charset { + case "us-ascii", "utf-8", "": + // Body claims to be plain text in UTF-8 or a subset thereof. + // Try to extract a useful error message from it. 
+ r.errorDetail.r = res.Body + r.Body = &r.errorDetail + } + } + } + return r, nil } @@ -190,6 +205,7 @@ func getFile(u *urlpkg.URL) (*Response, error) { Status: http.StatusText(http.StatusNotFound), StatusCode: http.StatusNotFound, Body: http.NoBody, + fileErr: err, }, nil } @@ -199,6 +215,7 @@ func getFile(u *urlpkg.URL) (*Response, error) { Status: http.StatusText(http.StatusForbidden), StatusCode: http.StatusForbidden, Body: http.NoBody, + fileErr: err, }, nil } diff --git a/src/cmd/go/internal/work/action.go b/src/cmd/go/internal/work/action.go index 33b7818fb2..0f35739976 100644 --- a/src/cmd/go/internal/work/action.go +++ b/src/cmd/go/internal/work/action.go @@ -702,7 +702,7 @@ func (b *Builder) addInstallHeaderAction(a *Action) { } // buildmodeShared takes the "go build" action a1 into the building of a shared library of a1.Deps. -// That is, the input a1 represents "go build pkgs" and the result represents "go build -buidmode=shared pkgs". +// That is, the input a1 represents "go build pkgs" and the result represents "go build -buildmode=shared pkgs". func (b *Builder) buildmodeShared(mode, depMode BuildMode, args []string, pkgs []*load.Package, a1 *Action) *Action { name, err := libname(args, pkgs) if err != nil { diff --git a/src/cmd/go/internal/work/build.go b/src/cmd/go/internal/work/build.go index 9305b2d859..9d6fa0c25b 100644 --- a/src/cmd/go/internal/work/build.go +++ b/src/cmd/go/internal/work/build.go @@ -62,11 +62,13 @@ and test commands: The default is the number of CPUs available. -race enable data race detection. - Supported only on linux/amd64, freebsd/amd64, darwin/amd64 and windows/amd64. + Supported only on linux/amd64, freebsd/amd64, darwin/amd64, windows/amd64, + linux/ppc64le and linux/arm64 (only for 48-bit VMA). -msan enable interoperation with memory sanitizer. Supported only on linux/amd64, linux/arm64 and only with Clang/LLVM as the host C compiler. + On linux/arm64, pie build mode will be used. 
-v print the names of packages as they are compiled. -work diff --git a/src/cmd/go/internal/work/buildid.go b/src/cmd/go/internal/work/buildid.go index bf485d75ad..27bde8c615 100644 --- a/src/cmd/go/internal/work/buildid.go +++ b/src/cmd/go/internal/work/buildid.go @@ -292,14 +292,19 @@ func (b *Builder) gccgoToolID(name, language string) (string, error) { exe = lp } } - if _, err := os.Stat(exe); err != nil { - return "", fmt.Errorf("%s: can not find compiler %q: %v; output %q", name, exe, err, out) + id, err = buildid.ReadFile(exe) + if err != nil { + return "", err + } + + // If we can't find a build ID, use a hash. + if id == "" { + id = b.fileHash(exe) } - id = b.fileHash(exe) } b.id.Lock() - b.toolIDCache[name] = id + b.toolIDCache[key] = id b.id.Unlock() return id, nil diff --git a/src/cmd/go/internal/work/exec.go b/src/cmd/go/internal/work/exec.go index b68f902853..4564e32e65 100644 --- a/src/cmd/go/internal/work/exec.go +++ b/src/cmd/go/internal/work/exec.go @@ -54,8 +54,9 @@ func actionList(root *Action) []*Action { // do runs the action graph rooted at root. func (b *Builder) Do(root *Action) { - if c := cache.Default(); c != nil && !b.IsCmdList { + if !b.IsCmdList { // If we're doing real work, take time at the end to trim the cache. + c := cache.Default() defer c.Trim() } @@ -406,15 +407,19 @@ func (b *Builder) build(a *Action) (err error) { if b.NeedExport { p.Export = a.built } - if need&needCompiledGoFiles != 0 && b.loadCachedSrcFiles(a) { - need &^= needCompiledGoFiles + if need&needCompiledGoFiles != 0 { + if err := b.loadCachedSrcFiles(a); err == nil { + need &^= needCompiledGoFiles + } } } // Source files might be cached, even if the full action is not // (e.g., go list -compiled -find). 
- if !cachedBuild && need&needCompiledGoFiles != 0 && b.loadCachedSrcFiles(a) { - need &^= needCompiledGoFiles + if !cachedBuild && need&needCompiledGoFiles != 0 { + if err := b.loadCachedSrcFiles(a); err == nil { + need &^= needCompiledGoFiles + } } if need == 0 { @@ -459,16 +464,20 @@ func (b *Builder) build(a *Action) (err error) { objdir := a.Objdir // Load cached cgo header, but only if we're skipping the main build (cachedBuild==true). - if cachedBuild && need&needCgoHdr != 0 && b.loadCachedCgoHdr(a) { - need &^= needCgoHdr + if cachedBuild && need&needCgoHdr != 0 { + if err := b.loadCachedCgoHdr(a); err == nil { + need &^= needCgoHdr + } } // Load cached vet config, but only if that's all we have left // (need == needVet, not testing just the one bit). // If we are going to do a full build anyway, // we're going to regenerate the files below anyway. - if need == needVet && b.loadCachedVet(a) { - need &^= needVet + if need == needVet { + if err := b.loadCachedVet(a); err == nil { + need &^= needVet + } } if need == 0 { return nil @@ -615,8 +624,8 @@ func (b *Builder) build(a *Action) (err error) { need &^= needVet } if need&needCompiledGoFiles != 0 { - if !b.loadCachedSrcFiles(a) { - return fmt.Errorf("failed to cache compiled Go files") + if err := b.loadCachedSrcFiles(a); err != nil { + return fmt.Errorf("loading compiled Go files from cache: %w", err) } need &^= needCompiledGoFiles } @@ -788,7 +797,7 @@ func (b *Builder) cacheObjdirFile(a *Action, c *cache.Cache, name string) error func (b *Builder) findCachedObjdirFile(a *Action, c *cache.Cache, name string) (string, error) { file, _, err := c.GetFile(cache.Subkey(a.actionID, name)) if err != nil { - return "", err + return "", fmt.Errorf("loading cached file %s: %w", name, err) } return file, nil } @@ -803,26 +812,16 @@ func (b *Builder) loadCachedObjdirFile(a *Action, c *cache.Cache, name string) e func (b *Builder) cacheCgoHdr(a *Action) { c := cache.Default() - if c == nil { - return - } 
b.cacheObjdirFile(a, c, "_cgo_install.h") } -func (b *Builder) loadCachedCgoHdr(a *Action) bool { +func (b *Builder) loadCachedCgoHdr(a *Action) error { c := cache.Default() - if c == nil { - return false - } - err := b.loadCachedObjdirFile(a, c, "_cgo_install.h") - return err == nil + return b.loadCachedObjdirFile(a, c, "_cgo_install.h") } func (b *Builder) cacheSrcFiles(a *Action, srcfiles []string) { c := cache.Default() - if c == nil { - return - } var buf bytes.Buffer for _, file := range srcfiles { if !strings.HasPrefix(file, a.Objdir) { @@ -842,14 +841,11 @@ func (b *Builder) cacheSrcFiles(a *Action, srcfiles []string) { c.PutBytes(cache.Subkey(a.actionID, "srcfiles"), buf.Bytes()) } -func (b *Builder) loadCachedVet(a *Action) bool { +func (b *Builder) loadCachedVet(a *Action) error { c := cache.Default() - if c == nil { - return false - } list, _, err := c.GetBytes(cache.Subkey(a.actionID, "srcfiles")) if err != nil { - return false + return fmt.Errorf("reading srcfiles list: %w", err) } var srcfiles []string for _, name := range strings.Split(string(list), "\n") { @@ -861,22 +857,19 @@ func (b *Builder) loadCachedVet(a *Action) bool { continue } if err := b.loadCachedObjdirFile(a, c, name); err != nil { - return false + return err } srcfiles = append(srcfiles, a.Objdir+name) } buildVetConfig(a, srcfiles) - return true + return nil } -func (b *Builder) loadCachedSrcFiles(a *Action) bool { +func (b *Builder) loadCachedSrcFiles(a *Action) error { c := cache.Default() - if c == nil { - return false - } list, _, err := c.GetBytes(cache.Subkey(a.actionID, "srcfiles")) if err != nil { - return false + return fmt.Errorf("reading srcfiles list: %w", err) } var files []string for _, name := range strings.Split(string(list), "\n") { @@ -889,12 +882,12 @@ func (b *Builder) loadCachedSrcFiles(a *Action) bool { } file, err := b.findCachedObjdirFile(a, c, name) if err != nil { - return false + return fmt.Errorf("finding %s: %w", name, err) } files = append(files, file) } 
a.Package.CompiledGoFiles = files - return true + return nil } // vetConfig is the configuration passed to vet describing a single package. @@ -1058,12 +1051,11 @@ func (b *Builder) vet(a *Action) error { } key := cache.ActionID(h.Sum()) - if vcfg.VetxOnly { - if c := cache.Default(); c != nil && !cfg.BuildA { - if file, _, err := c.GetFile(key); err == nil { - a.built = file - return nil - } + if vcfg.VetxOnly && !cfg.BuildA { + c := cache.Default() + if file, _, err := c.GetFile(key); err == nil { + a.built = file + return nil } } @@ -1091,9 +1083,7 @@ func (b *Builder) vet(a *Action) error { // If vet wrote export data, save it for input to future vets. if f, err := os.Open(vcfg.VetxOutput); err == nil { a.built = vcfg.VetxOutput - if c := cache.Default(); c != nil { - c.Put(key, f) - } + cache.Default().Put(key, f) f.Close() } @@ -1650,7 +1640,7 @@ func (b *Builder) copyFile(dst, src string, perm os.FileMode, force bool) error df, err = os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm) } if err != nil { - return err + return fmt.Errorf("copying %s: %w", src, err) // err should already refer to dst } _, err = io.Copy(df, sf) diff --git a/src/cmd/go/internal/work/gccgo.go b/src/cmd/go/internal/work/gccgo.go index 24d856ca1e..cf5dba189e 100644 --- a/src/cmd/go/internal/work/gccgo.go +++ b/src/cmd/go/internal/work/gccgo.go @@ -332,7 +332,7 @@ func (tools gccgoToolchain) link(b *Builder, root *Action, out, importcfg string } if haveShlib[filepath.Base(a.Target)] { - // This is a shared library we want to link againt. + // This is a shared library we want to link against. 
if !addedShlib[a.Target] { shlibs = append(shlibs, a.Target) addedShlib[a.Target] = true diff --git a/src/cmd/go/internal/work/init.go b/src/cmd/go/internal/work/init.go index c220d87123..548e73515f 100644 --- a/src/cmd/go/internal/work/init.go +++ b/src/cmd/go/internal/work/init.go @@ -60,6 +60,11 @@ func instrumentInit() { mode := "race" if cfg.BuildMSan { mode = "msan" + // MSAN does not support non-PIE binaries on ARM64. + // See issue #33712 for details. + if cfg.Goos == "linux" && cfg.Goarch == "arm64" && cfg.BuildBuildmode == "default" { + cfg.BuildBuildmode = "pie" + } } modeFlag := "-" + mode diff --git a/src/cmd/go/testdata/mod/example.com_dotgo.go_v1.0.0.txt b/src/cmd/go/testdata/mod/example.com_dotgo.go_v1.0.0.txt new file mode 100644 index 0000000000..4f7f4d7dd2 --- /dev/null +++ b/src/cmd/go/testdata/mod/example.com_dotgo.go_v1.0.0.txt @@ -0,0 +1,16 @@ +This module's path ends with ".go". +Based on github.com/nats-io/nats.go. +Used in regression tests for golang.org/issue/32483. + +-- .mod -- +module example.com/dotgo.go + +go 1.13 +-- .info -- +{"Version":"v1.0.0"} +-- go.mod -- +module example.com/dotgo.go + +go 1.13 +-- dotgo.go -- +package dotgo diff --git a/src/cmd/go/testdata/script/cover_pkgall_multiple_mains.txt b/src/cmd/go/testdata/script/cover_pkgall_multiple_mains.txt index ab7cd66949..f21cd8b3a8 100644 --- a/src/cmd/go/testdata/script/cover_pkgall_multiple_mains.txt +++ b/src/cmd/go/testdata/script/cover_pkgall_multiple_mains.txt @@ -1,29 +1,32 @@ # This test checks that multiple main packages can be tested # with -coverpkg=all without duplicate symbol errors. -# Verifies golang.org/issue/30374. - -env GO111MODULE=on +# Verifies golang.org/issue/30374, golang.org/issue/34114. [short] skip +cd $GOPATH/src/example.com/cov + +env GO111MODULE=on +go test -coverpkg=all ./... +env GO111MODULE=off go test -coverpkg=all ./... 
--- go.mod -- +-- $GOPATH/src/example.com/cov/go.mod -- module example.com/cov --- mainonly/mainonly.go -- +-- $GOPATH/src/example.com/cov/mainonly/mainonly.go -- package main func main() {} --- mainwithtest/mainwithtest.go -- +-- $GOPATH/src/example.com/cov/mainwithtest/mainwithtest.go -- package main func main() {} func Foo() {} --- mainwithtest/mainwithtest_test.go -- +-- $GOPATH/src/example.com/cov/mainwithtest/mainwithtest_test.go -- package main import "testing" @@ -32,10 +35,10 @@ func TestFoo(t *testing.T) { Foo() } --- xtest/x.go -- +-- $GOPATH/src/example.com/cov/xtest/x.go -- package x --- xtest/x_test.go -- +-- $GOPATH/src/example.com/cov/xtest/x_test.go -- package x_test import "testing" diff --git a/src/cmd/go/testdata/script/mod_auth.txt b/src/cmd/go/testdata/script/mod_auth.txt index fe1d65794a..5bcbcd1a18 100644 --- a/src/cmd/go/testdata/script/mod_auth.txt +++ b/src/cmd/go/testdata/script/mod_auth.txt @@ -8,6 +8,8 @@ env GOSUMDB=off # basic auth should fail. env NETRC=$WORK/empty ! go list all +stderr '^\tserver response: ACCESS DENIED, buddy$' +stderr '^\tserver response: File\? What file\?$' # With credentials from a netrc file, it should succeed. env NETRC=$WORK/netrc diff --git a/src/cmd/go/testdata/script/mod_get_trailing_slash.txt b/src/cmd/go/testdata/script/mod_get_trailing_slash.txt new file mode 100644 index 0000000000..8828738abb --- /dev/null +++ b/src/cmd/go/testdata/script/mod_get_trailing_slash.txt @@ -0,0 +1,32 @@ +# go list should fail to load a package ending with ".go" since that denotes +# a source file. However, ".go/" should work. +# TODO(golang.org/issue/32483): perhaps we should treat non-existent paths +# with .go suffixes as package paths instead. +! go list example.com/dotgo.go +go list example.com/dotgo.go/ +stdout ^example.com/dotgo.go$ + +# go get -d should succeed in either case, with or without a version. +# Arguments are interpreted as packages or package patterns with versions, +# not source files. 
+go get -d example.com/dotgo.go +go get -d example.com/dotgo.go/ +go get -d example.com/dotgo.go@v1.0.0 +go get -d example.com/dotgo.go/@v1.0.0 + +# go get (without -d) should also succeed in either case. +# TODO(golang.org/issue/32483): we should be consistent with 'go build', +# 'go list', and other commands. 'go list example.com/dotgo.go' (above) and +# 'go get example.com/dotgo.go' should both succeed or both fail. +[short] skip +go get example.com/dotgo.go +go get example.com/dotgo.go/ +go get example.com/dotgo.go@v1.0.0 +go get example.com/dotgo.go/@v1.0.0 + +-- go.mod -- +module m + +go 1.13 + +require example.com/dotgo.go v1.0.0 diff --git a/src/cmd/go/testdata/script/mod_getx.txt b/src/cmd/go/testdata/script/mod_getx.txt new file mode 100644 index 0000000000..36f33426df --- /dev/null +++ b/src/cmd/go/testdata/script/mod_getx.txt @@ -0,0 +1,13 @@ +[short] skip +[!net] skip + +env GO111MODULE=on +env GOPROXY=direct +env GOSUMDB=off + +# 'go get -x' should log URLs with an HTTP or HTTPS scheme. +# A bug had caused us to log schemeless URLs instead. +go get -x -d golang.org/x/text@v0.1.0 +stderr '^# get https://golang.org/x/text\?go-get=1$' +stderr '^# get https://golang.org/x/text\?go-get=1: 200 OK \([0-9.]+s\)$' +! stderr '^# get //.*' diff --git a/src/cmd/go/testdata/script/mod_missing_repo.txt b/src/cmd/go/testdata/script/mod_missing_repo.txt new file mode 100644 index 0000000000..8dae85fa88 --- /dev/null +++ b/src/cmd/go/testdata/script/mod_missing_repo.txt @@ -0,0 +1,15 @@ +# Regression test for golang.org/issue/34094: modules hosted within gitlab.com +# subgroups could not be fetched because the server returned bogus go-import +# tags for prefixes of the module path. + +[!net] skip +[!exec:git] skip + +env GO111MODULE=on +env GOPROXY=direct +env GOSUMDB=off + +! 
go get -d vcs-test.golang.org/go/missingrepo/missingrepo-git +stderr 'vcs-test.golang.org/go/missingrepo/missingrepo-git: git ls-remote .*: exit status .*' + +go get -d vcs-test.golang.org/go/missingrepo/missingrepo-git/notmissing diff --git a/src/cmd/go/testdata/script/mod_proxy_errors.txt b/src/cmd/go/testdata/script/mod_proxy_errors.txt new file mode 100644 index 0000000000..9cd1a824f0 --- /dev/null +++ b/src/cmd/go/testdata/script/mod_proxy_errors.txt @@ -0,0 +1,19 @@ +[!net] skip + +env GO111MODULE=on +env GOSUMDB=off +env GOPROXY=direct + +# Server responses should be truncated to some reasonable number of lines. +# (For now, exactly eight.) +! go list -m vcs-test.golang.org/auth/ormanylines@latest +stderr '\tserver response:\n(.|\n)*\tline 8\n\t\[Truncated: too many lines.\]$' + +# Server responses should be truncated to some reasonable number of characters. +! go list -m vcs-test.golang.org/auth/oronelongline@latest +! stderr 'blah{40}' +stderr '\tserver response: \[Truncated: too long\.\]$' + +# Responses from servers using the 'mod' protocol should be propagated. +! go list -m vcs-test.golang.org/go/modauth404@latest +stderr '\tserver response: File\? What file\?' diff --git a/src/cmd/go/testdata/script/mod_sumdb_file_path.txt b/src/cmd/go/testdata/script/mod_sumdb_file_path.txt index 47c8a3a0f3..4f4b99575a 100644 --- a/src/cmd/go/testdata/script/mod_sumdb_file_path.txt +++ b/src/cmd/go/testdata/script/mod_sumdb_file_path.txt @@ -7,10 +7,13 @@ env GOPATH=$WORK/gopath1 # With a file-based proxy with an empty checksum directory, # downloading a new module should fail, even if a subsequent # proxy contains a more complete mirror of the sum database. +# +# TODO(bcmills): The error message here is a bit redundant. +# It comes from the sumweb package, which isn't yet producing structured errors. [windows] env GOPROXY=file:///$WORK/sumproxy,https://proxy.golang.org [!windows] env GOPROXY=file://$WORK/sumproxy,https://proxy.golang.org ! 
go get -d golang.org/x/text@v0.3.2 -stderr '^verifying golang.org/x/text.*: Not Found' +stderr '^verifying golang.org/x/text@v0.3.2: golang.org/x/text@v0.3.2: reading file://.*/sumdb/sum.golang.org/lookup/golang.org/x/text@v0.3.2: (no such file or directory|.*cannot find the file specified.*)' # If the proxy does not claim to support the database, # checksum verification should fall through to the next proxy, diff --git a/src/cmd/gofmt/rewrite.go b/src/cmd/gofmt/rewrite.go index 79b7858a5a..bab22e04cd 100644 --- a/src/cmd/gofmt/rewrite.go +++ b/src/cmd/gofmt/rewrite.go @@ -271,6 +271,12 @@ func subst(m map[string]reflect.Value, pattern reflect.Value, pos reflect.Value) // Otherwise copy. switch p := pattern; p.Kind() { case reflect.Slice: + if p.IsNil() { + // Do not turn nil slices into empty slices. go/ast + // guarantees that certain lists will be nil if not + // populated. + return reflect.Zero(p.Type()) + } v := reflect.MakeSlice(p.Type(), p.Len(), p.Len()) for i := 0; i < p.Len(); i++ { v.Index(i).Set(subst(m, p.Index(i), pos)) diff --git a/src/cmd/gofmt/testdata/rewrite10.golden b/src/cmd/gofmt/testdata/rewrite10.golden new file mode 100644 index 0000000000..1dd781fbb0 --- /dev/null +++ b/src/cmd/gofmt/testdata/rewrite10.golden @@ -0,0 +1,19 @@ +//gofmt -r=a->a + +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Issue 33103, 33104, and 33105. + +package pkg + +func fn() { + _ = func() { + switch { + default: + } + } + _ = func() string {} + _ = func() { var ptr *string; println(ptr) } +} diff --git a/src/cmd/gofmt/testdata/rewrite10.input b/src/cmd/gofmt/testdata/rewrite10.input new file mode 100644 index 0000000000..1dd781fbb0 --- /dev/null +++ b/src/cmd/gofmt/testdata/rewrite10.input @@ -0,0 +1,19 @@ +//gofmt -r=a->a + +// Copyright 2019 The Go Authors. All rights reserved. 
+// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Issue 33103, 33104, and 33105. + +package pkg + +func fn() { + _ = func() { + switch { + default: + } + } + _ = func() string {} + _ = func() { var ptr *string; println(ptr) } +} diff --git a/src/cmd/internal/obj/arm64/asm7.go b/src/cmd/internal/obj/arm64/asm7.go index 88a447bcc8..47586ba262 100644 --- a/src/cmd/internal/obj/arm64/asm7.go +++ b/src/cmd/internal/obj/arm64/asm7.go @@ -1561,7 +1561,7 @@ func rclass(r int16) int { return C_GOK } -// con32class reclassifies the constant of 32-bit instruction. Becuase the constant type is 32-bit, +// con32class reclassifies the constant of 32-bit instruction. Because the constant type is 32-bit, // but saved in Offset which type is int64, con32class treats it as uint32 type and reclassifies it. func (c *ctxt7) con32class(a *obj.Addr) int { v := uint32(a.Offset) diff --git a/src/cmd/internal/obj/objfile.go b/src/cmd/internal/obj/objfile.go index 62d41e11eb..a3281a99e4 100644 --- a/src/cmd/internal/obj/objfile.go +++ b/src/cmd/internal/obj/objfile.go @@ -658,7 +658,7 @@ func (ctxt *Link) DwarfAbstractFunc(curfn interface{}, s *LSym, myimportpath str } } -// This table is designed to aid in the creation of references betweeen +// This table is designed to aid in the creation of references between // DWARF subprogram DIEs. 
// // In most cases when one DWARF DIE has to refer to another DWARF DIE, diff --git a/src/cmd/internal/obj/riscv/anames.go b/src/cmd/internal/obj/riscv/anames.go index b0d7b39fa7..c034b637bd 100644 --- a/src/cmd/internal/obj/riscv/anames.go +++ b/src/cmd/internal/obj/riscv/anames.go @@ -5,21 +5,7 @@ package riscv import "cmd/internal/obj" var Anames = []string{ - obj.A_ARCHSPECIFIC: "SLLIRV32", - "SRLIRV32", - "SRAIRV32", - "JAL", - "JALR", - "BEQ", - "BNE", - "BLT", - "BLTU", - "BGE", - "BGEU", - "FENCE", - "FENCEI", - "FENCETSO", - "ADDI", + obj.A_ARCHSPECIFIC: "ADDI", "SLTI", "SLTIU", "ANDI", @@ -40,6 +26,29 @@ var Anames = []string{ "SRL", "SUB", "SRA", + "SLLIRV32", + "SRLIRV32", + "SRAIRV32", + "JAL", + "JALR", + "BEQ", + "BNE", + "BLT", + "BLTU", + "BGE", + "BGEU", + "LW", + "LWU", + "LH", + "LHU", + "LB", + "LBU", + "SW", + "SH", + "SB", + "FENCE", + "FENCEI", + "FENCETSO", "ADDIW", "SLLIW", "SRLIW", @@ -50,16 +59,7 @@ var Anames = []string{ "SUBW", "SRAW", "LD", - "LW", - "LWU", - "LH", - "LHU", - "LB", - "LBU", "SD", - "SW", - "SH", - "SB", "MUL", "MULH", "MULHU", diff --git a/src/cmd/internal/obj/riscv/cpu.go b/src/cmd/internal/obj/riscv/cpu.go index a4823c3fbf..8c6817284b 100644 --- a/src/cmd/internal/obj/riscv/cpu.go +++ b/src/cmd/internal/obj/riscv/cpu.go @@ -211,27 +211,7 @@ const ( // Unprivileged ISA (Document Version 20190608-Base-Ratified) // 2.4: Integer Computational Instructions - ASLLIRV32 = obj.ABaseRISCV + obj.A_ARCHSPECIFIC + iota - ASRLIRV32 - ASRAIRV32 - - // 2.5: Control Transfer Instructions - AJAL - AJALR - ABEQ - ABNE - ABLT - ABLTU - ABGE - ABGEU - - // 2.7: Memory Ordering Instructions - AFENCE - AFENCEI - AFENCETSO - - // 5.2: Integer Computational Instructions - AADDI + AADDI = obj.ABaseRISCV + obj.A_ARCHSPECIFIC + iota ASLTI ASLTIU AANDI @@ -252,6 +232,40 @@ const ( ASRL ASUB ASRA + + // The SLL/SRL/SRA instructions differ slightly between RV32 and RV64, + // hence there are pseudo-opcodes for the RV32 specific versions. 
+ ASLLIRV32 + ASRLIRV32 + ASRAIRV32 + + // 2.5: Control Transfer Instructions + AJAL + AJALR + ABEQ + ABNE + ABLT + ABLTU + ABGE + ABGEU + + // 2.6: Load and Store Instructions + ALW + ALWU + ALH + ALHU + ALB + ALBU + ASW + ASH + ASB + + // 2.7: Memory Ordering Instructions + AFENCE + AFENCEI + AFENCETSO + + // 5.2: Integer Computational Instructions (RV64I) AADDIW ASLLIW ASRLIW @@ -262,18 +276,9 @@ const ( ASUBW ASRAW - // 5.3: Load and Store Instructions + // 5.3: Load and Store Instructions (RV64I) ALD - ALW - ALWU - ALH - ALHU - ALB - ALBU ASD - ASW - ASH - ASB // 7.1: Multiplication Operations AMUL @@ -515,6 +520,7 @@ const ( ASEQZ ASNEZ + // End marker ALAST ) diff --git a/src/cmd/internal/obj/riscv/list.go b/src/cmd/internal/obj/riscv/list.go index 50f5e21828..f5f7ef21e4 100644 --- a/src/cmd/internal/obj/riscv/list.go +++ b/src/cmd/internal/obj/riscv/list.go @@ -1,29 +1,6 @@ -// Copyright © 1994-1999 Lucent Technologies Inc. All rights reserved. -// Portions Copyright © 1995-1997 C H Forsyth (forsyth@terzarima.net) -// Portions Copyright © 1997-1999 Vita Nuova Limited -// Portions Copyright © 2000-2008 Vita Nuova Holdings Limited (www.vitanuova.com) -// Portions Copyright © 2004,2006 Bruce Ellis -// Portions Copyright © 2005-2007 C H Forsyth (forsyth@terzarima.net) -// Revisions Copyright © 2000-2008 Lucent Technologies Inc. and others -// Portions Copyright © 2009 The Go Authors. All rights reserved. 
-// -// Permission is hereby granted, free of charge, to any person obtaining a copy -// of this software and associated documentation files (the "Software"), to deal -// in the Software without restriction, including without limitation the rights -// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -// copies of the Software, and to permit persons to whom the Software is -// furnished to do so, subject to the following conditions: -// -// The above copyright notice and this permission notice shall be included in -// all copies or substantial portions of the Software. -// -// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -// THE SOFTWARE. +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. package riscv @@ -33,219 +10,24 @@ import ( "cmd/internal/obj" ) -var ( - // Instructions is a map of instruction names to instruction IDs. - Instructions = make(map[string]obj.As) - - // Registers is a map of register names to integer IDs. - Registers = make(map[string]int16) - - // RegNames is a map for register IDs to names. We use the ABI name. - RegNames = map[int16]string{ - 0: "NONE", - - // General registers with ABI names. 
- REG_ZERO: "ZERO", - REG_RA: "RA", - REG_SP: "SP", - REG_GP: "GP", - // REG_TP is REG_G - REG_T0: "T0", - REG_T1: "T1", - REG_T2: "T2", - REG_S0: "S0", - REG_S1: "S1", - REG_A0: "A0", - REG_A1: "A1", - REG_A2: "A2", - REG_A3: "A3", - REG_A4: "A4", - REG_A5: "A5", - REG_A6: "A6", - REG_A7: "A7", - REG_S2: "S2", - REG_S3: "S3", - // REG_S4 is REG_CTXT. - REG_S5: "S5", - REG_S6: "S6", - REG_S7: "S7", - REG_S8: "S8", - REG_S9: "S9", - REG_S10: "S10", - REG_S11: "S11", - REG_T3: "T3", - REG_T4: "T4", - REG_T5: "T5", - // REG_T6 is REG_TMP. - - // Go runtime register names. - REG_G: "g", - REG_CTXT: "CTXT", - REG_TMP: "TMP", - - // ABI names for floating point registers. - REG_FT0: "FT0", - REG_FT1: "FT1", - REG_FT2: "FT2", - REG_FT3: "FT3", - REG_FT4: "FT4", - REG_FT5: "FT5", - REG_FT6: "FT6", - REG_FT7: "FT7", - REG_FS0: "FS0", - REG_FS1: "FS1", - REG_FA0: "FA0", - REG_FA1: "FA1", - REG_FA2: "FA2", - REG_FA3: "FA3", - REG_FA4: "FA4", - REG_FA5: "FA5", - REG_FA6: "FA6", - REG_FA7: "FA7", - REG_FS2: "FS2", - REG_FS3: "FS3", - REG_FS4: "FS4", - REG_FS5: "FS5", - REG_FS6: "FS6", - REG_FS7: "FS7", - REG_FS8: "FS8", - REG_FS9: "FS9", - REG_FS10: "FS10", - REG_FS11: "FS11", - REG_FT8: "FT8", - REG_FT9: "FT9", - REG_FT10: "FT10", - REG_FT11: "FT11", - } -) - -// initRegisters initializes the Registers map. arch.archRiscv will also add -// some psuedoregisters. -func initRegisters() { - // Standard register names. - for i := REG_X0; i <= REG_X31; i++ { - name := fmt.Sprintf("X%d", i-REG_X0) - Registers[name] = int16(i) - } - for i := REG_F0; i <= REG_F31; i++ { - name := fmt.Sprintf("F%d", i-REG_F0) - Registers[name] = int16(i) - } - - // General registers with ABI names. 
- Registers["ZERO"] = REG_ZERO - Registers["RA"] = REG_RA - Registers["SP"] = REG_SP - Registers["GP"] = REG_GP - Registers["TP"] = REG_TP - Registers["T0"] = REG_T0 - Registers["T1"] = REG_T1 - Registers["T2"] = REG_T2 - Registers["S0"] = REG_S0 - Registers["S1"] = REG_S1 - Registers["A0"] = REG_A0 - Registers["A1"] = REG_A1 - Registers["A2"] = REG_A2 - Registers["A3"] = REG_A3 - Registers["A4"] = REG_A4 - Registers["A5"] = REG_A5 - Registers["A6"] = REG_A6 - Registers["A7"] = REG_A7 - Registers["S2"] = REG_S2 - Registers["S3"] = REG_S3 - Registers["S4"] = REG_S4 - Registers["S5"] = REG_S5 - Registers["S6"] = REG_S6 - Registers["S7"] = REG_S7 - Registers["S8"] = REG_S8 - Registers["S9"] = REG_S9 - Registers["S10"] = REG_S10 - Registers["S11"] = REG_S11 - Registers["T3"] = REG_T3 - Registers["T4"] = REG_T4 - Registers["T5"] = REG_T5 - Registers["T6"] = REG_T6 - - // Golang runtime register names. - Registers["g"] = REG_G - Registers["CTXT"] = REG_CTXT - Registers["TMP"] = REG_TMP - - // ABI names for floating point registers. 
- Registers["FT0"] = REG_FT0 - Registers["FT1"] = REG_FT1 - Registers["FT2"] = REG_FT2 - Registers["FT3"] = REG_FT3 - Registers["FT4"] = REG_FT4 - Registers["FT5"] = REG_FT5 - Registers["FT6"] = REG_FT6 - Registers["FT7"] = REG_FT7 - Registers["FS0"] = REG_FS0 - Registers["FS1"] = REG_FS1 - Registers["FA0"] = REG_FA0 - Registers["FA1"] = REG_FA1 - Registers["FA2"] = REG_FA2 - Registers["FA3"] = REG_FA3 - Registers["FA4"] = REG_FA4 - Registers["FA5"] = REG_FA5 - Registers["FA6"] = REG_FA6 - Registers["FA7"] = REG_FA7 - Registers["FS2"] = REG_FS2 - Registers["FS3"] = REG_FS3 - Registers["FS4"] = REG_FS4 - Registers["FS5"] = REG_FS5 - Registers["FS6"] = REG_FS6 - Registers["FS7"] = REG_FS7 - Registers["FS8"] = REG_FS8 - Registers["FS9"] = REG_FS9 - Registers["FS10"] = REG_FS10 - Registers["FS11"] = REG_FS11 - Registers["FT8"] = REG_FT8 - Registers["FT9"] = REG_FT9 - Registers["FT10"] = REG_FT10 - Registers["FT11"] = REG_FT11 -} - -// checkRegNames asserts that RegNames includes all registers. -func checkRegNames() { - for i := REG_X0; i <= REG_X31; i++ { - if _, ok := RegNames[int16(i)]; !ok { - panic(fmt.Sprintf("REG_X%d missing from RegNames", i)) - } - } - for i := REG_F0; i <= REG_F31; i++ { - if _, ok := RegNames[int16(i)]; !ok { - panic(fmt.Sprintf("REG_F%d missing from RegNames", i)) - } - } -} - -func initInstructions() { - for i, s := range obj.Anames { - Instructions[s] = obj.As(i) - } - for i, s := range Anames { - if obj.As(i) >= obj.A_ARCHSPECIFIC { - Instructions[s] = obj.As(i) + obj.ABaseRISCV - } - } -} - func init() { - // initRegnames uses Registers during initialization, - // and must be called after initRegisters. 
- initRegisters() - checkRegNames() - initInstructions() - obj.RegisterRegister(obj.RBaseRISCV, REG_END, PrettyPrintReg) + obj.RegisterRegister(obj.RBaseRISCV, REG_END, regName) obj.RegisterOpcode(obj.ABaseRISCV, Anames) } -func PrettyPrintReg(r int) string { - name, ok := RegNames[int16(r)] - if !ok { - name = fmt.Sprintf("R???%d", r) // Similar format to As. +func regName(r int) string { + switch { + case r == 0: + return "NONE" + case r == REG_G: + return "g" + case r == REG_SP: + return "SP" + case REG_X0 <= r && r <= REG_X31: + return fmt.Sprintf("X%d", r-REG_X0) + case REG_F0 <= r && r <= REG_F31: + return fmt.Sprintf("F%d", r-REG_F0) + default: + return fmt.Sprintf("Rgok(%d)", r-obj.RBaseRISCV) } - - return name } diff --git a/src/cmd/internal/obj/riscv/obj.go b/src/cmd/internal/obj/riscv/obj.go index 3a726e89e1..841723d50a 100644 --- a/src/cmd/internal/obj/riscv/obj.go +++ b/src/cmd/internal/obj/riscv/obj.go @@ -43,6 +43,7 @@ import ( "fmt" ) +// TODO(jsing): Populate. var RISCV64DWARFRegisters = map[int16]int16{} func init() { @@ -50,6 +51,8 @@ func init() { RISCV64DWARFRegisters[1] = 2 } +func buildop(ctxt *obj.Link) {} + // stackOffset updates Addr offsets based on the current stack size. // // The stack looks like: @@ -352,12 +355,12 @@ func progedit(ctxt *obj.Link, p *obj.Prog, newprog obj.ProgAlloc) { } } -// setpcs sets the Pc field in all instructions reachable from p. It uses pc as -// the initial value. -func setpcs(p *obj.Prog, pc int64) { +// setPCs sets the Pc field in all instructions reachable from p. +// It uses pc as the initial value. +func setPCs(p *obj.Prog, pc int64) { for ; p != nil; p = p.Link { p.Pc = pc - pc += encodingForP(p).length + pc += int64(encodingForProg(p).length) } } @@ -972,7 +975,7 @@ func preprocess(ctxt *obj.Link, cursym *obj.LSym, newprog obj.ProgAlloc) { // a fixed point will be reached). No attempt to handle functions > 2GiB. 
for { rescan := false - setpcs(cursym.Func.Text, 0) + setPCs(cursym.Func.Text, 0) for p := cursym.Func.Text; p != nil; p = p.Link { switch p.As { case ABEQ, ABNE, ABLT, ABGE, ABLTU, ABGEU: @@ -1583,8 +1586,8 @@ func encodeUJ(p *obj.Prog) uint32 { } func validateRaw(p *obj.Prog) { - // Treat the raw value specially as a 32-bit unsigned integer. Nobody - // wants to enter negative machine code. + // Treat the raw value specially as a 32-bit unsigned integer. + // Nobody wants to enter negative machine code. a := p.From if a.Type != obj.TYPE_CONST { p.Ctxt.Diag("%v\texpected immediate in raw position but got %s", p, obj.Dconv(p, &a)) @@ -1596,8 +1599,8 @@ func validateRaw(p *obj.Prog) { } func encodeRaw(p *obj.Prog) uint32 { - // Treat the raw value specially as a 32-bit unsigned integer. Nobody - // wants to enter negative machine code. + // Treat the raw value specially as a 32-bit unsigned integer. + // Nobody wants to enter negative machine code. a := p.From if a.Type != obj.TYPE_CONST { panic(fmt.Sprintf("ill typed: %+v", a)) @@ -1611,7 +1614,7 @@ func encodeRaw(p *obj.Prog) uint32 { type encoding struct { encode func(*obj.Prog) uint32 // encode returns the machine code for a Prog validate func(*obj.Prog) // validate validates a Prog, calling ctxt.Diag for any issues - length int64 // length of encoded instruction; 0 for pseudo-ops, 4 otherwise + length int // length of encoded instruction; 0 for pseudo-ops, 4 otherwise } var ( @@ -1793,8 +1796,8 @@ var encodingForAs = [ALAST & obj.AMask]encoding{ obj.ANOP: pseudoOpEncoding, } -// encodingForP returns the encoding (encode+validate funcs) for a Prog. -func encodingForP(p *obj.Prog) encoding { +// encodingForProg returns the encoding (encode+validate funcs) for an *obj.Prog. 
+func encodingForProg(p *obj.Prog) encoding { if base := p.As &^ obj.AMask; base != obj.ABaseRISCV && base != 0 { p.Ctxt.Diag("encodingForP: not a RISC-V instruction %s", p.As) return badEncoding @@ -1806,7 +1809,7 @@ func encodingForP(p *obj.Prog) encoding { } enc := encodingForAs[as] if enc.validate == nil { - p.Ctxt.Diag("encodingForP: no encoding for instruction %s", p.As) + p.Ctxt.Diag("encodingForProg: no encoding for instruction %s", p.As) return badEncoding } return enc @@ -1857,7 +1860,7 @@ func assemble(ctxt *obj.Link, cursym *obj.LSym, newprog obj.ProgAlloc) { rel.Type = t } - enc := encodingForP(p) + enc := encodingForProg(p) if enc.length > 0 { symcode = append(symcode, enc.encode(p)) } @@ -1870,10 +1873,6 @@ func assemble(ctxt *obj.Link, cursym *obj.LSym, newprog obj.ProgAlloc) { } } -func buildop(ctxt *obj.Link) { - // XXX -} - var LinkRISCV64 = obj.LinkArch{ Arch: sys.ArchRISCV64, Init: buildop, diff --git a/src/cmd/internal/obj/s390x/a.out.go b/src/cmd/internal/obj/s390x/a.out.go index d11a3834b0..cc0bfab26b 100644 --- a/src/cmd/internal/obj/s390x/a.out.go +++ b/src/cmd/internal/obj/s390x/a.out.go @@ -240,6 +240,7 @@ const ( AMULLD AMULHD AMULHDU + AMLGR ASUB ASUBC ASUBV @@ -966,6 +967,8 @@ const ( AVMSLOG AVMSLEOG + ANOPH // NOP + // binary ABYTE AWORD diff --git a/src/cmd/internal/obj/s390x/anames.go b/src/cmd/internal/obj/s390x/anames.go index a9bdfcafe9..c9e44e3f7a 100644 --- a/src/cmd/internal/obj/s390x/anames.go +++ b/src/cmd/internal/obj/s390x/anames.go @@ -21,6 +21,7 @@ var Anames = []string{ "MULLD", "MULHD", "MULHDU", + "MLGR", "SUB", "SUBC", "SUBV", @@ -696,6 +697,7 @@ var Anames = []string{ "VMSLEG", "VMSLOG", "VMSLEOG", + "NOPH", "BYTE", "WORD", "DWORD", diff --git a/src/cmd/internal/obj/s390x/asmz.go b/src/cmd/internal/obj/s390x/asmz.go index 2b187edca5..2ba3d12969 100644 --- a/src/cmd/internal/obj/s390x/asmz.go +++ b/src/cmd/internal/obj/s390x/asmz.go @@ -174,6 +174,7 @@ var optab = []Optab{ {i: 12, as: ASUB, a1: C_LAUTO, a6: C_REG}, 
{i: 4, as: AMULHD, a1: C_REG, a6: C_REG}, {i: 4, as: AMULHD, a1: C_REG, a2: C_REG, a6: C_REG}, + {i: 62, as: AMLGR, a1: C_REG, a6: C_REG}, {i: 2, as: ADIVW, a1: C_REG, a2: C_REG, a6: C_REG}, {i: 2, as: ADIVW, a1: C_REG, a6: C_REG}, {i: 10, as: ASUB, a1: C_REG, a2: C_REG, a6: C_REG}, @@ -314,6 +315,9 @@ var optab = []Optab{ // undefined (deliberate illegal instruction) {i: 78, as: obj.AUNDEF}, + // 2 byte no-operation + {i: 66, as: ANOPH}, + // vector instructions // VRX store @@ -3317,7 +3321,12 @@ func (c *ctxtz) asmout(p *obj.Prog, asm *[]byte) { x2 = REGTMP d2 = 0 } - zRXY(c.zopstore(p.As), uint32(p.From.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + // Emits an RX instruction if an appropriate one exists and the displacement fits in 12 bits. Otherwise use an RXY instruction. + if op, ok := c.zopstore12(p.As); ok && isU12(d2) { + zRX(op, uint32(p.From.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + } else { + zRXY(c.zopstore(p.As), uint32(p.From.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + } case 36: // mov mem reg (no relocation) d2 := c.regoff(&p.From) @@ -3334,7 +3343,12 @@ func (c *ctxtz) asmout(p *obj.Prog, asm *[]byte) { x2 = REGTMP d2 = 0 } - zRXY(c.zopload(p.As), uint32(p.To.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + // Emits an RX instruction if an appropriate one exists and the displacement fits in 12 bits. Otherwise use an RXY instruction. 
+ if op, ok := c.zopload12(p.As); ok && isU12(d2) { + zRX(op, uint32(p.To.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + } else { + zRXY(c.zopload(p.As), uint32(p.To.Reg), uint32(x2), uint32(b2), uint32(d2), asm) + } case 40: // word/byte wd := uint32(c.regoff(&p.From)) @@ -3394,6 +3408,12 @@ func (c *ctxtz) asmout(p *obj.Prog, asm *[]byte) { d2 := c.regoff(&p.To) zRXE(opcode, uint32(p.From.Reg), 0, 0, uint32(d2), 0, asm) + case 62: // equivalent of Mul64 in math/bits + zRRE(op_MLGR, uint32(p.To.Reg), uint32(p.From.Reg), asm) + + case 66: + zRR(op_BCR, 0, 0, asm) + case 67: // fmov $0 freg var opcode uint32 switch p.As { @@ -4209,6 +4229,22 @@ func (c *ctxtz) regoff(a *obj.Addr) int32 { return int32(c.vregoff(a)) } +// find if the displacement is within 12 bit +func isU12(displacement int32) bool { + return displacement >= 0 && displacement < DISP12 +} + +// zopload12 returns the RX op with 12 bit displacement for the given load +func (c *ctxtz) zopload12(a obj.As) (uint32, bool) { + switch a { + case AFMOVD: + return op_LD, true + case AFMOVS: + return op_LE, true + } + return 0, false +} + // zopload returns the RXY op for the given load func (c *ctxtz) zopload(a obj.As) uint32 { switch a { @@ -4247,6 +4283,23 @@ func (c *ctxtz) zopload(a obj.As) uint32 { return 0 } +// zopstore12 returns the RX op with 12 bit displacement for the given store +func (c *ctxtz) zopstore12(a obj.As) (uint32, bool) { + switch a { + case AFMOVD: + return op_STD, true + case AFMOVS: + return op_STE, true + case AMOVW, AMOVWZ: + return op_ST, true + case AMOVH, AMOVHZ: + return op_STH, true + case AMOVB, AMOVBZ: + return op_STC, true + } + return 0, false +} + // zopstore returns the RXY op for the given store func (c *ctxtz) zopstore(a obj.As) uint32 { switch a { diff --git a/src/cmd/internal/obj/x86/asm6.go b/src/cmd/internal/obj/x86/asm6.go index 1f668a0166..e7c76d6da2 100644 --- a/src/cmd/internal/obj/x86/asm6.go +++ b/src/cmd/internal/obj/x86/asm6.go @@ -3055,7 +3055,7 @@ func 
(ab *AsmBuf) Put(b []byte) { } // PutOpBytesLit writes zero terminated sequence of bytes from op, -// starting at specified offsed (e.g. z counter value). +// starting at specified offset (e.g. z counter value). // Trailing 0 is not written. // // Intended to be used for literal Z cases. diff --git a/src/cmd/internal/objfile/goobj.go b/src/cmd/internal/objfile/goobj.go index 473e773ec2..7c04b6d5ce 100644 --- a/src/cmd/internal/objfile/goobj.go +++ b/src/cmd/internal/objfile/goobj.go @@ -29,7 +29,7 @@ func openGoFile(r *os.File) (*File, error) { } rf := &goobjFile{goobj: f, f: r} if len(f.Native) == 0 { - return &File{r, []*Entry{&Entry{raw: rf}}}, nil + return &File{r, []*Entry{{raw: rf}}}, nil } entries := make([]*Entry, len(f.Native)+1) entries[0] = &Entry{ diff --git a/src/cmd/internal/objfile/objfile.go b/src/cmd/internal/objfile/objfile.go index 41c5d9b9f5..a58e0e159c 100644 --- a/src/cmd/internal/objfile/objfile.go +++ b/src/cmd/internal/objfile/objfile.go @@ -76,7 +76,7 @@ func Open(name string) (*File, error) { } for _, try := range openers { if raw, err := try(r); err == nil { - return &File{r, []*Entry{&Entry{raw: raw}}}, nil + return &File{r, []*Entry{{raw: raw}}}, nil } } r.Close() diff --git a/src/cmd/link/internal/ld/config.go b/src/cmd/link/internal/ld/config.go index c66e4941bb..dcbe136832 100644 --- a/src/cmd/link/internal/ld/config.go +++ b/src/cmd/link/internal/ld/config.go @@ -229,6 +229,7 @@ func mustLinkExternal(ctxt *Link) (res bool, reason string) { // flag and the iscgo externalobj variables are set. func determineLinkMode(ctxt *Link) { extNeeded, extReason := mustLinkExternal(ctxt) + via := "" if ctxt.LinkMode == LinkAuto { // The environment variable GO_EXTLINK_ENABLED controls the @@ -237,18 +238,13 @@ func determineLinkMode(ctxt *Link) { // cmd/link was compiled. (See make.bash.) 
switch objabi.Getgoextlinkenabled() { case "0": - if extNeeded { - Exitf("internal linking requested via GO_EXTLINK_ENABLED, but external linking required: %s", extReason) - } ctxt.LinkMode = LinkInternal + via = "via GO_EXTLINK_ENABLED " case "1": ctxt.LinkMode = LinkExternal + via = "via GO_EXTLINK_ENABLED " default: - if extNeeded { - ctxt.LinkMode = LinkExternal - } else if iscgo && externalobj { - ctxt.LinkMode = LinkExternal - } else if ctxt.BuildMode == BuildModePIE { + if extNeeded || (iscgo && externalobj) || ctxt.BuildMode == BuildModePIE { ctxt.LinkMode = LinkExternal } else { ctxt.LinkMode = LinkInternal @@ -259,7 +255,7 @@ func determineLinkMode(ctxt *Link) { switch ctxt.LinkMode { case LinkInternal: if extNeeded { - Exitf("internal linking requested but external linking required: %s", extReason) + Exitf("internal linking requested %sbut external linking required: %s", via, extReason) } case LinkExternal: switch { diff --git a/src/cmd/link/internal/ld/pe.go b/src/cmd/link/internal/ld/pe.go index ab51c874ce..12363626ae 100644 --- a/src/cmd/link/internal/ld/pe.go +++ b/src/cmd/link/internal/ld/pe.go @@ -297,7 +297,7 @@ type peStringTable struct { stringsLen int } -// size resturns size of string table t. +// size returns size of string table t. 
func (t *peStringTable) size() int { // string table starts with 4-byte length at the beginning return t.stringsLen + 4 diff --git a/src/cmd/link/internal/ld/xcoff.go b/src/cmd/link/internal/ld/xcoff.go index b5e5f5213b..c64066acec 100644 --- a/src/cmd/link/internal/ld/xcoff.go +++ b/src/cmd/link/internal/ld/xcoff.go @@ -628,7 +628,7 @@ func logBase2(a int) uint8 { return uint8(bits.Len(uint(a)) - 1) } -// Write symbols needed when a new file appared : +// Write symbols needed when a new file appeared: // - a C_FILE with one auxiliary entry for its name // - C_DWARF symbols to provide debug information // - a C_HIDEXT which will be a csect containing all of its functions @@ -663,7 +663,7 @@ func (f *xcoffFile) writeSymbolNewFile(ctxt *Link, name string, firstEntry uint6 // Find the size of this corresponding package DWARF compilation unit. // This size is set during DWARF generation (see dwarf.go). dwsize = getDwsectCUSize(sect.Name, name) - // .debug_abbrev is commun to all packages and not found with the previous function + // .debug_abbrev is common to all packages and not found with the previous function if sect.Name == ".debug_abbrev" { s := ctxt.Syms.ROLookup(sect.Name, 0) dwsize = uint64(s.Size) @@ -779,7 +779,7 @@ func (f *xcoffFile) writeSymbolFunc(ctxt *Link, x *sym.Symbol) []xcoffSym { // in the current file. // Same goes for runtime.text.X symbols. } else if x.File == "" { // Undefined global symbol - // If this happens, the algorithme must be redone. + // If this happens, the algorithm must be redone. if currSymSrcFile.name != "" { Exitf("undefined global symbol found inside another file") } diff --git a/src/compress/lzw/reader.go b/src/compress/lzw/reader.go index 1be52d55e4..f08021190c 100644 --- a/src/compress/lzw/reader.go +++ b/src/compress/lzw/reader.go @@ -63,8 +63,7 @@ type decoder struct { // // last is the most recently seen code, or decoderInvalidCode. 
// - // An invariant is that - // (hi < overflow) || (hi == overflow && last == decoderInvalidCode) + // An invariant is that hi < overflow. clear, eof, hi, overflow, last uint16 // Each code c in [lo, hi] expands to two or more bytes. For c != hi: @@ -200,15 +199,18 @@ loop: } d.last, d.hi = code, d.hi+1 if d.hi >= d.overflow { + if d.hi > d.overflow { + panic("unreachable") + } if d.width == maxWidth { d.last = decoderInvalidCode // Undo the d.hi++ a few lines above, so that (1) we maintain - // the invariant that d.hi <= d.overflow, and (2) d.hi does not + // the invariant that d.hi < d.overflow, and (2) d.hi does not // eventually overflow a uint16. d.hi-- } else { d.width++ - d.overflow <<= 1 + d.overflow = 1 << d.width } } if d.o >= flushBuffer { diff --git a/src/crypto/tls/handshake_client.go b/src/crypto/tls/handshake_client.go index 5ac2098ceb..f04311320e 100644 --- a/src/crypto/tls/handshake_client.go +++ b/src/crypto/tls/handshake_client.go @@ -991,7 +991,7 @@ func mutualProtocol(protos, preferenceProtos []string) (string, bool) { return protos[0], true } -// hostnameInSNI converts name into an approriate hostname for SNI. +// hostnameInSNI converts name into an appropriate hostname for SNI. // Literal IP addresses and absolute FQDNs are not permitted as SNI values. // See RFC 6066, Section 3. func hostnameInSNI(name string) string { diff --git a/src/crypto/tls/prf.go b/src/crypto/tls/prf.go index 05b87a9b89..aeba5fcbd7 100644 --- a/src/crypto/tls/prf.go +++ b/src/crypto/tls/prf.go @@ -263,7 +263,7 @@ func (h *finishedHash) discardHandshakeBuffer() { } // noExportedKeyingMaterial is used as a value of -// ConnectionState.ekm when renegotation is enabled and thus +// ConnectionState.ekm when renegotiation is enabled and thus // we wish to fail all key-material export requests. 
func noExportedKeyingMaterial(label string, context []byte, length int) ([]byte, error) { return nil, errors.New("crypto/tls: ExportKeyingMaterial is unavailable when renegotiation is enabled") diff --git a/src/crypto/x509/name_constraints_test.go b/src/crypto/x509/name_constraints_test.go index 2020e37a5b..5469e28de2 100644 --- a/src/crypto/x509/name_constraints_test.go +++ b/src/crypto/x509/name_constraints_test.go @@ -1457,7 +1457,7 @@ var nameConstraintsTests = []nameConstraintsTest{ // that we can process CA certificates in the wild that have invalid SANs. // See https://github.com/golang/go/issues/23995 - // #77: an invalid DNS or mail SAN will not be detected if name constaint + // #77: an invalid DNS or mail SAN will not be detected if name constraint // checking is not triggered. { roots: make([]constraintsSpec, 1), diff --git a/src/database/sql/convert.go b/src/database/sql/convert.go index 4c056a1eda..b966ef970c 100644 --- a/src/database/sql/convert.go +++ b/src/database/sql/convert.go @@ -565,7 +565,7 @@ func callValuerValue(vr driver.Valuer) (v driver.Value, err error) { // coefficient (also known as a significand) as a []byte, and an int32 exponent. // These are composed into a final value as "decimal = (neg) (form=finite) coefficient * 10 ^ exponent". // A zero length coefficient is a zero value. -// The big-endian integer coefficent stores the most significant byte first (at coefficent[0]). +// The big-endian integer coefficient stores the most significant byte first (at coefficient[0]). // If the form is not finite the coefficient and exponent should be ignored. // The negative parameter may be set to true for any form, although implementations are not required // to respect the negative parameter in the non-finite form. 
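The `database/sql/convert.go` doc comment above describes the decimal compose convention: `decimal = (neg) (form=finite) coefficient * 10 ^ exponent`, with a big-endian coefficient (most significant byte first) and a zero-length coefficient meaning zero. A minimal standalone sketch of that rule, for illustration only (this is not the `database/sql` implementation; `composeDecimal` is a hypothetical helper name):

```go
package main

import (
	"fmt"
	"math/big"
)

// composeDecimal illustrates the documented convention: a big-endian
// coefficient, a base-10 exponent, and a negative flag compose into
// coefficient * 10^exponent, negated if the flag is set.
func composeDecimal(negative bool, coefficient []byte, exponent int32) *big.Rat {
	coef := new(big.Int).SetBytes(coefficient) // big-endian; zero length => zero
	r := new(big.Rat).SetInt(coef)
	exp := new(big.Int).Exp(big.NewInt(10), big.NewInt(abs64(exponent)), nil)
	if exponent >= 0 {
		r.Mul(r, new(big.Rat).SetInt(exp))
	} else {
		r.Quo(r, new(big.Rat).SetInt(exp))
	}
	if negative {
		r.Neg(r)
	}
	return r
}

func abs64(x int32) int64 {
	if x < 0 {
		return -int64(x)
	}
	return int64(x)
}

func main() {
	// Coefficient 0x012C = 300, exponent -2 => 3.00
	v := composeDecimal(false, []byte{0x01, 0x2C}, -2)
	fmt.Println(v.FloatString(2)) // 3.00
}
```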
diff --git a/src/database/sql/convert_test.go b/src/database/sql/convert_test.go index 8a82891c25..2668a5ed5e 100644 --- a/src/database/sql/convert_test.go +++ b/src/database/sql/convert_test.go @@ -525,7 +525,7 @@ func (d *dec) Compose(form byte, negative bool, coefficient []byte, exponent int // This isn't strictly correct, as the extra bytes could be all zero, // ignore this for this test. if len(coefficient) > 16 { - return fmt.Errorf("coefficent too large") + return fmt.Errorf("coefficient too large") } copy(d.coefficient[:], coefficient) @@ -558,7 +558,7 @@ func (d *decFinite) Compose(form byte, negative bool, coefficient []byte, expone // This isn't strictly correct, as the extra bytes could be all zero, // ignore this for this test. if len(coefficient) > 16 { - return fmt.Errorf("coefficent too large") + return fmt.Errorf("coefficient too large") } copy(d.coefficient[:], coefficient) diff --git a/src/database/sql/sql.go b/src/database/sql/sql.go index 5c5b7dc7e9..93001635be 100644 --- a/src/database/sql/sql.go +++ b/src/database/sql/sql.go @@ -2040,7 +2040,7 @@ func (tx *Tx) grabConn(ctx context.Context) (*driverConn, releaseConn, error) { return nil, nil, ctx.Err() } - // closeme.RLock must come before the check for isDone to prevent the Tx from + // closemu.RLock must come before the check for isDone to prevent the Tx from // closing while a query is executing. tx.closemu.RLock() if tx.isDone() { diff --git a/src/database/sql/sql_test.go b/src/database/sql/sql_test.go index f68cefe43a..ed0099e0e9 100644 --- a/src/database/sql/sql_test.go +++ b/src/database/sql/sql_test.go @@ -3783,7 +3783,7 @@ func (c *ctxOnlyConn) ExecContext(ctx context.Context, q string, args []driver.N // TestQueryExecContextOnly ensures drivers only need to implement QueryContext // and ExecContext methods. func TestQueryExecContextOnly(t *testing.T) { - // Ensure connection does not implment non-context interfaces. + // Ensure connection does not implement non-context interfaces. 
var connType driver.Conn = &ctxOnlyConn{} if _, ok := connType.(driver.Execer); ok { t.Fatalf("%T must not implement driver.Execer", connType) diff --git a/src/debug/dwarf/type_test.go b/src/debug/dwarf/type_test.go index aa2fbeca0b..fda03fdbb0 100644 --- a/src/debug/dwarf/type_test.go +++ b/src/debug/dwarf/type_test.go @@ -223,7 +223,7 @@ func TestUnsupportedTypes(t *testing.T) { } } if dumpseen { - for k, _ := range seen { + for k := range seen { fmt.Printf("seen: %s\n", k) } } diff --git a/src/debug/pe/file_test.go b/src/debug/pe/file_test.go index 42d328b547..bc41be2669 100644 --- a/src/debug/pe/file_test.go +++ b/src/debug/pe/file_test.go @@ -215,7 +215,7 @@ var fileTests = []fileTest{ // testdata/vmlinuz-4.15.0-47-generic is a trimmed down version of Linux Kernel image. // The original Linux Kernel image is about 8M and it is not recommended to add such a big binary file to the repo. // Moreover only a very small portion of the original Kernel image was being parsed by debug/pe package. - // Inorder to indentify this portion, the original image was first parsed by modified debug/pe package. + // In order to identify this portion, the original image was first parsed by modified debug/pe package. // Modification essentially communicated reader's positions before and after parsing. // Finally, bytes between those positions where written to a separate file, // generating trimmed down version Linux Kernel image used in this test case. diff --git a/src/encoding/asn1/asn1.go b/src/encoding/asn1/asn1.go index 3cfd9d1276..fd4dd68021 100644 --- a/src/encoding/asn1/asn1.go +++ b/src/encoding/asn1/asn1.go @@ -27,6 +27,7 @@ import ( "reflect" "strconv" "time" + "unicode/utf16" "unicode/utf8" ) @@ -475,6 +476,29 @@ func parseUTF8String(bytes []byte) (ret string, err error) { return string(bytes), nil } +// BMPString + +// parseBMPString parses an ASN.1 BMPString (Basic Multilingual Plane of +// ISO/IEC/ITU 10646-1) from the given byte slice and returns it. 
+func parseBMPString(bmpString []byte) (string, error) { + if len(bmpString)%2 != 0 { + return "", errors.New("pkcs12: odd-length BMP string") + } + + // Strip terminator if present. + if l := len(bmpString); l >= 2 && bmpString[l-1] == 0 && bmpString[l-2] == 0 { + bmpString = bmpString[:l-2] + } + + s := make([]uint16, 0, len(bmpString)/2) + for len(bmpString) > 0 { + s = append(s, uint16(bmpString[0])<<8+uint16(bmpString[1])) + bmpString = bmpString[2:] + } + + return string(utf16.Decode(s)), nil +} + // A RawValue represents an undecoded ASN.1 object. type RawValue struct { Class, Tag int @@ -589,7 +613,7 @@ func parseSequenceOf(bytes []byte, sliceType reflect.Type, elemType reflect.Type return } switch t.tag { - case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString: + case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString, TagBMPString: // We pretend that various other string types are // PRINTABLE STRINGs so that a sequence of them can be // parsed into a []string. @@ -691,6 +715,8 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam result, err = parseGeneralizedTime(innerBytes) case TagOctetString: result = innerBytes + case TagBMPString: + result, err = parseBMPString(innerBytes) default: // If we don't know how to handle the type, we just leave Value as nil. } @@ -759,7 +785,7 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam if universalTag == TagPrintableString { if t.class == ClassUniversal { switch t.tag { - case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString: + case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString, TagBMPString: universalTag = t.tag } } else if params.stringType != 0 { @@ -957,6 +983,9 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam // that allow the encoding to change midstring and // such. 
We give up and pass it as an 8-bit string. v, err = parseT61String(innerBytes) + case TagBMPString: + v, err = parseBMPString(innerBytes) + default: err = SyntaxError{fmt.Sprintf("internal error: unknown string type %d", universalTag)} } diff --git a/src/encoding/asn1/asn1_test.go b/src/encoding/asn1/asn1_test.go index f0a54e0cb2..d5649bff9f 100644 --- a/src/encoding/asn1/asn1_test.go +++ b/src/encoding/asn1/asn1_test.go @@ -6,6 +6,7 @@ package asn1 import ( "bytes" + "encoding/hex" "fmt" "math" "math/big" @@ -1096,3 +1097,35 @@ func TestTaggedRawValue(t *testing.T) { } } } + +var bmpStringTests = []struct { + decoded string + encodedHex string +}{ + {"", "0000"}, + // Example from https://tools.ietf.org/html/rfc7292#appendix-B. + {"Beavis", "0042006500610076006900730000"}, + // Some characters from the "Letterlike Symbols Unicode block". + {"\u2115 - Double-struck N", "21150020002d00200044006f00750062006c0065002d00730074007200750063006b0020004e0000"}, +} + +func TestBMPString(t *testing.T) { + for i, test := range bmpStringTests { + encoded, err := hex.DecodeString(test.encodedHex) + if err != nil { + t.Fatalf("#%d: failed to decode from hex string", i) + } + + decoded, err := parseBMPString(encoded) + + if err != nil { + t.Errorf("#%d: decoding output gave an error: %s", i, err) + continue + } + + if decoded != test.decoded { + t.Errorf("#%d: decoding output resulted in %q, but it should have been %q", i, decoded, test.decoded) + continue + } + } +} diff --git a/src/encoding/asn1/common.go b/src/encoding/asn1/common.go index 255d1ebfa8..e2aa8bd9c5 100644 --- a/src/encoding/asn1/common.go +++ b/src/encoding/asn1/common.go @@ -37,6 +37,7 @@ const ( TagUTCTime = 23 TagGeneralizedTime = 24 TagGeneralString = 27 + TagBMPString = 30 ) // ASN.1 class types represent the namespace of the tag. 
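The BMPString support added to `encoding/asn1` above decodes big-endian UCS-2 code units via `unicode/utf16`, stripping an optional two-byte zero terminator. Since `parseBMPString` is unexported, here is a standalone sketch of the same decoding logic outside the package (`decodeBMP` is an illustrative name, not part of the diff):

```go
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
	"unicode/utf16"
)

// decodeBMP mirrors the parseBMPString logic in the diff: pairs of bytes
// form big-endian 16-bit code units, with a trailing 0x0000 terminator
// stripped if present before utf16 decoding.
func decodeBMP(b []byte) (string, error) {
	if len(b)%2 != 0 {
		return "", errors.New("odd-length BMP string")
	}
	// Strip terminator if present.
	if l := len(b); l >= 2 && b[l-1] == 0 && b[l-2] == 0 {
		b = b[:l-2]
	}
	u := make([]uint16, 0, len(b)/2)
	for ; len(b) > 0; b = b[2:] {
		u = append(u, uint16(b[0])<<8|uint16(b[1]))
	}
	return string(utf16.Decode(u)), nil
}

func main() {
	// "Beavis" example from RFC 7292, Appendix B (also used in the test above).
	raw, _ := hex.DecodeString("0042006500610076006900730000")
	s, err := decodeBMP(raw)
	fmt.Println(s, err) // Beavis <nil>
}
```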
diff --git a/src/encoding/base32/base32.go b/src/encoding/base32/base32.go index e14d2d4987..2f7d3637e5 100644 --- a/src/encoding/base32/base32.go +++ b/src/encoding/base32/base32.go @@ -6,10 +6,8 @@ package base32 import ( - "bytes" "io" "strconv" - "strings" ) /* @@ -62,13 +60,6 @@ var StdEncoding = NewEncoding(encodeStd) // It is typically used in DNS. var HexEncoding = NewEncoding(encodeHex) -var removeNewlinesMapper = func(r rune) rune { - if r == '\r' || r == '\n' { - return -1 - } - return r -} - // WithPadding creates a new encoding identical to enc except // with a specified padding character, or NoPadding to disable padding. // The padding character must not be '\r' or '\n', must not @@ -302,7 +293,7 @@ func (enc *Encoding) decode(dst, src []byte) (n int, end bool, err error) { // We have reached the end and are missing padding return n, false, CorruptInputError(olen - len(src) - j) } - // We have reached the end and are not expecing any padding + // We have reached the end and are not expecting any padding dlen, end = j, true break } @@ -372,17 +363,18 @@ func (enc *Encoding) decode(dst, src []byte) (n int, end bool, err error) { // number of bytes successfully written and CorruptInputError. // New line characters (\r and \n) are ignored. func (enc *Encoding) Decode(dst, src []byte) (n int, err error) { - src = bytes.Map(removeNewlinesMapper, src) - n, _, err = enc.decode(dst, src) + buf := make([]byte, len(src)) + l := stripNewlines(buf, src) + n, _, err = enc.decode(dst, buf[:l]) return } // DecodeString returns the bytes represented by the base32 string s. 
func (enc *Encoding) DecodeString(s string) ([]byte, error) { - s = strings.Map(removeNewlinesMapper, s) - dbuf := make([]byte, enc.DecodedLen(len(s))) - n, _, err := enc.decode(dbuf, []byte(s)) - return dbuf[:n], err + buf := []byte(s) + l := stripNewlines(buf, buf) + n, _, err := enc.decode(buf, buf[:l]) + return buf[:n], err } type decoder struct { @@ -497,18 +489,25 @@ type newlineFilteringReader struct { wrapped io.Reader } +// stripNewlines removes newline characters and returns the number +// of non-newline characters copied to dst. +func stripNewlines(dst, src []byte) int { + offset := 0 + for _, b := range src { + if b == '\r' || b == '\n' { + continue + } + dst[offset] = b + offset++ + } + return offset +} + func (r *newlineFilteringReader) Read(p []byte) (int, error) { n, err := r.wrapped.Read(p) for n > 0 { - offset := 0 - for i, b := range p[0:n] { - if b != '\r' && b != '\n' { - if i != offset { - p[offset] = b - } - offset++ - } - } + s := p[0:n] + offset := stripNewlines(s, s) if err != nil || offset > 0 { return offset, err } diff --git a/src/encoding/base32/base32_test.go b/src/encoding/base32/base32_test.go index eb14f1eb26..0b611db0b2 100644 --- a/src/encoding/base32/base32_test.go +++ b/src/encoding/base32/base32_test.go @@ -445,6 +445,15 @@ LNEBUWIIDFON2CA3DBMJXXE5LNFY== } } +func BenchmarkEncode(b *testing.B) { + data := make([]byte, 8192) + buf := make([]byte, StdEncoding.EncodedLen(len(data))) + b.SetBytes(int64(len(data))) + for i := 0; i < b.N; i++ { + StdEncoding.Encode(buf, data) + } +} + func BenchmarkEncodeToString(b *testing.B) { data := make([]byte, 8192) b.SetBytes(int64(len(data))) @@ -453,6 +462,15 @@ func BenchmarkEncodeToString(b *testing.B) { } } +func BenchmarkDecode(b *testing.B) { + data := make([]byte, StdEncoding.EncodedLen(8192)) + StdEncoding.Encode(data, make([]byte, 8192)) + buf := make([]byte, 8192) + b.SetBytes(int64(len(data))) + for i := 0; i < b.N; i++ { + StdEncoding.Decode(buf, data) + } +} func 
BenchmarkDecodeString(b *testing.B) { data := StdEncoding.EncodeToString(make([]byte, 8192)) b.SetBytes(int64(len(data))) diff --git a/src/encoding/csv/fuzz.go b/src/encoding/csv/fuzz.go index dc33893dd7..8be21d5d28 100644 --- a/src/encoding/csv/fuzz.go +++ b/src/encoding/csv/fuzz.go @@ -17,13 +17,13 @@ func Fuzz(data []byte) int { buf := new(bytes.Buffer) for _, tt := range []Reader{ - Reader{}, - Reader{Comma: ';'}, - Reader{Comma: '\t'}, - Reader{LazyQuotes: true}, - Reader{TrimLeadingSpace: true}, - Reader{Comment: '#'}, - Reader{Comment: ';'}, + {}, + {Comma: ';'}, + {Comma: '\t'}, + {LazyQuotes: true}, + {TrimLeadingSpace: true}, + {Comment: '#'}, + {Comment: ';'}, } { r := NewReader(bytes.NewReader(data)) r.Comma = tt.Comma diff --git a/src/encoding/gob/codec_test.go b/src/encoding/gob/codec_test.go index 494abc9b91..f38e88b638 100644 --- a/src/encoding/gob/codec_test.go +++ b/src/encoding/gob/codec_test.go @@ -591,7 +591,7 @@ func TestEndToEnd(t *testing.T) { B: 18, C: -5, M: map[string]*float64{"pi": &pi, "e": &e}, - M2: map[int]T3{4: T3{X: pi, Z: &meaning}, 10: T3{X: e, Z: &fingers}}, + M2: map[int]T3{4: {X: pi, Z: &meaning}, 10: {X: e, Z: &fingers}}, Mstring: map[string]string{"pi": "3.14", "e": "2.71"}, Mintptr: map[int]*int{meaning: &fingers, fingers: &meaning}, Mcomp: map[complex128]complex128{comp1: comp2, comp2: comp1}, diff --git a/src/encoding/json/bench_test.go b/src/encoding/json/bench_test.go index f2592e3dbd..f92d39f0c6 100644 --- a/src/encoding/json/bench_test.go +++ b/src/encoding/json/bench_test.go @@ -297,6 +297,22 @@ func BenchmarkIssue10335(b *testing.B) { }) } +func BenchmarkIssue34127(b *testing.B) { + b.ReportAllocs() + j := struct { + Bar string `json:"bar,string"` + }{ + Bar: `foobar`, + } + b.RunParallel(func(pb *testing.PB) { + for pb.Next() { + if _, err := Marshal(&j); err != nil { + b.Fatal(err) + } + } + }) +} + func BenchmarkUnmapped(b *testing.B) { b.ReportAllocs() j := []byte(`{"s": "hello", "y": 2, "o": {"x": 0}, "a": [1, 
99, {"x": 1}]}`) diff --git a/src/encoding/json/decode.go b/src/encoding/json/decode.go index df1c085917..360fc69d04 100644 --- a/src/encoding/json/decode.go +++ b/src/encoding/json/decode.go @@ -72,7 +72,8 @@ import ( // use. If the map is nil, Unmarshal allocates a new map. Otherwise Unmarshal // reuses the existing map, keeping existing entries. Unmarshal then stores // key-value pairs from the JSON object into the map. The map's key type must -// either be a string, an integer, or implement encoding.TextUnmarshaler. +// either be any string type, an integer, implement json.Unmarshaler, or +// implement encoding.TextUnmarshaler. // // If a JSON value is not appropriate for a given target type, // or if a JSON number overflows the target type, Unmarshal @@ -415,8 +416,9 @@ func (d *decodeState) valueQuoted() interface{} { // indirect walks down v allocating pointers as needed, // until it gets to a non-pointer. -// if it encounters an Unmarshaler, indirect stops and returns that. -// if decodingNull is true, indirect stops at the last pointer so it can be set to nil. +// If it encounters an Unmarshaler, indirect stops and returns that. +// If decodingNull is true, indirect stops at the first settable pointer so it +// can be set to nil. 
func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnmarshaler, reflect.Value) { // Issue #24153 indicates that it is generally not a guaranteed property // that you may round-trip a reflect.Value by calling Value.Addr().Elem() @@ -455,7 +457,7 @@ func indirect(v reflect.Value, decodingNull bool) (Unmarshaler, encoding.TextUnm break } - if v.Elem().Kind() != reflect.Ptr && decodingNull && v.CanSet() { + if decodingNull && v.CanSet() { break } diff --git a/src/encoding/json/decode_test.go b/src/encoding/json/decode_test.go index 72d384a80f..489f8674d0 100644 --- a/src/encoding/json/decode_test.go +++ b/src/encoding/json/decode_test.go @@ -401,6 +401,11 @@ type B struct { B bool `json:",string"` } +type DoublePtr struct { + I **int + J **int +} + var unmarshalTests = []unmarshalTest{ // basic types {in: `true`, ptr: new(bool), out: true}, @@ -656,6 +661,11 @@ var unmarshalTests = []unmarshalTest{ err: fmt.Errorf("json: unknown field \"X\""), disallowUnknownFields: true, }, + { + in: `{"I": 0, "I": null, "J": null}`, + ptr: new(DoublePtr), + out: DoublePtr{I: nil, J: nil}, + }, // invalid UTF-8 is coerced to valid UTF-8. { diff --git a/src/encoding/json/encode.go b/src/encoding/json/encode.go index f085b5a08d..e5dd1b7799 100644 --- a/src/encoding/json/encode.go +++ b/src/encoding/json/encode.go @@ -164,7 +164,6 @@ func Marshal(v interface{}) ([]byte, error) { } buf := append([]byte(nil), e.Bytes()...) 
- e.Reset() encodeStatePool.Put(e) return buf, nil @@ -482,7 +481,11 @@ func textMarshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) { e.WriteString("null") return } - m := v.Interface().(encoding.TextMarshaler) + m, ok := v.Interface().(encoding.TextMarshaler) + if !ok { + e.WriteString("null") + return + } b, err := m.MarshalText() if err != nil { e.error(&MarshalerError{v.Type(), err}) @@ -601,11 +604,11 @@ func stringEncoder(e *encodeState, v reflect.Value, opts encOpts) { return } if opts.quoted { - sb, err := Marshal(v.String()) - if err != nil { - e.error(err) - } - e.string(string(sb), opts.escapeHTML) + b := make([]byte, 0, v.Len()+2) + b = append(b, '"') + b = append(b, []byte(v.String())...) + b = append(b, '"') + e.stringBytes(b, opts.escapeHTML) } else { e.string(v.String(), opts.escapeHTML) } diff --git a/src/encoding/json/encode_test.go b/src/encoding/json/encode_test.go index 642f397fb9..18a92bae7c 100644 --- a/src/encoding/json/encode_test.go +++ b/src/encoding/json/encode_test.go @@ -6,6 +6,7 @@ package json import ( "bytes" + "encoding" "fmt" "log" "math" @@ -453,18 +454,31 @@ type BugX struct { BugB } -// Issue 16042. Even if a nil interface value is passed in -// as long as it implements MarshalJSON, it should be marshaled. -type nilMarshaler string +// golang.org/issue/16042. +// Even if a nil interface value is passed in, as long as +// it implements Marshaler, it should be marshaled. +type nilJSONMarshaler string -func (nm *nilMarshaler) MarshalJSON() ([]byte, error) { +func (nm *nilJSONMarshaler) MarshalJSON() ([]byte, error) { if nm == nil { return Marshal("0zenil0") } return Marshal("zenil:" + string(*nm)) } -// Issue 16042. +// golang.org/issue/34235. +// Even if a nil interface value is passed in, as long as +// it implements encoding.TextMarshaler, it should be marshaled. 
+type nilTextMarshaler string
+
+func (nm *nilTextMarshaler) MarshalText() ([]byte, error) {
+	if nm == nil {
+		return []byte("0zenil0"), nil
+	}
+	return []byte("zenil:" + string(*nm)), nil
+}
+
+// See golang.org/issue/16042 and golang.org/issue/34235.
 func TestNilMarshal(t *testing.T) {
 	testCases := []struct {
 		v    interface{}
@@ -478,8 +492,11 @@
 		{v: []byte(nil), want: `null`},
 		{v: struct{ M string }{"gopher"}, want: `{"M":"gopher"}`},
 		{v: struct{ M Marshaler }{}, want: `{"M":null}`},
-		{v: struct{ M Marshaler }{(*nilMarshaler)(nil)}, want: `{"M":"0zenil0"}`},
-		{v: struct{ M interface{} }{(*nilMarshaler)(nil)}, want: `{"M":null}`},
+		{v: struct{ M Marshaler }{(*nilJSONMarshaler)(nil)}, want: `{"M":"0zenil0"}`},
+		{v: struct{ M interface{} }{(*nilJSONMarshaler)(nil)}, want: `{"M":null}`},
+		{v: struct{ M encoding.TextMarshaler }{}, want: `{"M":null}`},
+		{v: struct{ M encoding.TextMarshaler }{(*nilTextMarshaler)(nil)}, want: `{"M":"0zenil0"}`},
+		{v: struct{ M interface{} }{(*nilTextMarshaler)(nil)}, want: `{"M":null}`},
 	}
 
 	for _, tt := range testCases {
@@ -796,8 +813,8 @@ func TestTextMarshalerMapKeysAreSorted(t *testing.T) {
 // https://golang.org/issue/33675
 func TestNilMarshalerTextMapKey(t *testing.T) {
 	b, err := Marshal(map[*unmarshalerText]int{
-		(*unmarshalerText)(nil):    1,
-		&unmarshalerText{"A", "B"}: 2,
+		(*unmarshalerText)(nil): 1,
+		{"A", "B"}:              2,
 	})
 	if err != nil {
 		t.Fatalf("Failed to Marshal *text.Marshaler: %v", err)
diff --git a/src/encoding/json/stream_test.go b/src/encoding/json/stream_test.go
index e3317ddeb0..ebb4f231d1 100644
--- a/src/encoding/json/stream_test.go
+++ b/src/encoding/json/stream_test.go
@@ -118,6 +118,11 @@ func TestEncoderSetEscapeHTML(t *testing.T) {
 		Ptr    strPtrMarshaler
 	}{`"<str>"`, `"<str>"`}
 
+	// https://golang.org/issue/34154
+	stringOption := struct {
+		Bar string `json:"bar,string"`
+	}{`<html>foobar</html>`}
+
 	for _, tt := range []struct {
 		name       string
 		v          interface{}
@@ -137,6 +142,11 @@ func TestEncoderSetEscapeHTML(t *testing.T) {
 			`{"NonPtr":"\u003cstr\u003e","Ptr":"\u003cstr\u003e"}`,
 			`{"NonPtr":"<str>","Ptr":"<str>"}`,
 		},
+		{
+			"stringOption", stringOption,
+			`{"bar":"\"\u003chtml\u003efoobar\u003c/html\u003e\""}`,
+			`{"bar":"\"<html>foobar</html>\""}`,
+		},
 	} {
 		var buf bytes.Buffer
 		enc := NewEncoder(&buf)
diff --git a/src/go/build/build.go b/src/go/build/build.go
index f8547606aa..722fead20e 100644
--- a/src/go/build/build.go
+++ b/src/go/build/build.go
@@ -981,7 +981,7 @@ Found:
 var errNoModules = errors.New("not using modules")
 
 // importGo checks whether it can use the go command to find the directory for path.
-// If using the go command is not appopriate, importGo returns errNoModules.
+// If using the go command is not appropriate, importGo returns errNoModules.
 // Otherwise, importGo tries using the go command and reports whether that succeeded.
 // Using the go command lets build.Import and build.Context.Import find code
 // in Go modules. In the long term we want tools to use go/packages (currently golang.org/x/tools/go/packages),
diff --git a/src/go/build/deps_test.go b/src/go/build/deps_test.go
index 6b5772226e..c914d66b4d 100644
--- a/src/go/build/deps_test.go
+++ b/src/go/build/deps_test.go
@@ -244,51 +244,51 @@ var pkgDeps = map[string][]string{
 	"go/types": {"L4", "GOPARSER", "container/heap", "go/constant"},
 
 	// One of a kind.
-	"archive/tar": {"L4", "OS", "syscall", "os/user"},
-	"archive/zip": {"L4", "OS", "compress/flate"},
-	"container/heap": {"sort"},
-	"compress/bzip2": {"L4"},
-	"compress/flate": {"L4"},
-	"compress/gzip": {"L4", "compress/flate"},
-	"compress/lzw": {"L4"},
-	"compress/zlib": {"L4", "compress/flate"},
-	"context": {"errors", "internal/reflectlite", "sync", "time"},
-	"database/sql": {"L4", "container/list", "context", "database/sql/driver", "database/sql/internal"},
-	"database/sql/driver": {"L4", "context", "time", "database/sql/internal"},
-	"debug/dwarf": {"L4"},
-	"debug/elf": {"L4", "OS", "debug/dwarf", "compress/zlib"},
-	"debug/gosym": {"L4"},
-	"debug/macho": {"L4", "OS", "debug/dwarf", "compress/zlib"},
-	"debug/pe": {"L4", "OS", "debug/dwarf", "compress/zlib"},
-	"debug/plan9obj": {"L4", "OS"},
-	"encoding": {"L4"},
-	"encoding/ascii85": {"L4"},
-	"encoding/asn1": {"L4", "math/big"},
-	"encoding/csv": {"L4"},
-	"encoding/gob": {"L4", "OS", "encoding"},
-	"encoding/hex": {"L4"},
-	"encoding/json": {"L4", "encoding"},
-	"encoding/pem": {"L4"},
-	"encoding/xml": {"L4", "encoding"},
-	"flag": {"L4", "OS"},
-	"go/build": {"L4", "OS", "GOPARSER", "internal/goroot", "internal/goversion"},
-	"html": {"L4"},
-	"image/draw": {"L4", "image/internal/imageutil"},
-	"image/gif": {"L4", "compress/lzw", "image/color/palette", "image/draw"},
-	"image/internal/imageutil": {"L4"},
-	"image/jpeg": {"L4", "image/internal/imageutil"},
-	"image/png": {"L4", "compress/zlib"},
-	"index/suffixarray": {"L4", "regexp"},
-	"internal/goroot": {"L4", "OS"},
-	"internal/singleflight": {"sync"},
-	"internal/trace": {"L4", "OS", "container/heap"},
-	"internal/xcoff": {"L4", "OS", "debug/dwarf"},
-	"math/big": {"L4"},
-	"mime": {"L4", "OS", "syscall", "internal/syscall/windows/registry"},
-	"mime/quotedprintable": {"L4"},
-	"net/internal/socktest": {"L4", "OS", "syscall", "internal/syscall/windows"},
-	"net/url": {"L4"},
-	"plugin": {"L0", "OS", "CGO"},
+	"archive/tar":              {"L4", "OS", "syscall", "os/user"},
+	"archive/zip":              {"L4", "OS", "compress/flate"},
+	"container/heap":           {"sort"},
+	"compress/bzip2":           {"L4"},
+	"compress/flate":           {"L4"},
+	"compress/gzip":            {"L4", "compress/flate"},
+	"compress/lzw":             {"L4"},
+	"compress/zlib":            {"L4", "compress/flate"},
+	"context":                  {"errors", "internal/reflectlite", "sync", "time"},
+	"database/sql":             {"L4", "container/list", "context", "database/sql/driver", "database/sql/internal"},
+	"database/sql/driver":      {"L4", "context", "time", "database/sql/internal"},
+	"debug/dwarf":              {"L4"},
+	"debug/elf":                {"L4", "OS", "debug/dwarf", "compress/zlib"},
+	"debug/gosym":              {"L4"},
+	"debug/macho":              {"L4", "OS", "debug/dwarf", "compress/zlib"},
+	"debug/pe":                 {"L4", "OS", "debug/dwarf", "compress/zlib"},
+	"debug/plan9obj":           {"L4", "OS"},
+	"encoding":                 {"L4"},
+	"encoding/ascii85":         {"L4"},
+	"encoding/asn1":            {"L4", "math/big"},
+	"encoding/csv":             {"L4"},
+	"encoding/gob":             {"L4", "OS", "encoding"},
+	"encoding/hex":             {"L4"},
+	"encoding/json":            {"L4", "encoding"},
+	"encoding/pem":             {"L4"},
+	"encoding/xml":             {"L4", "encoding"},
+	"flag":                     {"L4", "OS"},
+	"go/build":                 {"L4", "OS", "GOPARSER", "internal/goroot", "internal/goversion"},
+	"html":                     {"L4"},
+	"image/draw":               {"L4", "image/internal/imageutil"},
+	"image/gif":                {"L4", "compress/lzw", "image/color/palette", "image/draw"},
+	"image/internal/imageutil": {"L4"},
+	"image/jpeg":               {"L4", "image/internal/imageutil"},
+	"image/png":                {"L4", "compress/zlib"},
+	"index/suffixarray":        {"L4", "regexp"},
+	"internal/goroot":          {"L4", "OS"},
+	"internal/singleflight":    {"sync"},
+	"internal/trace":           {"L4", "OS", "container/heap"},
+	"internal/xcoff":           {"L4", "OS", "debug/dwarf"},
+	"math/big":                 {"L4"},
+	"mime":                     {"L4", "OS", "syscall", "internal/syscall/windows/registry"},
+	"mime/quotedprintable":     {"L4"},
+	"net/internal/socktest":    {"L4", "OS", "syscall", "internal/syscall/windows"},
+	"net/url":                  {"L4"},
+	"plugin":                   {"L0", "OS", "CGO"},
 	"runtime/pprof/internal/profile": {"L4", "OS", "compress/gzip", "regexp"},
 	"testing/internal/testdeps":      {"L4", "internal/testlog", "runtime/pprof", "regexp"},
 	"text/scanner":                   {"L4", "OS"},
diff --git a/src/go/internal/gccgoimporter/importer_test.go b/src/go/internal/gccgoimporter/importer_test.go
index beb39e6089..37250fde41 100644
--- a/src/go/internal/gccgoimporter/importer_test.go
+++ b/src/go/internal/gccgoimporter/importer_test.go
@@ -96,6 +96,7 @@ var importerTests = [...]importerTest{
 	{pkgpath: "issue29198", name: "FooServer", gccgoVersion: 7, want: "type FooServer struct{FooServer *FooServer; user string; ctx context.Context}"},
 	{pkgpath: "issue30628", name: "Apple", want: "type Apple struct{hey sync.RWMutex; x int; RQ [517]struct{Count uintptr; NumBytes uintptr; Last uintptr}}"},
 	{pkgpath: "issue31540", name: "S", gccgoVersion: 7, want: "type S struct{b int; map[Y]Z}"},
+	{pkgpath: "issue34182", name: "T1", want: "type T1 struct{f *T2}"},
 }
 
 func TestGoxImporter(t *testing.T) {
diff --git a/src/go/internal/gccgoimporter/parser.go b/src/go/internal/gccgoimporter/parser.go
index 76f30bb9ca..9204b004f9 100644
--- a/src/go/internal/gccgoimporter/parser.go
+++ b/src/go/internal/gccgoimporter/parser.go
@@ -254,7 +254,7 @@ func (p *parser) parseField(pkg *types.Package) (field *types.Var, tag string) {
 	case *types.Named:
 		name = typ.Obj().Name()
 	default:
-		p.error("anonymous field expected")
+		p.error("embedded field expected")
 	}
 	}
 }
@@ -469,31 +469,47 @@ func (p *parser) reserve(n int) {
 	}
 }
 
-// update sets the type map entries for the given type numbers nlist to t.
-func (p *parser) update(t types.Type, nlist []int) {
-	if len(nlist) != 0 {
-		if t == reserved {
-			p.errorf("internal error: update(%v) invoked on reserved", nlist)
-		}
-		if t == nil {
-			p.errorf("internal error: update(%v) invoked on nil", nlist)
-		}
+// update sets the type map entries for the entries in nlist to t.
+// An entry in nlist can be a type number in p.typeList,
+// used to resolve named types, or it can be a *types.Pointer,
+// used to resolve pointers to named types in case they are referenced
+// by embedded fields.
+func (p *parser) update(t types.Type, nlist []interface{}) {
+	if t == reserved {
+		p.errorf("internal error: update(%v) invoked on reserved", nlist)
+	}
+	if t == nil {
+		p.errorf("internal error: update(%v) invoked on nil", nlist)
 	}
 	for _, n := range nlist {
-		if p.typeList[n] == t {
-			continue
-		}
-		if p.typeList[n] != reserved {
-			p.errorf("internal error: update(%v): %d not reserved", nlist, n)
+		switch n := n.(type) {
+		case int:
+			if p.typeList[n] == t {
+				continue
+			}
+			if p.typeList[n] != reserved {
+				p.errorf("internal error: update(%v): %d not reserved", nlist, n)
+			}
+			p.typeList[n] = t
+		case *types.Pointer:
+			if *n != (types.Pointer{}) {
+				elem := n.Elem()
+				if elem == t {
+					continue
+				}
+				p.errorf("internal error: update: pointer already set to %v, expected %v", elem, t)
+			}
+			*n = *types.NewPointer(t)
+		default:
+			p.errorf("internal error: %T on nlist", n)
 		}
-		p.typeList[n] = t
 	}
 }
 
 // NamedType = TypeName [ "=" ] Type { Method } .
 // TypeName  = ExportedName .
 // Method    = "func" "(" Param ")" Name ParamList ResultList [InlineBody] ";" .
-func (p *parser) parseNamedType(nlist []int) types.Type {
+func (p *parser) parseNamedType(nlist []interface{}) types.Type {
 	pkg, name := p.parseExportedName()
 	scope := pkg.Scope()
 	obj := scope.Lookup(name)
@@ -504,7 +520,7 @@ func (p *parser) parseNamedType(nlist []interface{}) types.Type {
 	// type alias
 	if p.tok == '=' {
 		p.next()
-		p.aliases[nlist[len(nlist)-1]] = name
+		p.aliases[nlist[len(nlist)-1].(int)] = name
 		if obj != nil {
 			// use the previously imported (canonical) type
 			t := obj.Type()
@@ -603,7 +619,7 @@ func (p *parser) parseInt() int {
 }
 
 // ArrayOrSliceType = "[" [ int ] "]" Type .
-func (p *parser) parseArrayOrSliceType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseArrayOrSliceType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expect('[')
 	if p.tok == ']' {
 		p.next()
@@ -626,7 +642,7 @@ func (p *parser) parseArrayOrSliceType(pkg *types.Package, nlist []int) types.Ty
 }
 
 // MapType = "map" "[" Type "]" Type .
-func (p *parser) parseMapType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseMapType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expectKeyword("map")
 
 	t := new(types.Map)
@@ -642,7 +658,7 @@ func (p *parser) parseMapType(pkg *types.Package, nlist []int) types.Type {
 }
 
 // ChanType = "chan" ["<-" | "-<"] Type .
-func (p *parser) parseChanType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseChanType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expectKeyword("chan")
 
 	t := new(types.Chan)
@@ -669,7 +685,7 @@ func (p *parser) parseChanType(pkg *types.Package, nlist []int) types.Type {
 }
 
 // StructType = "struct" "{" { Field } "}" .
-func (p *parser) parseStructType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseStructType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expectKeyword("struct")
 
 	t := new(types.Struct)
@@ -736,7 +752,7 @@ func (p *parser) parseResultList(pkg *types.Package) *types.Tuple {
 }
 
 // FunctionType = ParamList ResultList .
-func (p *parser) parseFunctionType(pkg *types.Package, nlist []int) *types.Signature {
+func (p *parser) parseFunctionType(pkg *types.Package, nlist []interface{}) *types.Signature {
 	t := new(types.Signature)
 	p.update(t, nlist)
 
@@ -776,7 +792,7 @@ func (p *parser) parseFunc(pkg *types.Package) *types.Func {
 }
 
 // InterfaceType = "interface" "{" { ("?" Type | Func) ";" } "}" .
-func (p *parser) parseInterfaceType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseInterfaceType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expectKeyword("interface")
 
 	t := new(types.Interface)
@@ -805,7 +821,7 @@ func (p *parser) parseInterfaceType(pkg *types.Package, nlist []int) types.Type
 }
 
 // PointerType = "*" ("any" | Type) .
-func (p *parser) parsePointerType(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parsePointerType(pkg *types.Package, nlist []interface{}) types.Type {
 	p.expect('*')
 	if p.tok == scanner.Ident {
 		p.expectKeyword("any")
@@ -817,13 +833,13 @@ func (p *parser) parsePointerType(pkg *types.Package, nlist []int) types.Type {
 	t := new(types.Pointer)
 	p.update(t, nlist)
 
-	*t = *types.NewPointer(p.parseType(pkg))
+	*t = *types.NewPointer(p.parseType(pkg, t))
 
 	return t
 }
 
 // TypeSpec = NamedType | MapType | ChanType | StructType | InterfaceType | PointerType | ArrayOrSliceType | FunctionType .
-func (p *parser) parseTypeSpec(pkg *types.Package, nlist []int) types.Type {
+func (p *parser) parseTypeSpec(pkg *types.Package, nlist []interface{}) types.Type {
 	switch p.tok {
 	case scanner.String:
 		return p.parseNamedType(nlist)
@@ -912,14 +928,14 @@ func lookupBuiltinType(typ int) types.Type {
 //
 // parseType updates the type map to t for all type numbers n.
 //
-func (p *parser) parseType(pkg *types.Package, n ...int) types.Type {
+func (p *parser) parseType(pkg *types.Package, n ...interface{}) types.Type {
 	p.expect('<')
 	t, _ := p.parseTypeAfterAngle(pkg, n...)
 	return t
 }
 
 // (*parser).Type after reading the "<".
-func (p *parser) parseTypeAfterAngle(pkg *types.Package, n ...int) (t types.Type, n1 int) {
+func (p *parser) parseTypeAfterAngle(pkg *types.Package, n ...interface{}) (t types.Type, n1 int) {
 	p.expectKeyword("type")
 
 	n1 = 0
@@ -962,7 +978,7 @@ func (p *parser) parseTypeAfterAngle(pkg *types.Package, n ...int) (t types.Type
 // parseTypeExtended is identical to parseType, but if the type in
 // question is a saved type, returns the index as well as the type
 // pointer (index returned is zero if we parsed a builtin).
-func (p *parser) parseTypeExtended(pkg *types.Package, n ...int) (t types.Type, n1 int) {
+func (p *parser) parseTypeExtended(pkg *types.Package, n ...interface{}) (t types.Type, n1 int) {
 	p.expect('<')
 	t, n1 = p.parseTypeAfterAngle(pkg, n...)
 	return
@@ -1044,12 +1060,12 @@ func (p *parser) parseTypes(pkg *types.Package) {
 	}
 
 	for i := 1; i < int(exportedp1); i++ {
-		p.parseSavedType(pkg, i, []int{})
+		p.parseSavedType(pkg, i, nil)
 	}
 }
 
 // parseSavedType parses one saved type definition.
-func (p *parser) parseSavedType(pkg *types.Package, i int, nlist []int) {
+func (p *parser) parseSavedType(pkg *types.Package, i int, nlist []interface{}) {
 	defer func(s *scanner.Scanner, tok rune, lit string) {
 		p.scanner = s
 		p.tok = tok
diff --git a/src/go/internal/gccgoimporter/testdata/issue34182.go b/src/go/internal/gccgoimporter/testdata/issue34182.go
new file mode 100644
index 0000000000..2a5c333a05
--- /dev/null
+++ b/src/go/internal/gccgoimporter/testdata/issue34182.go
@@ -0,0 +1,17 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package issue34182
+
+type T1 struct {
+	f *T2
+}
+
+type T2 struct {
+	f T3
+}
+
+type T3 struct {
+	*T2
+}
diff --git a/src/go/internal/gccgoimporter/testdata/issue34182.gox b/src/go/internal/gccgoimporter/testdata/issue34182.gox
new file mode 100644
index 0000000000..671a7d62d6
--- /dev/null
+++ b/src/go/internal/gccgoimporter/testdata/issue34182.gox
@@ -0,0 +1,13 @@
+v3;
+package issue34182
+pkgpath issue34182
+init issue34182 ~go.issue34182
+types 8 4 21 21 21 17 30 45 45
+type 1 "T1" <type 6>
+type 2 "T2" <type 7>
+type 3 "T3" <type 5>
+type 4 *<type 2>
+type 5 struct { ? <type 4>; }
+type 6 struct { .go.issue34182.f <type 4>; }
+type 7 struct { .go.issue34182.f <type 3>; }
+checksum FF02C49BAF44B06C087ED4E573F7CC880C79C208
diff --git a/src/go/parser/interface.go b/src/go/parser/interface.go
index 9de160a798..0c3f824c98 100644
--- a/src/go/parser/interface.go
+++ b/src/go/parser/interface.go
@@ -173,7 +173,7 @@ func ParseDir(fset *token.FileSet, path string, filter func(os.FileInfo) bool, m
 // be a valid Go (type or value) expression. Specifically, fset must not
 // be nil.
 //
-func ParseExprFrom(fset *token.FileSet, filename string, src interface{}, mode Mode) (ast.Expr, error) {
+func ParseExprFrom(fset *token.FileSet, filename string, src interface{}, mode Mode) (expr ast.Expr, err error) {
 	if fset == nil {
 		panic("parser.ParseExprFrom: no token.FileSet provided (fset == nil)")
 	}
diff --git a/src/go/parser/parser_test.go b/src/go/parser/parser_test.go
index fb35a88ba1..18c05bce20 100644
--- a/src/go/parser/parser_test.go
+++ b/src/go/parser/parser_test.go
@@ -42,6 +42,22 @@ func nameFilter(filename string) bool {
 
 func dirFilter(f os.FileInfo) bool { return nameFilter(f.Name()) }
 
+func TestParseFile(t *testing.T) {
+	src := "package p\nvar _=s[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]"
+	_, err := ParseFile(token.NewFileSet(), "", src, 0)
+	if err == nil {
+		t.Errorf("ParseFile(%s) succeeded unexpectedly", src)
+	}
+}
+
+func TestParseExprFrom(t *testing.T) {
+	src := "s[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]+\ns[::]"
+	_, err := ParseExprFrom(token.NewFileSet(), "", src, 0)
+	if err == nil {
+		t.Errorf("ParseExprFrom(%s) succeeded unexpectedly", src)
+	}
+}
+
 func TestParseDir(t *testing.T) {
 	path := "."
 	pkgs, err := ParseDir(token.NewFileSet(), path, dirFilter, 0)
diff --git a/src/go/types/builtins.go b/src/go/types/builtins.go
index 0a62f4beaf..fb660b5cc8 100644
--- a/src/go/types/builtins.go
+++ b/src/go/types/builtins.go
@@ -362,7 +362,7 @@ func (check *Checker) builtin(x *operand, call *ast.CallExpr, id builtinId) (_ b
 		// convert or check untyped argument
 		if isUntyped(x.typ) {
 			if x.mode == constant_ {
-				// an untyped constant number can alway be considered
+				// an untyped constant number can always be considered
 				// as a complex constant
 				if isNumeric(x.typ) {
 					x.typ = Typ[UntypedComplex]
diff --git a/src/go/types/call.go b/src/go/types/call.go
index 6220794718..1400e0f00b 100644
--- a/src/go/types/call.go
+++ b/src/go/types/call.go
@@ -439,7 +439,7 @@ func (check *Checker) selector(x *operand, e *ast.SelectorExpr) {
 			// Verify that LookupFieldOrMethod and MethodSet.Lookup agree.
 			// TODO(gri) This only works because we call LookupFieldOrMethod
 			// _before_ calling NewMethodSet: LookupFieldOrMethod completes
-			// any incomplete interfaces so they are avaible to NewMethodSet
+			// any incomplete interfaces so they are available to NewMethodSet
 			// (which assumes that interfaces have been completed already).
 			typ := x.typ
 			if x.mode == variable {
diff --git a/src/go/types/decl.go b/src/go/types/decl.go
index 1e2790a171..a13442c951 100644
--- a/src/go/types/decl.go
+++ b/src/go/types/decl.go
@@ -333,8 +333,8 @@ func (check *Checker) constDecl(obj *Const, typ, init ast.Expr) {
 	assert(obj.typ == nil)
 
 	// use the correct value of iota
+	defer func(iota constant.Value) { check.iota = iota }(check.iota)
 	check.iota = obj.val
-	defer func() { check.iota = nil }()
 
 	// provide valid constant value under all circumstances
 	obj.val = constant.MakeUnknown()
diff --git a/src/go/types/testdata/const0.src b/src/go/types/testdata/const0.src
index 19fb1bdbbe..adbbf2863b 100644
--- a/src/go/types/testdata/const0.src
+++ b/src/go/types/testdata/const0.src
@@ -308,6 +308,8 @@ const (
 		_ = unsafe.Sizeof([iota-1]int{} == x) // assert types are equal
 		_ = unsafe.Sizeof([Two]int{} == x)    // assert types are equal
 	)
+	var z [iota]int                  // [2]int
+	_ = unsafe.Sizeof([2]int{} == z) // assert types are equal
 	})
 	three = iota // the sequence continues
 )
@@ -334,3 +336,15 @@ var _ = []int64{
 	1 * 1e9,
 	5 * 1e9,
 }
+
+const _ = unsafe.Sizeof(func() {
+	const _ = 0
+	_ = iota
+
+	const (
+		zero = iota
+		one
+	)
+	assert(one == 1)
+	assert(iota == 0)
+})
diff --git a/src/image/gif/writer_test.go b/src/image/gif/writer_test.go
index 0bc24d1bee..b619961787 100644
--- a/src/image/gif/writer_test.go
+++ b/src/image/gif/writer_test.go
@@ -52,7 +52,7 @@ func averageDelta(m0, m1 image.Image) int64 {
 }
 
 // averageDeltaBounds returns the average delta in RGB space. The average delta is
-// calulated in the specified bounds.
+// calculated in the specified bounds.
 func averageDeltaBound(m0, m1 image.Image, b0, b1 image.Rectangle) int64 {
 	var sum, n int64
 	for y := b0.Min.Y; y < b0.Max.Y; y++ {
diff --git a/src/internal/bytealg/count_ppc64x.s b/src/internal/bytealg/count_ppc64x.s
index 7abdce1954..a64d7d792d 100644
--- a/src/internal/bytealg/count_ppc64x.s
+++ b/src/internal/bytealg/count_ppc64x.s
@@ -49,7 +49,7 @@ TEXT countbytebody<>(SB), NOSPLIT|NOFRAME, $0-0
 cmploop:
 	LXVW4X (R3), VS32 // load bytes from string
 
-	// when the bytes match, the corresonding byte contains all 1s
+	// when the bytes match, the corresponding byte contains all 1s
 	VCMPEQUB V1, V0, V2 // compare bytes
 	VPOPCNTD V2, V3     // each double word contains its count
 	VADDUDM  V3, V5, V5 // accumulate bit count in each double word
diff --git a/src/internal/cpu/cpu.go b/src/internal/cpu/cpu.go
index 76fc878abe..f326b06332 100644
--- a/src/internal/cpu/cpu.go
+++ b/src/internal/cpu/cpu.go
@@ -160,7 +160,7 @@ type option struct {
 
 // processOptions enables or disables CPU feature values based on the parsed env string.
 // The env string is expected to be of the form cpu.feature1=value1,cpu.feature2=value2...
-// where feature names is one of the architecture specifc list stored in the
+// where feature names is one of the architecture specific list stored in the
 // cpu packages options variable and values are either 'on' or 'off'.
 // If env contains cpu.all=off then all cpu features referenced through the options
 // variable are disabled. Other feature names and values result in warning messages.
diff --git a/src/internal/fmtsort/sort_test.go b/src/internal/fmtsort/sort_test.go
index e060d4bf51..aaa0004666 100644
--- a/src/internal/fmtsort/sort_test.go
+++ b/src/internal/fmtsort/sort_test.go
@@ -119,7 +119,7 @@ var sortTests = []sortTest{
 		"PTR0:0 PTR1:1 PTR2:2",
 	},
 	{
-		map[toy]string{toy{7, 2}: "72", toy{7, 1}: "71", toy{3, 4}: "34"},
+		map[toy]string{{7, 2}: "72", {7, 1}: "71", {3, 4}: "34"},
 		"{3 4}:34 {7 1}:71 {7 2}:72",
 	},
 	{
diff --git a/src/internal/reflectlite/value.go b/src/internal/reflectlite/value.go
index 308cf98fc8..6a493938f5 100644
--- a/src/internal/reflectlite/value.go
+++ b/src/internal/reflectlite/value.go
@@ -305,7 +305,7 @@ func (v Value) IsNil() bool {
 // IsValid reports whether v represents a value.
 // It returns false if v is the zero Value.
 // If IsValid returns false, all other methods except String panic.
-// Most functions and methods never return an invalid value.
+// Most functions and methods never return an invalid Value.
 // If one does, its documentation states the conditions explicitly.
 func (v Value) IsValid() bool {
 	return v.flag != 0
diff --git a/src/log/log.go b/src/log/log.go
index 12a9e7b8ce..1bf39ae9a3 100644
--- a/src/log/log.go
+++ b/src/log/log.go
@@ -25,8 +25,9 @@ import (
 
 // These flags define which text to prefix to each log entry generated by the Logger.
 // Bits are or'ed together to control what's printed.
-// There is no control over the order they appear (the order listed
-// here) or the format they present (as described in the comments).
+// With the exception of the Lmsgprefix flag, there is no
+// control over the order they appear (the order listed here)
+// or the format they present (as described in the comments).
 // The prefix is followed by a colon only when Llongfile or Lshortfile
 // is specified.
 // For example, flags Ldate | Ltime (or LstdFlags) produce,
@@ -40,6 +41,7 @@ const (
 	Llongfile                     // full file name and line number: /a/b/c/d.go:23
 	Lshortfile                    // final file name element and line number: d.go:23. overrides Llongfile
 	LUTC                          // if Ldate or Ltime is set, use UTC rather than the local time zone
+	Lmsgprefix                    // move the "prefix" from the beginning of the line to before the message
 	LstdFlags     = Ldate | Ltime // initial values for the standard logger
 )
 
@@ -49,7 +51,7 @@ const (
 // multiple goroutines; it guarantees to serialize access to the Writer.
 type Logger struct {
 	mu     sync.Mutex // ensures atomic writes; protects the following fields
-	prefix string     // prefix to write at beginning of each line
+	prefix string     // prefix on each line to identify the logger (but see Lmsgprefix)
 	flag   int        // properties
 	out    io.Writer  // destination for output
 	buf    []byte     // for accumulating text to write
@@ -57,7 +59,8 @@ type Logger struct {
 
 // New creates a new Logger. The out variable sets the
 // destination to which log data will be written.
-// The prefix appears at the beginning of each generated log line.
+// The prefix appears at the beginning of each generated log line, or
+// after the log header if the Lmsgprefix flag is provided.
 // The flag argument defines the logging properties.
 func New(out io.Writer, prefix string, flag int) *Logger {
 	return &Logger{out: out, prefix: prefix, flag: flag}
@@ -90,11 +93,14 @@ func itoa(buf *[]byte, i int, wid int) {
 }
 
 // formatHeader writes log header to buf in following order:
-//   * l.prefix (if it's not blank),
+//   * l.prefix (if it's not blank and Lmsgprefix is unset),
 //   * date and/or time (if corresponding flags are provided),
-//   * file and line number (if corresponding flags are provided).
+//   * file and line number (if corresponding flags are provided),
+//   * l.prefix (if it's not blank and Lmsgprefix is set).
 func (l *Logger) formatHeader(buf *[]byte, t time.Time, file string, line int) {
-	*buf = append(*buf, l.prefix...)
+	if l.flag&Lmsgprefix == 0 {
+		*buf = append(*buf, l.prefix...)
+	}
 	if l.flag&(Ldate|Ltime|Lmicroseconds) != 0 {
 		if l.flag&LUTC != 0 {
 			t = t.UTC()
@@ -138,6 +144,9 @@ func (l *Logger) formatHeader(buf *[]byte, t time.Time, file string, line int) {
 		itoa(buf, line, -1)
 		*buf = append(*buf, ": "...)
 	}
+	if l.flag&Lmsgprefix != 0 {
+		*buf = append(*buf, l.prefix...)
+	}
 }
 
 // Output writes the output for a logging event. The string s contains
diff --git a/src/log/log_test.go b/src/log/log_test.go
index b79251877e..cdccbc554d 100644
--- a/src/log/log_test.go
+++ b/src/log/log_test.go
@@ -20,7 +20,7 @@ const (
 	Rdate         = `[0-9][0-9][0-9][0-9]/[0-9][0-9]/[0-9][0-9]`
 	Rtime         = `[0-9][0-9]:[0-9][0-9]:[0-9][0-9]`
 	Rmicroseconds = `\.[0-9][0-9][0-9][0-9][0-9][0-9]`
-	Rline         = `(57|59):` // must update if the calls to l.Printf / l.Print below move
+	Rline         = `(60|62):` // must update if the calls to l.Printf / l.Print below move
 	Rlongfile     = `.*/[A-Za-z0-9_\-]+\.go:` + Rline
 	Rshortfile    = `[A-Za-z0-9_\-]+\.go:` + Rline
 )
@@ -37,6 +37,7 @@ var tests = []tester{
 	{0, "XXX", "XXX"},
 	{Ldate, "", Rdate + " "},
 	{Ltime, "", Rtime + " "},
+	{Ltime | Lmsgprefix, "XXX", Rtime + " XXX"},
 	{Ltime | Lmicroseconds, "", Rtime + Rmicroseconds + " "},
 	{Lmicroseconds, "", Rtime + Rmicroseconds + " "}, // microsec implies time
 	{Llongfile, "", Rlongfile + " "},
@@ -45,6 +46,8 @@ var tests = []tester{
 	// everything at once:
 	{Ldate | Ltime | Lmicroseconds | Llongfile, "XXX", "XXX" + Rdate + " " + Rtime + Rmicroseconds + " " + Rlongfile + " "},
 	{Ldate | Ltime | Lmicroseconds | Lshortfile, "XXX", "XXX" + Rdate + " " + Rtime + Rmicroseconds + " " + Rshortfile + " "},
+	{Ldate | Ltime | Lmicroseconds | Llongfile | Lmsgprefix, "XXX", Rdate + " " + Rtime + Rmicroseconds + " " + Rlongfile + " XXX"},
+	{Ldate | Ltime | Lmicroseconds | Lshortfile | Lmsgprefix, "XXX", Rdate + " " + Rtime + Rmicroseconds + " " + Rshortfile + " XXX"},
 }
 
 // Test using Println("hello", 23, "world") or using Printf("hello %d world", 23)
diff --git a/src/log/syslog/syslog_test.go b/src/log/syslog/syslog_test.go
index 447654a874..8a28d67c98 100644
--- a/src/log/syslog/syslog_test.go
+++ b/src/log/syslog/syslog_test.go
@@ -134,6 +134,9 @@ func startServer(n, la string, done chan<- string) (addr string, sock io.Closer,
 }
 
 func TestWithSimulated(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	t.Parallel()
 	msg := "Test 123"
 	var transport []string
@@ -272,6 +275,9 @@ func check(t *testing.T, in, out string) {
 }
 
 func TestWrite(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	t.Parallel()
 	tests := []struct {
 		pri Priority
diff --git a/src/math/big/int.go b/src/math/big/int.go
index 23221c083d..f4d9a08d72 100644
--- a/src/math/big/int.go
+++ b/src/math/big/int.go
@@ -468,7 +468,7 @@ func (x *Int) TrailingZeroBits() uint {
 // If m == nil or m == 0, z = x**y unless y <= 0 then z = 1. If m > 0, y < 0,
 // and x and n are not relatively prime, z is unchanged and nil is returned.
 //
-// Modular exponentation of inputs of a particular size is not a
+// Modular exponentiation of inputs of a particular size is not a
 // cryptographically constant-time operation.
 func (z *Int) Exp(x, y, m *Int) *Int {
 	// See Knuth, volume 2, section 4.6.3.
diff --git a/src/net/http/http_test.go b/src/net/http/http_test.go
index 8f466bb366..224b46c796 100644
--- a/src/net/http/http_test.go
+++ b/src/net/http/http_test.go
@@ -9,6 +9,7 @@ package http
 import (
 	"bytes"
 	"internal/testenv"
+	"net/url"
 	"os/exec"
 	"reflect"
 	"testing"
@@ -109,3 +110,28 @@ func TestCmdGoNoHTTPServer(t *testing.T) {
 		}
 	}
 }
+
+var valuesCount int
+
+func BenchmarkCopyValues(b *testing.B) {
+	b.ReportAllocs()
+	src := url.Values{
+		"a": {"1", "2", "3", "4", "5"},
+		"b": {"2", "2", "3", "4", "5"},
+		"c": {"3", "2", "3", "4", "5"},
+		"d": {"4", "2", "3", "4", "5"},
+		"e": {"1", "1", "2", "3", "4", "5", "6", "7", "abcdef", "l", "a", "b", "c", "d", "z"},
+		"j": {"1", "2"},
+		"m": nil,
+	}
+	for i := 0; i < b.N; i++ {
+		dst := url.Values{"a": {"b"}, "b": {"2"}, "c": {"3"}, "d": {"4"}, "j": nil, "m": {"x"}}
+		copyValues(dst, src)
+		if valuesCount = len(dst["a"]); valuesCount != 6 {
+			b.Fatalf(`%d items in dst["a"] but expected 6`, valuesCount)
+		}
+	}
+	if valuesCount == 0 {
+		b.Fatal("Benchmark wasn't run")
+	}
+}
diff --git a/src/net/http/request.go b/src/net/http/request.go
index 6e113f1607..0b195a89a6 100644
--- a/src/net/http/request.go
+++ b/src/net/http/request.go
@@ -1165,9 +1165,7 @@ func (l *maxBytesReader) Close() error {
 
 func copyValues(dst, src url.Values) {
 	for k, vs := range src {
-		for _, value := range vs {
-			dst.Add(k, value)
-		}
+		dst[k] = append(dst[k], vs...)
 	}
 }
diff --git a/src/net/http/transport.go b/src/net/http/transport.go
index b23e68f7b3..44dbbef43f 100644
--- a/src/net/http/transport.go
+++ b/src/net/http/transport.go
@@ -437,7 +437,7 @@ func (tr *transportRequest) setError(err error) {
 	tr.mu.Unlock()
 }
 
-// useRegisteredProtocol reports whether an alternate protocol (as reqistered
+// useRegisteredProtocol reports whether an alternate protocol (as registered
 // with Transport.RegisterProtocol) should be respected for this request.
 func (t *Transport) useRegisteredProtocol(req *Request) bool {
 	if req.URL.Scheme == "https" && req.requiresHTTP1() {
@@ -1903,7 +1903,7 @@ func (pc *persistConn) readLoop() {
 			}
 			return
 		}
-		pc.readLimit = maxInt64 // effictively no limit for response bodies
+		pc.readLimit = maxInt64 // effectively no limit for response bodies
 
 		pc.mu.Lock()
 		pc.numExpectedResponses--
diff --git a/src/net/interface_test.go b/src/net/interface_test.go
index fb6032fbc0..6cdfb6265f 100644
--- a/src/net/interface_test.go
+++ b/src/net/interface_test.go
@@ -51,6 +51,9 @@ func ipv6LinkLocalUnicastAddr(ifi *Interface) string {
 }
 
 func TestInterfaces(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	ift, err := Interfaces()
 	if err != nil {
 		t.Fatal(err)
@@ -82,6 +85,9 @@ func TestInterfaces(t *testing.T) {
 }
 
 func TestInterfaceAddrs(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	ift, err := Interfaces()
 	if err != nil {
 		t.Fatal(err)
@@ -101,6 +107,9 @@ func TestInterfaceAddrs(t *testing.T) {
 }
 
 func TestInterfaceUnicastAddrs(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	ift, err := Interfaces()
 	if err != nil {
 		t.Fatal(err)
@@ -128,6 +137,9 @@ func TestInterfaceUnicastAddrs(t *testing.T) {
 }
 
 func TestInterfaceMulticastAddrs(t *testing.T) {
+	if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") {
+		t.Skipf("sysctl is not supported on iOS")
+	}
 	ift, err := Interfaces()
 	if err != nil {
 		t.Fatal(err)
diff --git a/src/net/interface_unix_test.go b/src/net/interface_unix_test.go
index 20e75cd036..6a2b7f1a88 100644
--- a/src/net/interface_unix_test.go
+++ b/src/net/interface_unix_test.go
@@ -186,7 +186,7 @@ func TestInterfaceArrivalAndDepartureZoneCache(t *testing.T) {
 	}
 
 	// Ensure zoneCache is filled:
-	_, _ = Listen("tcp", "[fe80::1%nonexistant]:0")
+	_, _ = Listen("tcp", "[fe80::1%nonexistent]:0")
 
 	ti := &testInterface{local: "fe80::1"}
 	if err := ti.setLinkLocal(0); err != nil {
@@ -200,7 +200,7 @@ func TestInterfaceArrivalAndDepartureZoneCache(t *testing.T) {
 	time.Sleep(3 * time.Millisecond)
 
 	// If Listen fails (on Linux with “bind: invalid argument”), zoneCache was
-	// not updated when encountering a nonexistant interface:
+	// not updated when encountering a nonexistent interface:
 	ln, err := Listen("tcp", "[fe80::1%"+ti.name+"]:0")
 	if err != nil {
 		t.Fatal(err)
diff --git a/src/net/url/url.go b/src/net/url/url.go
index f29e658af9..bd706eac84 100644
--- a/src/net/url/url.go
+++ b/src/net/url/url.go
@@ -449,7 +449,7 @@ func getscheme(rawurl string) (scheme, path string, err error) {
 	return "", rawurl, nil
 }
 
-// split slices s into two substrings separated by the first occurence of
+// split slices s into two substrings separated by the first occurrence of
 // sep. If cutc is true then sep is included with the second substring.
 // If sep does not occur in s then s and the empty string is returned.
 func split(s string, sep byte, cutc bool) (string, string) {
diff --git a/src/os/file.go b/src/os/file.go
index 46ae1a46aa..9afc0ba360 100644
--- a/src/os/file.go
+++ b/src/os/file.go
@@ -250,7 +250,7 @@ func Mkdir(name string, perm FileMode) error {
 	return nil
 }
 
-// setStickyBit adds ModeSticky to the permision bits of path, non atomic.
+// setStickyBit adds ModeSticky to the permission bits of path, non atomic.
func setStickyBit(name string) error { fi, err := Stat(name) if err != nil { diff --git a/src/os/os_test.go b/src/os/os_test.go index 6c88d7e8b8..974374ec66 100644 --- a/src/os/os_test.go +++ b/src/os/os_test.go @@ -1521,6 +1521,9 @@ func testWindowsHostname(t *testing.T, hostname string) { } func TestHostname(t *testing.T) { + if runtime.GOOS == "darwin" && (runtime.GOARCH == "arm" || runtime.GOARCH == "arm64") { + t.Skipf("sysctl is not supported on iOS") + } hostname, err := Hostname() if err != nil { t.Fatal(err) diff --git a/src/os/os_windows_test.go b/src/os/os_windows_test.go index 2693904e56..651fe63b3f 100644 --- a/src/os/os_windows_test.go +++ b/src/os/os_windows_test.go @@ -1183,4 +1183,4 @@ func TestMkdirDevNull(t *testing.T) { if errno != syscall.ENOTDIR { t.Fatalf("error %d is not syscall.ENOTDIR", errno) } -} \ No newline at end of file +} diff --git a/src/path/path.go b/src/path/path.go index 5c905110a1..09a9d00c34 100644 --- a/src/path/path.go +++ b/src/path/path.go @@ -149,9 +149,10 @@ func Split(path string) (dir, file string) { return path[:i+1], path[i+1:] } -// Join joins any number of path elements into a single path, adding a -// separating slash if necessary. The result is Cleaned; in particular, -// all empty strings are ignored. +// Join joins the argument's path elements into a single path, +// separating them with slashes. The result is Cleaned. However, +// if the argument list is empty or all its elements are empty, +// Join returns an empty string. 
func Join(elem ...string) string { for i, e := range elem { if e != "" { diff --git a/src/reflect/all_test.go b/src/reflect/all_test.go index 4431ce2391..fbb6feb0d9 100644 --- a/src/reflect/all_test.go +++ b/src/reflect/all_test.go @@ -787,6 +787,7 @@ type Loopy interface{} var loop1, loop2 Loop var loopy1, loopy2 Loopy +var cycleMap1, cycleMap2, cycleMap3 map[string]interface{} func init() { loop1 = &loop2 @@ -794,6 +795,13 @@ func init() { loopy1 = &loopy2 loopy2 = &loopy1 + + cycleMap1 = map[string]interface{}{} + cycleMap1["cycle"] = cycleMap1 + cycleMap2 = map[string]interface{}{} + cycleMap2["cycle"] = cycleMap2 + cycleMap3 = map[string]interface{}{} + cycleMap3["different"] = cycleMap3 } var deepEqualTests = []DeepEqualTest{ @@ -860,6 +868,8 @@ var deepEqualTests = []DeepEqualTest{ {&loop1, &loop2, true}, {&loopy1, &loopy1, true}, {&loopy1, &loopy2, true}, + {&cycleMap1, &cycleMap2, true}, + {&cycleMap1, &cycleMap3, false}, } func TestDeepEqual(t *testing.T) { @@ -868,7 +878,7 @@ func TestDeepEqual(t *testing.T) { test.b = test.a } if r := DeepEqual(test.a, test.b); r != test.eq { - t.Errorf("DeepEqual(%v, %v) = %v, want %v", test.a, test.b, r, test.eq) + t.Errorf("DeepEqual(%#v, %#v) = %v, want %v", test.a, test.b, r, test.eq) } } } diff --git a/src/reflect/deepequal.go b/src/reflect/deepequal.go index 5b6694d3f0..f2d46165b5 100644 --- a/src/reflect/deepequal.go +++ b/src/reflect/deepequal.go @@ -33,18 +33,20 @@ func deepValueEqual(v1, v2 Value, visited map[visit]bool, depth int) bool { // We want to avoid putting more in the visited map than we need to. // For any possible reference cycle that might be encountered, - // hard(t) needs to return true for at least one of the types in the cycle. - hard := func(k Kind) bool { - switch k { + // hard(v1, v2) needs to return true for at least one of the types in the cycle, + // and it's safe and valid to get Value's internal pointer. 
+ hard := func(v1, v2 Value) bool { + switch v1.Kind() { case Map, Slice, Ptr, Interface: - return true + // Nil pointers cannot be cyclic. Avoid putting them in the visited map. + return !v1.IsNil() && !v2.IsNil() } return false } - if v1.CanAddr() && v2.CanAddr() && hard(v1.Kind()) { - addr1 := unsafe.Pointer(v1.UnsafeAddr()) - addr2 := unsafe.Pointer(v2.UnsafeAddr()) + if hard(v1, v2) { + addr1 := v1.ptr + addr2 := v2.ptr if uintptr(addr1) > uintptr(addr2) { // Canonicalize order to reduce number of entries in visited. // Assumes non-moving garbage collector. diff --git a/src/reflect/value.go b/src/reflect/value.go index 2e80bfe77f..7fec09962c 100644 --- a/src/reflect/value.go +++ b/src/reflect/value.go @@ -1076,7 +1076,7 @@ func (v Value) IsNil() bool { // IsValid reports whether v represents a value. // It returns false if v is the zero Value. // If IsValid returns false, all other methods except String panic. -// Most functions and methods never return an invalid value. +// Most functions and methods never return an invalid Value. // If one does, its documentation states the conditions explicitly. 
func (v Value) IsValid() bool { return v.flag != 0 diff --git a/src/regexp/example_test.go b/src/regexp/example_test.go index 2d87580eca..57b18e3fd7 100644 --- a/src/regexp/example_test.go +++ b/src/regexp/example_test.go @@ -181,6 +181,13 @@ func ExampleRegexp_MatchString() { // true } +func ExampleRegexp_NumSubexp() { + re := regexp.MustCompile(`(.*)((a)b)(.*)a`) + fmt.Println(re.NumSubexp()) + // Output: + // 4 +} + func ExampleRegexp_ReplaceAll() { re := regexp.MustCompile(`a(x*)b`) fmt.Printf("%s\n", re.ReplaceAll([]byte("-ab-axxb-"), []byte("T"))) diff --git a/src/runtime/crash_test.go b/src/runtime/crash_test.go index c54bb57da2..c2cab7c813 100644 --- a/src/runtime/crash_test.go +++ b/src/runtime/crash_test.go @@ -143,6 +143,15 @@ func buildTestProg(t *testing.T, binary string, flags ...string) (string, error) return exe, nil } +func TestVDSO(t *testing.T) { + t.Parallel() + output := runTestProg(t, "testprog", "SignalInVDSO") + want := "success\n" + if output != want { + t.Fatalf("output:\n%s\n\nwanted:\n%s", output, want) + } +} + var ( staleRuntimeOnce sync.Once // guards init of staleRuntimeErr staleRuntimeErr error diff --git a/src/runtime/map_benchmark_test.go b/src/runtime/map_benchmark_test.go index cf04ead115..bae1aa0dbd 100644 --- a/src/runtime/map_benchmark_test.go +++ b/src/runtime/map_benchmark_test.go @@ -251,7 +251,7 @@ func BenchmarkMapLast(b *testing.B) { } func BenchmarkMapCycle(b *testing.B) { - // Arrange map entries to be a permuation, so that + // Arrange map entries to be a permutation, so that // we hit all entries, and one lookup is data dependent // on the previous lookup. const N = 3127 diff --git a/src/runtime/map_test.go b/src/runtime/map_test.go index 593e32267d..1b7ccad6ed 100644 --- a/src/runtime/map_test.go +++ b/src/runtime/map_test.go @@ -435,11 +435,11 @@ func TestEmptyKeyAndValue(t *testing.T) { // ("quick keys") as well as long keys.
func TestSingleBucketMapStringKeys_DupLen(t *testing.T) { testMapLookups(t, map[string]string{ - "x": "x1val", - "xx": "x2val", - "foo": "fooval", - "bar": "barval", // same key length as "foo" - "xxxx": "x4val", + "x": "x1val", + "xx": "x2val", + "foo": "fooval", + "bar": "barval", // same key length as "foo" + "xxxx": "x4val", strings.Repeat("x", 128): "longval1", strings.Repeat("y", 128): "longval2", }) diff --git a/src/runtime/mgcscavenge.go b/src/runtime/mgcscavenge.go index 45a9eb2b2a..284e6698d1 100644 --- a/src/runtime/mgcscavenge.go +++ b/src/runtime/mgcscavenge.go @@ -74,7 +74,7 @@ func heapRetained() uint64 { // its rate and RSS goal. // // The RSS goal is based on the current heap goal with a small overhead -// to accomodate non-determinism in the allocator. +// to accommodate non-determinism in the allocator. // // The pacing is based on scavengePageRate, which applies to both regular and // huge pages. See that constant for more information. diff --git a/src/runtime/os2_aix.go b/src/runtime/os2_aix.go index d9b94d438c..b79fa08dc2 100644 --- a/src/runtime/os2_aix.go +++ b/src/runtime/os2_aix.go @@ -6,7 +6,7 @@ // Pollset syscalls are in netpoll_aix.go. // The implementation is based on Solaris and Windows. // Each syscall is made by calling its libc symbol using asmcgocall and asmsyscall6 -// asssembly functions. +// assembly functions. package runtime diff --git a/src/runtime/os_windows.go b/src/runtime/os_windows.go index d5af836f4e..567c567000 100644 --- a/src/runtime/os_windows.go +++ b/src/runtime/os_windows.go @@ -775,7 +775,7 @@ func newosproc(mp *m) { func newosproc0(mp *m, stk unsafe.Pointer) { // TODO: this is completely broken. The args passed to newosproc0 (in asm_amd64.s) // are stacksize and function, not *m and stack. - // Check os_linux.go for an implemention that might actually work. + // Check os_linux.go for an implementation that might actually work. 
throw("bad newosproc0") } diff --git a/src/runtime/pprof/label_test.go b/src/runtime/pprof/label_test.go index 240445f098..de39d85d3a 100644 --- a/src/runtime/pprof/label_test.go +++ b/src/runtime/pprof/label_test.go @@ -24,7 +24,7 @@ func (s labelSorter) Swap(i, j int) { s[i], s[j] = s[j], s[i] } func (s labelSorter) Less(i, j int) bool { return s[i].key < s[j].key } func TestContextLabels(t *testing.T) { - // Background context starts with no lablels. + // Background context starts with no labels. ctx := context.Background() labels := labelsSorted(ctx) if len(labels) != 0 { diff --git a/src/runtime/signal_unix.go b/src/runtime/signal_unix.go index 436c18c126..c9f57a7ba4 100644 --- a/src/runtime/signal_unix.go +++ b/src/runtime/signal_unix.go @@ -274,6 +274,21 @@ func sigpipe() { dieFromSignal(_SIGPIPE) } +// sigFetchG fetches the value of G safely when running in a signal handler. +// On some architectures, the g value may be clobbered when running in a VDSO. +// See issue #32912. +// +//go:nosplit +func sigFetchG(c *sigctxt) *g { + switch GOARCH { + case "arm", "arm64", "ppc64", "ppc64le": + if inVDSOPage(c.sigpc()) { + return nil + } + } + return getg() +} + // sigtrampgo is called from the signal handler function, sigtramp, // written in assembly code. // This is called by the signal handler, and the world may be stopped. @@ -289,9 +304,9 @@ func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer) { if sigfwdgo(sig, info, ctx) { return } - g := getg() + c := &sigctxt{info, ctx} + g := sigFetchG(c) if g == nil { - c := &sigctxt{info, ctx} if sig == _SIGPROF { sigprofNonGoPC(c.sigpc()) return @@ -347,7 +362,6 @@ func sigtrampgo(sig uint32, info *siginfo, ctx unsafe.Pointer) { signalDuringFork(sig) } - c := &sigctxt{info, ctx} c.fixsigcode(sig) sighandler(sig, info, ctx, g) setg(g) @@ -657,9 +671,10 @@ func sigfwdgo(sig uint32, info *siginfo, ctx unsafe.Pointer) bool { return false } // Determine if the signal occurred inside Go code. 
We test that: - // (1) we were in a goroutine (i.e., m.curg != nil), and - // (2) we weren't in CGO. - g := getg() + // (1) we weren't in VDSO page, + // (2) we were in a goroutine (i.e., m.curg != nil), and + // (3) we weren't in CGO. + g := sigFetchG(c) if g != nil && g.m != nil && g.m.curg != nil && !g.m.incgo { return false } diff --git a/src/runtime/signal_windows_test.go b/src/runtime/signal_windows_test.go index c56da15292..9748403412 100644 --- a/src/runtime/signal_windows_test.go +++ b/src/runtime/signal_windows_test.go @@ -28,7 +28,7 @@ func TestVectoredHandlerDontCrashOnLibrary(t *testing.T) { if err != nil { t.Fatalf("failed to create temp directory: %v", err) } - defer os.Remove(dir) + defer os.RemoveAll(dir) // build go dll dll := filepath.Join(dir, "testwinlib.dll") diff --git a/src/runtime/testdata/testprog/vdso.go b/src/runtime/testdata/testprog/vdso.go new file mode 100644 index 0000000000..6036f45bc8 --- /dev/null +++ b/src/runtime/testdata/testprog/vdso.go @@ -0,0 +1,55 @@ +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Invoke signal handler in the VDSO context (see issue 32912). + +package main + +import ( + "fmt" + "io/ioutil" + "os" + "runtime/pprof" + "time" +) + +func init() { + register("SignalInVDSO", signalInVDSO) +} + +func signalInVDSO() { + f, err := ioutil.TempFile("", "timeprofnow") + if err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(2) + } + + if err := pprof.StartCPUProfile(f); err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(2) + } + + t0 := time.Now() + t1 := t0 + // We should get a profiling signal 100 times a second, + // so running for 1 second should be sufficient.
+ for t1.Sub(t0) < time.Second { + t1 = time.Now() + } + + pprof.StopCPUProfile() + + name := f.Name() + if err := f.Close(); err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(2) + } + + if err := os.Remove(name); err != nil { + fmt.Fprintln(os.Stderr, err) + os.Exit(2) + } + + fmt.Println("success") +} diff --git a/src/runtime/vdso_linux.go b/src/runtime/vdso_linux.go index 71ba4ce416..8518276867 100644 --- a/src/runtime/vdso_linux.go +++ b/src/runtime/vdso_linux.go @@ -281,6 +281,7 @@ func vdsoauxv(tag, val uintptr) { } // vdsoMarker reports whether PC is on the VDSO page. +//go:nosplit func inVDSOPage(pc uintptr) bool { for _, k := range vdsoSymbolKeys { if *k.ptr != 0 { diff --git a/src/syscall/asm_linux_arm64.s b/src/syscall/asm_linux_arm64.s index 7edeafca81..fb22f8d547 100644 --- a/src/syscall/asm_linux_arm64.s +++ b/src/syscall/asm_linux_arm64.s @@ -103,6 +103,29 @@ ok: MOVD ZR, err+72(FP) // errno RET +// func rawVforkSyscall(trap, a1 uintptr) (r1, err uintptr) +TEXT ·rawVforkSyscall(SB),NOSPLIT,$0-32 + MOVD a1+8(FP), R0 + MOVD $0, R1 + MOVD $0, R2 + MOVD $0, R3 + MOVD $0, R4 + MOVD $0, R5 + MOVD trap+0(FP), R8 // syscall entry + SVC + CMN $4095, R0 + BCC ok + MOVD $-1, R4 + MOVD R4, r1+16(FP) // r1 + NEG R0, R0 + MOVD R0, err+24(FP) // errno + RET +ok: + MOVD R0, r1+16(FP) // r1 + MOVD ZR, err+24(FP) // errno + RET + + // func rawSyscallNoError(trap uintptr, a1, a2, a3 uintptr) (r1, r2 uintptr); TEXT ·rawSyscallNoError(SB),NOSPLIT,$0-48 MOVD a1+8(FP), R0 diff --git a/src/syscall/exec_linux.go b/src/syscall/exec_linux.go index a2242b2057..3540d511bf 100644 --- a/src/syscall/exec_linux.go +++ b/src/syscall/exec_linux.go @@ -196,7 +196,7 @@ func forkAndExecInChild1(argv0 *byte, argv, envv []*byte, chroot, dir *byte, att } } - hasRawVforkSyscall := runtime.GOARCH == "amd64" || runtime.GOARCH == "ppc64" || runtime.GOARCH == "s390x" + hasRawVforkSyscall := runtime.GOARCH == "amd64" || runtime.GOARCH == "ppc64" || runtime.GOARCH == "s390x" || runtime.GOARCH
== "arm64" // About to call fork. // No more allocation or calls of non-assembly functions. diff --git a/src/syscall/syscall_darwin.go b/src/syscall/syscall_darwin.go index 7d795ee4d3..2efcf3bea9 100644 --- a/src/syscall/syscall_darwin.go +++ b/src/syscall/syscall_darwin.go @@ -337,7 +337,6 @@ func Kill(pid int, signum Signal) (err error) { return kill(pid, int(signum), 1) //sysnb ioctlPtr(fd int, req uint, arg unsafe.Pointer) (err error) = SYS_ioctl //sysnb execve(path *byte, argv **byte, envp **byte) (err error) //sysnb exit(res int) (err error) -//sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) //sys fcntlPtr(fd int, cmd int, arg unsafe.Pointer) (val int, err error) = SYS_fcntl //sys unlinkat(fd int, path string, flags int) (err error) //sys openat(fd int, path string, flags int, perm uint32) (fdret int, err error) diff --git a/src/syscall/syscall_darwin_386.go b/src/syscall/syscall_darwin_386.go index 8c5b82da55..46714bab57 100644 --- a/src/syscall/syscall_darwin_386.go +++ b/src/syscall/syscall_darwin_386.go @@ -22,6 +22,7 @@ func setTimeval(sec, usec int64) Timeval { //sys Statfs(path string, stat *Statfs_t) (err error) = SYS_statfs64 //sys fstatat(fd int, path string, stat *Stat_t, flags int) (err error) = SYS_fstatat64 //sys ptrace(request int, pid int, addr uintptr, data uintptr) (err error) +//sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) func SetKevent(k *Kevent_t, fd, mode, flags int) { k.Ident = uint32(fd) diff --git a/src/syscall/syscall_darwin_amd64.go b/src/syscall/syscall_darwin_amd64.go index 23a4e5f996..43506e4531 100644 --- a/src/syscall/syscall_darwin_amd64.go +++ b/src/syscall/syscall_darwin_amd64.go @@ -22,6 +22,7 @@ func setTimeval(sec, usec int64) Timeval { //sys Statfs(path string, stat *Statfs_t) (err error) = SYS_statfs64 //sys fstatat(fd int, path string, stat *Stat_t, flags int) (err error) = SYS_fstatat64 //sys ptrace(request int, pid int, addr 
uintptr, data uintptr) (err error) +//sys sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) func SetKevent(k *Kevent_t, fd, mode, flags int) { k.Ident = uint64(fd) diff --git a/src/syscall/syscall_darwin_arm.go b/src/syscall/syscall_darwin_arm.go index 7f39cf4003..4ca2e300e4 100644 --- a/src/syscall/syscall_darwin_arm.go +++ b/src/syscall/syscall_darwin_arm.go @@ -29,6 +29,10 @@ func ptrace(request int, pid int, addr uintptr, data uintptr) error { return ENOTSUP } +func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) error { + return ENOTSUP +} + func SetKevent(k *Kevent_t, fd, mode, flags int) { k.Ident = uint32(fd) k.Filter = int16(mode) diff --git a/src/syscall/syscall_darwin_arm64.go b/src/syscall/syscall_darwin_arm64.go index bd110f2e7f..fde48f3596 100644 --- a/src/syscall/syscall_darwin_arm64.go +++ b/src/syscall/syscall_darwin_arm64.go @@ -29,6 +29,10 @@ func ptrace(request int, pid int, addr uintptr, data uintptr) error { return ENOTSUP } +func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) error { + return ENOTSUP +} + func SetKevent(k *Kevent_t, fd, mode, flags int) { k.Ident = uint64(fd) k.Filter = int16(mode) diff --git a/src/syscall/syscall_linux_arm64.go b/src/syscall/syscall_linux_arm64.go index 1ea48892ba..95065bfe2d 100644 --- a/src/syscall/syscall_linux_arm64.go +++ b/src/syscall/syscall_linux_arm64.go @@ -155,6 +155,4 @@ const ( SYS_EPOLL_WAIT = 1069 ) -func rawVforkSyscall(trap, a1 uintptr) (r1 uintptr, err Errno) { - panic("not implemented") -} +func rawVforkSyscall(trap, a1 uintptr) (r1 uintptr, err Errno) diff --git a/src/syscall/timestruct.go b/src/syscall/timestruct.go index d17811c121..09be22c971 100644 --- a/src/syscall/timestruct.go +++ b/src/syscall/timestruct.go @@ -8,7 +8,7 @@ package syscall // TimespecToNsec converts a Timespec value into a number of // nanoseconds since the Unix epoch. 
-func TimespecToNsec(ts Timespec) int64 { return int64(ts.Sec)*1e9 + int64(ts.Nsec) } +func TimespecToNsec(ts Timespec) int64 { return ts.Nano() } // NsecToTimespec takes a number of nanoseconds since the Unix epoch // and returns the corresponding Timespec value. @@ -24,7 +24,7 @@ func NsecToTimespec(nsec int64) Timespec { // TimevalToNsec converts a Timeval value into a number of nanoseconds // since the Unix epoch. -func TimevalToNsec(tv Timeval) int64 { return int64(tv.Sec)*1e9 + int64(tv.Usec)*1e3 } +func TimevalToNsec(tv Timeval) int64 { return tv.Nano() } // NsecToTimeval takes a number of nanoseconds since the Unix epoch // and returns the corresponding Timeval value. diff --git a/src/syscall/zsyscall_darwin_386.go b/src/syscall/zsyscall_darwin_386.go index 2c3b15f5f9..0ffc692116 100644 --- a/src/syscall/zsyscall_darwin_386.go +++ b/src/syscall/zsyscall_darwin_386.go @@ -1870,27 +1870,6 @@ func libc_exit_trampoline() // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -func libc_sysctl_trampoline() - -//go:linkname libc_sysctl libc_sysctl -//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func fcntlPtr(fd int, cmd int, arg unsafe.Pointer) (val int, err error) { r0, _, e1 := syscall(funcPC(libc_fcntl_trampoline), uintptr(fd), uintptr(cmd), uintptr(arg)) val = int(r0) @@ -2081,3 +2060,24 @@ func libc_ptrace_trampoline() //go:linkname libc_ptrace libc_ptrace //go:cgo_import_dynamic libc_ptrace 
ptrace "/usr/lib/libSystem.B.dylib" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { + var _p0 unsafe.Pointer + if len(mib) > 0 { + _p0 = unsafe.Pointer(&mib[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +func libc_sysctl_trampoline() + +//go:linkname libc_sysctl libc_sysctl +//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" diff --git a/src/syscall/zsyscall_darwin_386.s b/src/syscall/zsyscall_darwin_386.s index d84b46229e..0dec5baa22 100644 --- a/src/syscall/zsyscall_darwin_386.s +++ b/src/syscall/zsyscall_darwin_386.s @@ -229,8 +229,6 @@ TEXT ·libc_execve_trampoline(SB),NOSPLIT,$0-0 JMP libc_execve(SB) TEXT ·libc_exit_trampoline(SB),NOSPLIT,$0-0 JMP libc_exit(SB) -TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 - JMP libc_sysctl(SB) TEXT ·libc_unlinkat_trampoline(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) TEXT ·libc_openat_trampoline(SB),NOSPLIT,$0-0 @@ -251,3 +249,5 @@ TEXT ·libc_fstatat64_trampoline(SB),NOSPLIT,$0-0 JMP libc_fstatat64(SB) TEXT ·libc_ptrace_trampoline(SB),NOSPLIT,$0-0 JMP libc_ptrace(SB) +TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 + JMP libc_sysctl(SB) diff --git a/src/syscall/zsyscall_darwin_amd64.go b/src/syscall/zsyscall_darwin_amd64.go index 83214de2fb..942152ae39 100644 --- a/src/syscall/zsyscall_darwin_amd64.go +++ b/src/syscall/zsyscall_darwin_amd64.go @@ -1870,27 +1870,6 @@ func libc_exit_trampoline() // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = 
unsafe.Pointer(&_zero) - } - _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -func libc_sysctl_trampoline() - -//go:linkname libc_sysctl libc_sysctl -//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func fcntlPtr(fd int, cmd int, arg unsafe.Pointer) (val int, err error) { r0, _, e1 := syscall(funcPC(libc_fcntl_trampoline), uintptr(fd), uintptr(cmd), uintptr(arg)) val = int(r0) @@ -2081,3 +2060,24 @@ func libc_ptrace_trampoline() //go:linkname libc_ptrace libc_ptrace //go:cgo_import_dynamic libc_ptrace ptrace "/usr/lib/libSystem.B.dylib" + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + +func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { + var _p0 unsafe.Pointer + if len(mib) > 0 { + _p0 = unsafe.Pointer(&mib[0]) + } else { + _p0 = unsafe.Pointer(&_zero) + } + _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +func libc_sysctl_trampoline() + +//go:linkname libc_sysctl libc_sysctl +//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" diff --git a/src/syscall/zsyscall_darwin_amd64.s b/src/syscall/zsyscall_darwin_amd64.s index 23ddbe06c0..fcb7f9c291 100644 --- a/src/syscall/zsyscall_darwin_amd64.s +++ b/src/syscall/zsyscall_darwin_amd64.s @@ -229,8 +229,6 @@ TEXT ·libc_execve_trampoline(SB),NOSPLIT,$0-0 JMP libc_execve(SB) TEXT ·libc_exit_trampoline(SB),NOSPLIT,$0-0 JMP libc_exit(SB) -TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 - JMP libc_sysctl(SB) TEXT ·libc_unlinkat_trampoline(SB),NOSPLIT,$0-0 JMP 
libc_unlinkat(SB) TEXT ·libc_openat_trampoline(SB),NOSPLIT,$0-0 @@ -251,3 +249,5 @@ TEXT ·libc_fstatat64_trampoline(SB),NOSPLIT,$0-0 JMP libc_fstatat64(SB) TEXT ·libc_ptrace_trampoline(SB),NOSPLIT,$0-0 JMP libc_ptrace(SB) +TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 + JMP libc_sysctl(SB) diff --git a/src/syscall/zsyscall_darwin_arm.go b/src/syscall/zsyscall_darwin_arm.go index 2a643f209f..73ee205d33 100644 --- a/src/syscall/zsyscall_darwin_arm.go +++ b/src/syscall/zsyscall_darwin_arm.go @@ -1870,27 +1870,6 @@ func libc_exit_trampoline() // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -func libc_sysctl_trampoline() - -//go:linkname libc_sysctl libc_sysctl -//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func fcntlPtr(fd int, cmd int, arg unsafe.Pointer) (val int, err error) { r0, _, e1 := syscall(funcPC(libc_fcntl_trampoline), uintptr(fd), uintptr(cmd), uintptr(arg)) val = int(r0) diff --git a/src/syscall/zsyscall_darwin_arm.s b/src/syscall/zsyscall_darwin_arm.s index c7cd83d83e..6462a198b0 100644 --- a/src/syscall/zsyscall_darwin_arm.s +++ b/src/syscall/zsyscall_darwin_arm.s @@ -229,8 +229,6 @@ TEXT ·libc_execve_trampoline(SB),NOSPLIT,$0-0 JMP libc_execve(SB) TEXT ·libc_exit_trampoline(SB),NOSPLIT,$0-0 JMP libc_exit(SB) -TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 - JMP libc_sysctl(SB) TEXT ·libc_unlinkat_trampoline(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) TEXT 
·libc_openat_trampoline(SB),NOSPLIT,$0-0 diff --git a/src/syscall/zsyscall_darwin_arm64.go b/src/syscall/zsyscall_darwin_arm64.go index 0b77839869..bbe092cd07 100644 --- a/src/syscall/zsyscall_darwin_arm64.go +++ b/src/syscall/zsyscall_darwin_arm64.go @@ -1870,27 +1870,6 @@ func libc_exit_trampoline() // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT -func sysctl(mib []_C_int, old *byte, oldlen *uintptr, new *byte, newlen uintptr) (err error) { - var _p0 unsafe.Pointer - if len(mib) > 0 { - _p0 = unsafe.Pointer(&mib[0]) - } else { - _p0 = unsafe.Pointer(&_zero) - } - _, _, e1 := syscall6(funcPC(libc_sysctl_trampoline), uintptr(_p0), uintptr(len(mib)), uintptr(unsafe.Pointer(old)), uintptr(unsafe.Pointer(oldlen)), uintptr(unsafe.Pointer(new)), uintptr(newlen)) - if e1 != 0 { - err = errnoErr(e1) - } - return -} - -func libc_sysctl_trampoline() - -//go:linkname libc_sysctl libc_sysctl -//go:cgo_import_dynamic libc_sysctl sysctl "/usr/lib/libSystem.B.dylib" - -// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT - func fcntlPtr(fd int, cmd int, arg unsafe.Pointer) (val int, err error) { r0, _, e1 := syscall(funcPC(libc_fcntl_trampoline), uintptr(fd), uintptr(cmd), uintptr(arg)) val = int(r0) diff --git a/src/syscall/zsyscall_darwin_arm64.s b/src/syscall/zsyscall_darwin_arm64.s index 7b8b3764a8..ec2210f1df 100644 --- a/src/syscall/zsyscall_darwin_arm64.s +++ b/src/syscall/zsyscall_darwin_arm64.s @@ -229,8 +229,6 @@ TEXT ·libc_execve_trampoline(SB),NOSPLIT,$0-0 JMP libc_execve(SB) TEXT ·libc_exit_trampoline(SB),NOSPLIT,$0-0 JMP libc_exit(SB) -TEXT ·libc_sysctl_trampoline(SB),NOSPLIT,$0-0 - JMP libc_sysctl(SB) TEXT ·libc_unlinkat_trampoline(SB),NOSPLIT,$0-0 JMP libc_unlinkat(SB) TEXT ·libc_openat_trampoline(SB),NOSPLIT,$0-0 diff --git a/test/closure2.go b/test/closure2.go index 4d61b45d3f..e4db05d884 100644 --- a/test/closure2.go +++ b/test/closure2.go @@ -74,7 +74,7 @@ func main() { for i := range [2]int{} { if i == 0 { g = func() int { - 
return i // test that we capture by ref here, i is mutated on every interation + return i // test that we capture by ref here, i is mutated on every iteration } } } @@ -90,7 +90,7 @@ func main() { q++ g = func() int { return q // test that we capture by ref here - // q++ must on a different decldepth than q declaration + // q++ must be on a different decldepth than q declaration } } if g() != 2 { @@ -108,7 +108,7 @@ }()] = range [2]int{} { g = func() int { return q // test that we capture by ref here - // q++ must on a different decldepth than q declaration + // q++ must be on a different decldepth than q declaration } } if g() != 2 { diff --git a/test/codegen/mathbits.go b/test/codegen/mathbits.go index 9cdfe0b06a..5adf7f5fcd 100644 --- a/test/codegen/mathbits.go +++ b/test/codegen/mathbits.go @@ -557,6 +557,7 @@ func Mul(x, y uint) (hi, lo uint) { // arm64:"UMULH","MUL" // ppc64:"MULHDU","MULLD" // ppc64le:"MULHDU","MULLD" + // s390x:"MLGR" return bits.Mul(x, y) } @@ -565,6 +566,7 @@ func Mul64(x, y uint64) (hi, lo uint64) { // arm64:"UMULH","MUL" // ppc64:"MULHDU","MULLD" // ppc64le:"MULHDU","MULLD" + // s390x:"MLGR" return bits.Mul64(x, y) } diff --git a/test/codegen/memcombine.go b/test/codegen/memcombine.go index 747e23001d..d5f3af7692 100644 --- a/test/codegen/memcombine.go +++ b/test/codegen/memcombine.go @@ -57,6 +57,7 @@ func load_le16(b []byte) { // amd64:`MOVWLZX\s\(.*\),`,-`MOVB`,-`OR` // ppc64le:`MOVHZ\s`,-`MOVBZ` // arm64:`MOVHU\s\(R[0-9]+\),`,-`MOVB` + // s390x:`MOVHBR\s\(.*\),` sink16 = binary.LittleEndian.Uint16(b) } @@ -64,6 +65,7 @@ func load_le16_idx(b []byte, idx int) { // amd64:`MOVWLZX\s\(.*\),`,-`MOVB`,-`OR` // ppc64le:`MOVHZ\s`,-`MOVBZ` // arm64:`MOVHU\s\(R[0-9]+\)\(R[0-9]+\),`,-`MOVB` + // s390x:`MOVHBR\s\(.*\)\(.*\*1\),` sink16 = binary.LittleEndian.Uint16(b[idx:]) } @@ -103,6 +105,7 @@ func load_be16(b []byte) { // amd64:`ROLW\s\$8`,-`MOVB`,-`OR` // arm64:`REV16W`,`MOVHU\s\(R[0-9]+\),`,-`MOVB` // ppc64le:`MOVHBR` + //
s390x:`MOVHZ\s\(.*\),`,-`OR`,-`ORW`,-`SLD`,-`SLW` sink16 = binary.BigEndian.Uint16(b) } @@ -110,6 +113,7 @@ func load_be16_idx(b []byte, idx int) { // amd64:`ROLW\s\$8`,-`MOVB`,-`OR` // arm64:`REV16W`,`MOVHU\s\(R[0-9]+\)\(R[0-9]+\),`,-`MOVB` // ppc64le:`MOVHBR` + // s390x:`MOVHZ\s\(.*\)\(.*\*1\),`,-`OR`,-`ORW`,-`SLD`,-`SLW` sink16 = binary.BigEndian.Uint16(b[idx:]) } @@ -351,6 +355,7 @@ func store_le64(b []byte) { // amd64:`MOVQ\s.*\(.*\)$`,-`SHR.` // arm64:`MOVD`,-`MOV[WBH]` // ppc64le:`MOVD\s`,-`MOV[BHW]\s` + // s390x:`MOVDBR\s.*\(.*\)$` binary.LittleEndian.PutUint64(b, sink64) } @@ -358,6 +363,7 @@ func store_le64_idx(b []byte, idx int) { // amd64:`MOVQ\s.*\(.*\)\(.*\*1\)$`,-`SHR.` // arm64:`MOVD\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,-`MOV[BHW]` // ppc64le:`MOVD\s`,-`MOV[BHW]\s` + // s390x:`MOVDBR\s.*\(.*\)\(.*\*1\)$` binary.LittleEndian.PutUint64(b[idx:], sink64) } @@ -365,6 +371,7 @@ func store_le32(b []byte) { // amd64:`MOVL\s` // arm64:`MOVW`,-`MOV[BH]` // ppc64le:`MOVW\s` + // s390x:`MOVWBR\s.*\(.*\)$` binary.LittleEndian.PutUint32(b, sink32) } @@ -372,6 +379,7 @@ func store_le32_idx(b []byte, idx int) { // amd64:`MOVL\s` // arm64:`MOVW\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,-`MOV[BH]` // ppc64le:`MOVW\s` + // s390x:`MOVWBR\s.*\(.*\)\(.*\*1\)$` binary.LittleEndian.PutUint32(b[idx:], sink32) } @@ -379,6 +387,7 @@ func store_le16(b []byte) { // amd64:`MOVW\s` // arm64:`MOVH`,-`MOVB` // ppc64le:`MOVH\s` + // s390x:`MOVHBR\s.*\(.*\)$` binary.LittleEndian.PutUint16(b, sink16) } @@ -386,6 +395,7 @@ func store_le16_idx(b []byte, idx int) { // amd64:`MOVW\s` // arm64:`MOVH\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,-`MOVB` // ppc64le:`MOVH\s` + // s390x:`MOVHBR\s.*\(.*\)\(.*\*1\)$` binary.LittleEndian.PutUint16(b[idx:], sink16) } @@ -393,6 +403,7 @@ func store_be64(b []byte) { // amd64:`BSWAPQ`,-`SHR.` // arm64:`MOVD`,`REV`,-`MOV[WBH]`,-`REVW`,-`REV16W` // ppc64le:`MOVDBR` + // s390x:`MOVD\s.*\(.*\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint64(b, sink64) } @@ -400,6 +411,7 @@ 
func store_be64_idx(b []byte, idx int) { // amd64:`BSWAPQ`,-`SHR.` // arm64:`REV`,`MOVD\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,-`MOV[BHW]`,-`REV16W`,-`REVW` // ppc64le:`MOVDBR` + // s390x:`MOVD\s.*\(.*\)\(.*\*1\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint64(b[idx:], sink64) } @@ -407,6 +419,7 @@ func store_be32(b []byte) { // amd64:`BSWAPL`,-`SHR.` // arm64:`MOVW`,`REVW`,-`MOV[BH]`,-`REV16W` // ppc64le:`MOVWBR` + // s390x:`MOVW\s.*\(.*\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint32(b, sink32) } @@ -414,6 +427,7 @@ func store_be32_idx(b []byte, idx int) { // amd64:`BSWAPL`,-`SHR.` // arm64:`REVW`,`MOVW\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,-`MOV[BH]`,-`REV16W` // ppc64le:`MOVWBR` + // s390x:`MOVW\s.*\(.*\)\(.*\*1\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint32(b[idx:], sink32) } @@ -421,6 +435,7 @@ func store_be16(b []byte) { // amd64:`ROLW\s\$8`,-`SHR.` // arm64:`MOVH`,`REV16W`,-`MOVB` // ppc64le:`MOVHBR` + // s390x:`MOVH\s.*\(.*\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint16(b, sink16) } @@ -428,6 +443,7 @@ func store_be16_idx(b []byte, idx int) { // amd64:`ROLW\s\$8`,-`SHR.` // arm64:`MOVH\sR[0-9]+,\s\(R[0-9]+\)\(R[0-9]+\)`,`REV16W`,-`MOVB` // ppc64le:`MOVHBR` + // s390x:`MOVH\s.*\(.*\)\(.*\*1\)$`,-`SRW\s`,-`SRD\s` binary.BigEndian.PutUint16(b[idx:], sink16) } diff --git a/test/finprofiled.go b/test/finprofiled.go index 0eb801a4bd..ca7e3c81dc 100644 --- a/test/finprofiled.go +++ b/test/finprofiled.go @@ -23,7 +23,7 @@ func main() { // only for middle bytes. The finalizer resurrects that object. // As the result, all allocated memory must stay alive. const ( - N = 1 << 20 + N = 1 << 20 tinyBlockSize = 16 // runtime._TinySize ) hold := make([]*int32, 0, N) @@ -36,7 +36,7 @@ func main() { } } // Finalize as much as possible. - // Note: the sleep only increases probility of bug detection, + // Note: the sleep only increases probability of bug detection, // it cannot lead to false failure. 
for i := 0; i < 5; i++ { runtime.GC() diff --git a/test/fixedbugs/issue20185.go b/test/fixedbugs/issue20185.go index 00c23f6407..2cbb143ed0 100644 --- a/test/fixedbugs/issue20185.go +++ b/test/fixedbugs/issue20185.go @@ -19,7 +19,7 @@ func F() { const x = 1 func G() { - switch t := x.(type) { // ERROR "cannot type switch on non-interface value x \(type untyped number\)" + switch t := x.(type) { // ERROR "cannot type switch on non-interface value x \(type untyped int\)" default: } } diff --git a/test/fixedbugs/issue21979.go b/test/fixedbugs/issue21979.go new file mode 100644 index 0000000000..1c02f574c3 --- /dev/null +++ b/test/fixedbugs/issue21979.go @@ -0,0 +1,46 @@ +// errorcheck + +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package p + +func f() { + _ = bool("") // ERROR "cannot convert .. \(type untyped string\) to type bool" + _ = bool(1) // ERROR "cannot convert 1 \(type untyped int\) to type bool" + _ = bool(1.0) // ERROR "cannot convert 1 \(type untyped float\) to type bool" + _ = bool(-4 + 2i) // ERROR "cannot convert -4 \+ 2i \(type untyped complex\) to type bool" + + _ = string(true) // ERROR "cannot convert true \(type untyped bool\) to type string" + _ = string(-1) + _ = string(1.0) // ERROR "cannot convert 1 \(type untyped float\) to type string" + _ = string(-4 + 2i) // ERROR "cannot convert -4 \+ 2i \(type untyped complex\) to type string" + + _ = int("") // ERROR "cannot convert .. \(type untyped string\) to type int" + _ = int(true) // ERROR "cannot convert true \(type untyped bool\) to type int" + _ = int(-1) + _ = int(1) + _ = int(1.0) + _ = int(-4 + 2i) // ERROR "truncated to integer" + + _ = uint("") // ERROR "cannot convert .. 
\(type untyped string\) to type uint" + _ = uint(true) // ERROR "cannot convert true \(type untyped bool\) to type uint" + _ = uint(-1) // ERROR "constant -1 overflows uint" + _ = uint(1) + _ = uint(1.0) + _ = uint(-4 + 2i) // ERROR "constant -4 overflows uint" "truncated to integer" + + _ = float64("") // ERROR "cannot convert .. \(type untyped string\) to type float64" + _ = float64(true) // ERROR "cannot convert true \(type untyped bool\) to type float64" + _ = float64(-1) + _ = float64(1) + _ = float64(1.0) + _ = float64(-4 + 2i) // ERROR "truncated to real" + + _ = complex128("") // ERROR "cannot convert .. \(type untyped string\) to type complex128" + _ = complex128(true) // ERROR "cannot convert true \(type untyped bool\) to type complex128" + _ = complex128(-1) + _ = complex128(1) + _ = complex128(1.0) +} diff --git a/test/fixedbugs/issue22344.go b/test/fixedbugs/issue22344.go new file mode 100644 index 0000000000..9f2a6e8642 --- /dev/null +++ b/test/fixedbugs/issue22344.go @@ -0,0 +1,83 @@ +// compile + +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +// Test iota inside a function in a ConstSpec is accepted +package main + +import ( + "unsafe" +) + +// iotas are usable inside closures in constant declarations (#22345) +const ( + _ = iota + _ = len([iota]byte{}) + _ = unsafe.Sizeof(iota) + _ = unsafe.Sizeof(func() { _ = iota }) + _ = unsafe.Sizeof(func() { var _ = iota }) + _ = unsafe.Sizeof(func() { const _ = iota }) + _ = unsafe.Sizeof(func() { type _ [iota]byte }) + _ = unsafe.Sizeof(func() { func() int { return iota }() }) +) + +// verify inner and outer const declarations have distinct iotas +const ( + zero = iota + one = iota + _ = unsafe.Sizeof(func() { + var x [iota]int // [2]int + var y [iota]int // [2]int + const ( + Zero = iota + One + Two + _ = unsafe.Sizeof([iota - 1]int{} == x) // assert types are equal + _ = unsafe.Sizeof([iota - 2]int{} == y) // assert types are equal + _ = unsafe.Sizeof([Two]int{} == x) // assert types are equal + ) + var z [iota]int // [2]int + _ = unsafe.Sizeof([2]int{} == z) // assert types are equal + }) + three = iota // the sequence continues +) + +var _ [three]int = [3]int{} // assert 'three' has correct value + +func main() { + + const ( + _ = iota + _ = len([iota]byte{}) + _ = unsafe.Sizeof(iota) + _ = unsafe.Sizeof(func() { _ = iota }) + _ = unsafe.Sizeof(func() { var _ = iota }) + _ = unsafe.Sizeof(func() { const _ = iota }) + _ = unsafe.Sizeof(func() { type _ [iota]byte }) + _ = unsafe.Sizeof(func() { func() int { return iota }() }) + ) + + const ( + zero = iota + one = iota + _ = unsafe.Sizeof(func() { + var x [iota]int // [2]int + var y [iota]int // [2]int + const ( + Zero = iota + One + Two + _ = unsafe.Sizeof([iota - 1]int{} == x) // assert types are equal + _ = unsafe.Sizeof([iota - 2]int{} == y) // assert types are equal + _ = unsafe.Sizeof([Two]int{} == x) // assert types are equal + ) + var z [iota]int // [2]int + _ = unsafe.Sizeof([2]int{} == z) // assert types are equal + }) + three = iota // the sequence continues + ) + + var _ [three]int = [3]int{} // 
assert 'three' has correct value +} diff --git a/test/fixedbugs/issue24339.go b/test/fixedbugs/issue24339.go index 0670becdfe..502c575ec8 100644 --- a/test/fixedbugs/issue24339.go +++ b/test/fixedbugs/issue24339.go @@ -6,7 +6,7 @@ package p -// Use a diffent line number for each token so we can +// Use a different line number for each token so we can // check that the error message appears at the correct // position. var _ = struct{}{ /*line :20:1*/foo /*line :21:1*/: /*line :22:1*/0 } diff --git a/test/fixedbugs/issue7310.go b/test/fixedbugs/issue7310.go index 5ae0f1f528..6829d5e126 100644 --- a/test/fixedbugs/issue7310.go +++ b/test/fixedbugs/issue7310.go @@ -11,5 +11,5 @@ package main func main() { _ = copy(nil, []int{}) // ERROR "use of untyped nil" _ = copy([]int{}, nil) // ERROR "use of untyped nil" - _ = 1 + true // ERROR "mismatched types untyped number and untyped bool" + _ = 1 + true // ERROR "mismatched types untyped int and untyped bool" } diff --git a/test/index.go b/test/index.go index d73d137dda..91195ad632 100644 --- a/test/index.go +++ b/test/index.go @@ -251,7 +251,7 @@ func main() { if c == "" && (i == "fgood" || i == "fbad") { return } - // Integral float constat is ok. + // Integral float constant is ok. if c == "c" && n == "" && i == "fgood" { if pass == 0 { fmt.Fprintf(b, "\tuse(%s[%s])\n", pae, cni) diff --git a/test/ken/modconst.go b/test/ken/modconst.go index d88cf10032..c27bf64bdf 100644 --- a/test/ken/modconst.go +++ b/test/ken/modconst.go @@ -4,7 +4,7 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. -// Test integer modulus by contstants. +// Test integer modulus by constants. 
package main diff --git a/test/opt_branchlikely.go b/test/opt_branchlikely.go index 84de32179f..884c34916e 100644 --- a/test/opt_branchlikely.go +++ b/test/opt_branchlikely.go @@ -1,6 +1,6 @@ // +build amd64 // errorcheck -0 -d=ssa/likelyadjust/debug=1,ssa/insert_resched_checks/off -// rescheduling check insertion is turend off because the inserted conditional branches perturb the errorcheck +// rescheduling check insertion is turned off because the inserted conditional branches perturb the errorcheck // Copyright 2016 The Go Authors. All rights reserved. // Use of this source code is governed by a BSD-style diff --git a/test/strength.go b/test/strength.go index 94d589c240..823d05af08 100644 --- a/test/strength.go +++ b/test/strength.go @@ -5,7 +5,7 @@ // license that can be found in the LICENSE file. // Generate test of strength reduction for multiplications -// with contstants. Especially useful for amd64/386. +// with constants. Especially useful for amd64/386. package main diff --git a/test/typeswitch2.go b/test/typeswitch2.go index 5958b7db8e..62c96c8330 100644 --- a/test/typeswitch2.go +++ b/test/typeswitch2.go @@ -35,13 +35,3 @@ func whatis(x interface{}) string { } return "" } - -func notused(x interface{}) { - // The first t is in a different scope than the 2nd t; it cannot - // be accessed (=> declared and not used error); but it is legal - // to declare it. - switch t := 0; t := x.(type) { // ERROR "declared and not used" - case int: - _ = t // this is using the t of "t := x.(type)" - } -} diff --git a/test/typeswitch2b.go b/test/typeswitch2b.go new file mode 100644 index 0000000000..135ae86cff --- /dev/null +++ b/test/typeswitch2b.go @@ -0,0 +1,20 @@ +// errorcheck + +// Copyright 2019 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +// Verify that various erroneous type switches are caught by the compiler. +// Does not compile. 
+ +package main + +func notused(x interface{}) { + // The first t is in a different scope than the 2nd t; it cannot + // be accessed (=> declared and not used error); but it is legal + // to declare it. + switch t := 0; t := x.(type) { // ERROR "declared and not used" + case int: + _ = t // this is using the t of "t := x.(type)" + } +} diff --git a/test/uintptrescapes2.go b/test/uintptrescapes2.go index b8117b857b..866efd94d8 100644 --- a/test/uintptrescapes2.go +++ b/test/uintptrescapes2.go @@ -18,7 +18,7 @@ func F1(a uintptr) {} // ERROR "escaping uintptr" //go:uintptrescapes //go:noinline -func F2(a ...uintptr) {} // ERROR "escaping ...uintptr" "a does not escape" +func F2(a ...uintptr) {} // ERROR "escaping ...uintptr" //go:uintptrescapes //go:noinline