A new tail-calling interpreter for significantly better interpreter performance #128563
Comments
Can you show what a typical tail-calling sequence looks like? Does it combine tail-calling with a computed goto as in this example from the protobuf blog post? `MUSTTAIL return op_table[op](ARGS);`
Mark gives a pretty good example here: faster-cpython/ideas#642 (comment)
Neat, thank you!
How much performance is attributed to `preserve_none`?
@diegorusso I did some experiments on WASI and Emscripten (which do not support `preserve_none`) …
FYI, from the Clang docs: …
@WolframAlph I think that doesn't matter, because we're only using this on …
Anyway, I pinged the GCC team at Arm and a ticket has been created to implement `preserve_none`.
This is what I was expecting. The tail call by itself is not enough (I was actually expecting performance similar to computed goto); you need `preserve_none` as well. Have you tested it on AArch64 too?
Not yet. I want to test it on the Faster CPython build bot for macOS (that has clang-19, so it's a fair comparison), but I do not have access to it. If you could run some benchmarks for this I would be really grateful! If you want a quick-and-dirty check that it's working, try just running the pystones benchmark. I got a 25% speedup with tail calls plus LTO and PGO (make sure you enable those, because they contribute about half the perf win for some reason). https://gist.github.com/Fidget-Spinner/e7bf204bf605680b0fc1540fe3777acf And pass …
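A build recipe along those lines might look like the following sketch. The configure flags are standard CPython options, but the exact steps for the tail-call branch are assumptions, so treat this as a starting point rather than the author's actual recipe:

```shell
# Hypothetical build sketch: clang-19 with PGO (--enable-optimizations) and ThinLTO.
git clone --branch tail-call https://github.com/Fidget-Spinner/cpython
cd cpython
CC=clang-19 ./configure --enable-optimizations --with-lto=thin
make -j"$(nproc)"
# The branch may require an extra configure switch for tail calls (not shown).
./python bm_pystones.py   # benchmark script from the gist linked above
```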
@Fidget-Spinner If I understand correctly, the whole trick is: …

Do I get it right?
Yeah. I also suspect most of the speedup exists because the current interpreter loop is too big to optimize properly, so all the pre-existing compilers perform not so well on it. For example, PGO gives this roughly another 10% speedup over just -O3, and LTO roughly another 10% over PGO and -O3. Normally, if we compare equally, PGO and LTO should optimize both the old interpreter and the new one similarly, but I guess the new interpreter is easier to optimize, so it produces better-quality code.
Makes sense. By splitting cases into functions, the compiler can optimize each of them better individually rather than as one giant chunk, I assume. The same was mentioned in the protobuf article you linked.
Here are the results cross-checking bm_pystones with @Fidget-Spinner on macOS AArch64:

- Baseline: 2228e92
- Tail-calling: https://github.com/Fidget-Spinner/cpython/tree/tail-call

(The Configure/Result details for each were in collapsed sections.)

cc @diegorusso
According to Donghee's comment above, pystones is nearly 50% faster on macOS AArch64, versus 25% faster on my Ubuntu AMD64 machine. I suspect the difference is because AArch64 has more registers; however, who knows at this point :)
Reading https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118328: I have not tested on GCC trunk, so the claim that perf is bad there is pure speculation on my part. I can try GCC after the PR lands and we can test it from there. However, testing clang with just musttail and no preserve_none, the performance was quite bad.
EDIT: Here's a new link that has macOS/arm64 and Linux x86_64 results: 14.5% faster on arm, 8% faster on x86_64. Linux ARM64 results should show up at the same URL in about 3 hours: https://github.com/faster-cpython/benchmarking-public/tree/main/results/bm-20250107-3.14.0a3+-f1d3190-CLANG
I'm confident x86-64 perf can be pushed even further than the 7.5% by freeing up more call registers. However, the first PR will be just a straightforward port, to minimize bugs.
Hey @Fidget-Spinner! I've been following your work on the interpreter improvement, and I want to say thank you for giving it a shot (to you and to all the people whose work preceded yours)! I wanted to bring to your attention a small discrepancy I noticed in the benchmark results in the Faster CPython project's public repo, across the Arminc AArch64, Darwin ARM64, and Linux x86_64 results. There are also some other slowdowns in individual benchmarks, most of them GC or Python startup. I'm sorry if these were already on your radar; I just wanted to note them down, especially that 310.61x-slower benchmark, which I rather hope is just a fluke. Again, thank you so much for your work, and to the entire CPython team!
@davfsa Thanks for bringing those up. The other (rare) benchmarks that slowed down are in the 1-2% range, which usually means noise, so I'm ignoring those. The same techniques used here can potentially be applied to marshal, pickle, and regex to speed up CPython startup and regex as well, so I'm not worried.
Thanks for bringing this up @davfsa! I hope not to seem argumentative by showing my own results; I genuinely would like to understand how you are arriving at yours, so we can reproduce and get to the bottom of it if something is going on here. Are you comparing @Fidget-Spinner's branch to its merge base (where it branched off from main), or to some other, older version of Python? With my own measurements, when comparing to the merge base, pyperf doesn't find any statistically significant difference (see for example). One thing we know about `bench_mp_pool` is that its timings vary wildly from run to run: …
As you can see, there is so much variation (on the order of 12x between min and max!) that it's pretty hard to draw any conclusions from this benchmark. It does make me wonder why `bench_mp_pool` behaves this way.
Hey @mdboom! I was just looking at the results from the link you sent in your comment above (https://github.com/faster-cpython/benchmarking-public/tree/main/results%2Fbm-20250107-3.14.0a3%2B-f1d3190-CLANG) and was scrolling through the results. I haven't run the benchmarks myself; I was just looking at the numbers 🙃
Got it. Yeah, I don't routinely look at the ones against older baselines since they don't really show the effect of the specific change at hand. It looks like we have faster-cpython/benchmarking-public#43 (comment) to track the (unrelated) issue with bench_mp_pool so that should hopefully get looked at. |
There are now Windows results available as well. Note this compares clang-19 with and without tail calls, not clang-19 against the default MSVC (that comparison is also important, but coming soon). I'm seeing a healthy speedup of 10.8% on x86_64, but a slowdown of 14.4% on x86 (32-bit); I suspect that is not surprising. Also note these Windows/Clang builds don't use PGO, since the Windows build scripts don't currently support that configuration.
Btw, I've found the tail-call interpreter to be a much better debugging experience than the default one. There are two QOL improvements: …
PR up at #128718 |
This is likely because Clang doesn't support `preserve_none` on 32-bit x86.
Another cool benefit is that …
FWIW, … Edit: not a bug, intended behavior.
I benchmarked this on GCC and saw a 10% improvement in pystones at -O3. The improvement might be larger if we enable PGO, but that's currently broken with …
Before we implement the tailcalling interpreter, there are a few preparatory code changes I think we should make.
Add new frame owner type for interpreter entry frames (#129078)
Feature or enhancement
Proposal
Experimental branch: main...Fidget-Spinner:cpython:tail-call
Prior discussion at: faster-cpython/ideas#642
I propose adding a tail-calling interpreter to CPython for significantly better performance on compilers that support it.
This idea is not new, and has been implemented by other projects, including the protobuf parsers described in the blog post linked in the comments above: …
CPython currently has a few interpreters: the plain switch-based interpreter (used on MSVC), the computed-goto interpreter (used on GCC and Clang), and the tier 2 micro-op interpreter. The tail-calling interpreter will be the 4th, coexisting with the rest. This means no compatibility concerns.
Performance
My preliminary benchmarks suggest excellent performance improvements: a 10% geometric-mean speedup on pyperformance, with up to 40% speedup in Python-heavy benchmarks: https://gist.github.com/Fidget-Spinner/497c664eef389622d146d632990b0d21. These benchmarks compared clang-19 builds of both main and my branch, with ThinLTO and PGO, on AMD64 Ubuntu 22.04. PGO seems especially crucial for the speedups, based on my testing. For those outside of CPython development: a 10% speedup is roughly equal to two minor CPython releases' worth of improvements. For example, CPython 3.12 sped up by roughly 5%.
The speedup is so significant that if accepted, the new interpreter will be faster than the current JIT compiler.
Drawbacks
I will address maintainability by using the interpreter generator introduced in CPython 3.12. The generator will let us automatically produce most of the infrastructure needed for this change. Preliminary estimates suggest it will be only about 200 lines of Python, most of which is conceptually shared with the other generators.
Portability will fix itself over time (see the next section).
Portability and Precedent
At the moment, this is only supported by clang-19 for AArch64 and AMD64, with partial support on clang-18 and gcc-next, but likely bad performance on those. The reason is that we need both the `__attribute__((musttail))` and `__attribute__((preserve_none))` attributes for good performance. GCC only has `gnu::musttail` but not `preserve_none`.

There is prior precedent for adding compiler-specific optimizations to CPython. See for example the original computed-goto issue by Antoine Pitrou: https://bugs.python.org/issue4753. At the time, computed gotos were a new feature only on GCC and not on Clang, but we still added them anyway; a few years later, Clang introduced the feature too. The key point: GCC will likely eventually catch up and add these attributes.
EDIT: Softened the claim to say bad performance on GCC is only likely. Reading https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118328, I have not tested on GCC trunk, so the bad-perf claim there is pure speculation on my part. I can try GCC after the PR lands and we can test it from there. However, testing clang with just `musttail` and no `preserve_none`, the performance was quite bad.

Implementation plan
- … `_PyEval_EvalFrameDefault` …
- … `DEOPT_IF/EXIT_IF` …

Worries about new bugs
Computed goto is well-tested, so worrying about the new interpreter being buggy is fair.
I doubt logic bugs will be the primary source, as we are using the interpreter generator. This means the base interpreter and the new one share common code: if the new one has a logic bug, the base interpreter likely has it too.
The other concern is compiler bugs. However, to allay such fears, I point out that the GHC calling convention (the mechanism behind `preserve_none`) has been around for 5 years [1], and `musttail` has been around for almost 4 years [2].

cc @pitrou as the original implementer of computed gotos, and @markshannon
Future Use
Kumar Aditya pointed out this could be used in regex and pickle as well. Likewise, Neil Schemenauer pointed out marshal and pickle might benefit from this for faster Python startup.
Has this already been discussed elsewhere?
https://discuss.python.org/t/a-new-tail-calling-interpreter-for-significantly-better-interpreter-performance/76315
Links to previous discussion of this feature:
No response
Linked PRs