Improve testing and tracking of performance critical components #1688
Rendering performance has regressed by over 100% since 3.3.0 (#1677); we should improve how we test and track this. I'd love to hear from people on how we could go about doing this in a good way, but this is what I think we want:

Comments
We need an APM for this; let's stall xterm.js development for 3 years and make the performance tooling first (or gather some bucks and at least 20 highly skilled C++/JS developers to get the job done). 😆

There are a few profiling tools that will cover parts of your list with reliable results (esp. components that could be tested in Node.js with a high-resolution synchronous timer). As soon as the browser engine gets involved, we are stuck with the nerfed timer due to Spectre. Since in the end all that counts is the user-perceived performance, the latter is still testable by doing "full runs" with typical actions (like my current …).

Since you wrote this issue from the canvas perf regression perspective: imho this is even more tricky to test in a reliable manner, since it heavily relies on system specifics like the OS and installed GPU, and might even be driver-version dependent. Under such circumstances a "once and for all" optimal solution does not exist.

TL;DR
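As a concrete illustration of the Node.js side of this, a minimal sketch of a synchronous micro-benchmark on top of `process.hrtime.bigint()` could look like the following (the `fn` callback, run count and median reporting are illustrative choices, not anything prescribed in this thread):

```ts
// Minimal sketch: time a synchronous operation in Node.js with the
// high-resolution timer (nanosecond precision, not throttled the way
// browser timers are post-Spectre).
function medianRuntimeMs(fn: () => void, runs: number = 10): number {
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    fn();
    samples.push(Number(process.hrtime.bigint() - start) / 1e6); // ns -> ms
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median resists outliers
}
```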
Edit: This might come in handy: https://github.com/ChromeDevTools/timeline-viewer. It even has a compare mode.

Edit2: For in-browser tests we can use https://github.com/paulirish/automated-chrome-profiling. With this we can run test cases in Chrome and grab the profiling data. From there it's only a small step to some dashboard thingy tracking changes over time. To get something like this running, we will need decent cloud storage (the profile data tend to get really big).
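For reference, a hedged sketch of what grabbing such profiling data could look like with Puppeteer's tracing API; the demo URL, the typed command and the wait time are placeholder assumptions, and the resulting trace.json is the kind of file timeline-viewer can load and compare:

```ts
import puppeteer from 'puppeteer';

// Sketch: drive a scripted terminal session and record a Chrome trace.
async function captureTrace(): Promise<void> {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:3000');          // assumed local xterm.js demo
  await page.tracing.start({ path: 'trace.json' });  // start DevTools tracing
  await page.keyboard.type('cat big_output.txt\n');  // placeholder "typical action"
  await new Promise(r => setTimeout(r, 5000));       // let the output settle
  await page.tracing.stop();                         // writes trace.json
  await browser.close();
}

captureTrace();
```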
Here is a proof-of-concept perf tool that gets the timeline data from Chrome: https://github.com/jerch/perf-test. To run it, edit the options in …
Current plan:
@Tyriar
Offtopic: I already found a rather big perf regression in the parser. Remember those numbers here: #1399 (comment)? print has dropped to 50 MB/s 😱. Others also dropped, but only slightly. Not sure yet what causes it; imho there were only small fixes done to the code after those numbers.

Which leads to a more on-topic question: I have those benchmark data files and scripts from the parser (also used them to get the numbers here: #1731 (comment)), and I think we can use those for some first systematic perf regression testing. But where to put them? Into xterm-benchmark? Some subfolder in xterm.js for now, until we get xterm-benchmark properly set up and integrated? What are your plans with xterm-benchmark? To get the ball rolling, a few ideas from my side:
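As a rough sketch of what such a systematic regression check could look like, here is a throughput gate against a stored baseline; the parse callback, the data file path and the baseline number are placeholders, not the actual benchmark scripts mentioned above:

```ts
import { readFileSync } from 'fs';
import { strict as assert } from 'assert';

// Measure throughput in MB/s for a synchronous parse-like entry point,
// taking the best of several runs to approximate peak throughput.
function throughputMBs(parse: (data: string) => void, data: string, runs = 5): number {
  let best = 0;
  for (let i = 0; i < runs; i++) {
    const start = process.hrtime.bigint();
    parse(data);
    const seconds = Number(process.hrtime.bigint() - start) / 1e9;
    best = Math.max(best, data.length / (1024 * 1024) / seconds);
  }
  return best;
}

// Usage sketch: fail CI if throughput drops below 80% of a recorded baseline.
const data = readFileSync('benchmark_data/print.txt', 'utf8');  // placeholder path
const mbs = throughputMBs(d => { /* feed d into the parser under test */ }, data);
assert.ok(mbs > 0.8 * 50, `print throughput regressed: ${mbs.toFixed(1)} MB/s`);
```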
Made some progress: https://github.com/xtermjs/xterm-benchmark
There are a few early xterm.js tests. Those are currently hardlinked against an existing xterm.js checkout (just check out the repo next to the xterm.js repo folder).
For more, see the README: https://github.com/xtermjs/xterm-benchmark/blob/master/README.md
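To give an idea of the intended shape, here is a hedged sketch of what a perf case in that repo could look like; the import shape and names like `perfContext` and `RuntimeCase` are assumptions on my part, so take the actual API from the README rather than from this sketch:

```ts
import { perfContext, RuntimeCase } from 'xterm-benchmark';  // assumed import shape

// Hypothetical perf case measuring a stand-in workload; swap the body
// for a real xterm.js operation once the repos are properly linked.
perfContext('parser throughput', () => {
  const data = 'x'.repeat(1024 * 1024);
  new RuntimeCase('1 MB plain text', () => {
    let checksum = 0;
    for (let i = 0; i < data.length; i++) checksum += data.charCodeAt(i);
    return checksum;  // keep the loop from being optimized away
  })
    .showRuntime()
    .showAverageRuntime();
});
```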
We've done lots of work on this and can now run benchmarks via npm scripts.