
improve SuperSleepUntil implementation #198

Merged: 3 commits from sleep_tweak into trzy:master, Sep 30, 2024

Conversation

toxieainc
Contributor

  1. As found in lots of experiments done for the VPX and PinMAME projects, Sleep() on Windows can oversleep by more than 1ms, especially when calling Sleep() with values greater than 1.
  2. Therefore, loop Sleep(1) and stop once less than 2ms remain.
  3. In the spin loop that waits out the rest of the time, insert a yield (= _mm_pause), or the ARM64 equivalent as determined by the Rust developers (see the sketch below).

This actually gets rid of micro-stutter on my AMD-based mini PC, e.g. in Daytona 2 (see #178).

Then also use the same Sleep() implementation in the network code (to avoid potential side effects between the two implementations).
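
A minimal sketch of the two-phase wait described above, assuming SDL2 on an x86/x64 build (names are illustrative, not the PR's actual code; the ARM64 pause variant is discussed further down the thread):

```cpp
#include <SDL.h>
#include <immintrin.h>  // _mm_pause (x86/x64)

// Sleep until 'target', given in SDL performance-counter ticks.
static void SuperSleepUntil(Uint64 target)
{
    const Uint64 twoMs = (2 * SDL_GetPerformanceFrequency()) / 1000;

    // Coarse phase: Sleep(>1) can oversleep by well over 1ms on Windows,
    // so only ever sleep 1ms at a time, and stop once less than 2ms remain.
    while (SDL_GetPerformanceCounter() + twoMs < target)
        SDL_Delay(1);

    // Fine phase: spin for the remainder, hinting the CPU on each iteration.
    while (SDL_GetPerformanceCounter() < target)
        _mm_pause();
}
```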

@toxieainc
Contributor Author

See our battle-tested implementation in PinMAME: https://github.com/vpinball/pinmame/blob/master/src/windows/ticker.c#L253

@dukeeeey
Collaborator

I'd much prefer a solution based upon SDL if possible, even if it's slightly suboptimal for different platforms. The original code does assume that Windows Sleep() isn't that accurate; that's why it subtracts 1ms from the sleep time and then just spins for the rest of the time. But I guess it's possible that's not quite enough, and it still goes slightly over the time.

The network code really doesn't need any kind of spin lock at all; std::this_thread::sleep_for(16ms) really is enough. The code could be changed to sleep for a full second and it would still be fine.
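
For reference, that plain blocking sleep needs nothing more than the chrono literal (the function name here is hypothetical, just to frame the call):

```cpp
#include <chrono>
#include <thread>

using namespace std::chrono_literals;  // enables the 16ms duration literal

// Network thread pacing: a plain blocking sleep is all that's needed here.
void NetworkTick()
{
    std::this_thread::sleep_for(16ms);  // ~60 updates per second; no spinning
}
```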

@toxieainc
Contributor Author

It is still all SDL-based, including the network code (CThread::Sleep maps to SDL_Delay, so no spin there).
It's just the additional pausing of the processor in the spin loop that is platform-dependent; it helps with hyper-threading, etc. To my knowledge there is no real equivalent in SDL or std.
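
For illustration, such a platform-dependent pause is typically wrapped in a small macro like the following (the macro name is hypothetical, and the ARM64 `isb` choice is an assumption based on the Rust spin-loop hint referenced above):

```cpp
// Hypothetical portable wrapper for the spin-loop pause hint (not the PR's code).
#if defined(__i386__) || defined(__x86_64__) || defined(_M_IX86) || defined(_M_X64)
  #include <immintrin.h>
  #define CPU_PAUSE() _mm_pause()                          // x86 PAUSE instruction
#elif defined(__aarch64__)
  // Assumed ARM64 equivalent, per the Rust developers' spin-loop finding.
  #define CPU_PAUSE() __asm__ __volatile__("isb" ::: "memory")
#else
  #define CPU_PAUSE() ((void)0)                            // no-op fallback
#endif
```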

@toxieainc
Contributor Author

SDL3 will apparently add one, though: https://wiki.libsdl.org/SDL3/SDL_CPUPauseInstruction

@toxieainc
Contributor Author

Turns out that this was also added to SDL2 later on, just not documented.
So the code is now all SDL, and the PR now just avoids the potentially over-1ms-too-long Sleep (some outliers oversleep even more if Sleep() is called with values greater than 1, hence the Sleep(1) loop) and adds the recommended CPU pause in the spin loop.
(It also replaces the chrono sleep with the SDL Sleep/Delay.)
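
With that, the spin phase can presumably lean on SDL directly, along these lines (a sketch under the assumption of a sufficiently new SDL2; names are illustrative):

```cpp
#include <SDL.h>

// Sketch: spin phase using SDL's pause macro (present but undocumented in SDL2).
static void SpinUntil(Uint64 target)  // target in performance-counter ticks
{
    while (SDL_GetPerformanceCounter() < target)
        SDL_CPUPauseInstruction();  // expands to the CPU's pause/yield hint
}
```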

@toxieainc
Contributor Author

And as said, it fixes a real-life issue on my mini PC, where with these changes some VERY noticeable micro-stutter vanishes.

@dukeeeey
Collaborator

SDL_CPUPauseInstruction is a nice solution. I think I need to update my SDL though, because it's missing from my version.

@sgpowers

What SDL2 version does this function "un-documentedly" appear in?

This will potentially break the ability of some distributions to upgrade Supermodel, particularly on Linux.

@toxieainc
Contributor Author

@sgpowers I tweaked it now so that this part is optional. It will function like it did before if SDL2 is too old.
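
Since SDL_CPUPauseInstruction is a preprocessor macro, the optional use can presumably be guarded at compile time, something like this (a sketch, not the PR's exact diff):

```cpp
#include <SDL.h>

// Spin phase with an optional pause hint: on older SDL2 headers that lack
// the macro, this compiles to the plain spin loop used before this PR.
static void SpinUntil(Uint64 target)  // target in performance-counter ticks
{
    while (SDL_GetPerformanceCounter() < target)
    {
#ifdef SDL_CPUPauseInstruction
        SDL_CPUPauseInstruction();  // newer SDL2 / SDL3 provide this macro
#endif
    }
}
```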

@dukeeeey dukeeeey merged commit 161f1e3 into trzy:master Sep 30, 2024
@toxieainc toxieainc deleted the sleep_tweak branch September 30, 2024 22:18