Consolidate flushes #3785
base: master
Conversation
(#3393)
Any pros/cons compared to Janmm14's (#3393)? One pro, obviously, is that this is much simpler.
The functionality should be the same. I don't see any cons, and other than it being simpler, no pros. The only functional difference is that I disabled consolidating while the player might be in the config state, to ensure server switches are as fast as possible. (But maybe this could be removed.)
Nah, that sounds like a good idea
Good idea to remove it? Or to have it disabled while in the config state? xd
No, keep it as is
Alright
Here is a direct download link to the compiled binary with this patch applied, if someone is interested in testing it.
@andreasdc would you like to test it?
// ensure netty context for fewer context schedules
// by default we are mostly in netty context anyway, but plugins can use ProxiedPlayer.unsafe().sendPacket()
// in non netty context
if ( !ch.eventLoop().inEventLoop() )
{
    ch.eventLoop().execute( () -> write( packet ) );
    return;
}
Would this then be best placed in the impl of ProxiedPlayer.Unsafe.sendPacket?
If I remove it there, I need to put it in multiple places: ServerConnection.unsafe.sendPacket, InitialHandler.unsafe.sendPacket and ProxiedPlayer.unsafe.sendPacket.
It was just the quickest way to do it there for me, because I thought there is no disadvantage to putting it there?
Why do you think it shouldn't be there?
if ( !closed )
{
    DefinedPacket defined = null;
    if ( packet instanceof PacketWrapper )
    {
        PacketWrapper wrapper = (PacketWrapper) packet;
        wrapper.setReleased( true );
-       ch.writeAndFlush( wrapper.buf, ch.voidPromise() );
+       ch.write( wrapper.buf, ch.voidPromise() );
I would put the signalFlush call directly after both write calls instead of putting it below.
Or is there something I didn't notice?
Or maybe add a write0 method in ChannelWrapper taking care of potentially flushing?
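For illustration, a minimal sketch of what such a write0 helper in ChannelWrapper could look like; the queuedWrites counter, the CONSOLIDATE_THRESHOLD constant and the standalone class are assumptions for this sketch, not code from the actual patch:

import io.netty.channel.Channel;

// hypothetical sketch of the write0 idea: write without flushing,
// then flush once a threshold of queued packets is reached
public class ChannelWrapperSketch
{
    private static final int CONSOLIDATE_THRESHOLD = 20; // assumed default

    private final Channel ch;
    private int queuedWrites;

    public ChannelWrapperSketch(Channel ch)
    {
        this.ch = ch;
    }

    private void write0(Object msg)
    {
        ch.write( msg, ch.voidPromise() );
        // flush once enough packets have been queued so buffers don't pile up
        if ( ++queuedWrites >= CONSOLIDATE_THRESHOLD )
        {
            queuedWrites = 0;
            ch.flush();
        }
    }
}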
Why?
signalFlush is guaranteed to be called at the end of the method. Only if the encoder protocol is changed does it forceFlush and return.
Looking at my impl, copied from netty, we are not flushing on disconnect/close/exception/writability change/handlerRemoved here. I think these should be added.
Also I think a value of 20 is kinda low. We might not even fill up a whole TCP packet of 1500 bytes with 20 of the most common packets, which are very small (delta move-rot <= 14 / single block change <= 13 / block destroy stage <= 14 / animation <= 5 / exp orb <= 31 / statistic <= 15). As we flush anyway once reading from the other connection is temporarily finished, the counting for flushes during read only serves to free up internal buffers of the connection. Therefore it might be more beneficial to use packet size instead of packet amount for this.
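To illustrate the size-based idea, a hedged sketch that counts buffered bytes instead of packets; the 1460-byte threshold (roughly one TCP segment payload) and all names here are assumptions:

import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;

// sketch: flush based on accumulated bytes rather than packet count,
// so 20 tiny packets don't trigger a flush long before a TCP segment is full
public class SizeBasedFlusher
{
    // ~one Ethernet MTU minus IP/TCP headers; an assumed value, not measured
    private static final int FLUSH_THRESHOLD_BYTES = 1460;

    private final Channel ch;
    private int pendingBytes;

    public SizeBasedFlusher(Channel ch)
    {
        this.ch = ch;
    }

    public void write(ByteBuf buf)
    {
        pendingBytes += buf.readableBytes();
        ch.write( buf, ch.voidPromise() );
        if ( pendingBytes >= FLUSH_THRESHOLD_BYTES )
        {
            pendingBytes = 0;
            ch.flush();
        }
    }
}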
I used 20 because you also did.
As far as I know we don't need to flush on close, exception or disconnect. But we should flush on writability change. Also, bungee never removes anything from the pipeline at the time I enable consolidation, so handlerRemoved is also not required, I guess.
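For reference, flushing on a writability change in a Netty handler looks roughly like this (the same pattern Netty's FlushConsolidationHandler uses); the handler class here is just a sketch:

import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;

// sketch of a handler reacting to writability changes
public class WritabilityFlushHandler extends ChannelDuplexHandler
{
    @Override
    public void channelWritabilityChanged( ChannelHandlerContext ctx ) throws Exception
    {
        // when the channel becomes non-writable, flush pending data so the
        // outbound buffer can drain and the channel can become writable again
        if ( !ctx.channel().isWritable() )
        {
            ctx.flush();
        }
        ctx.fireChannelWritabilityChanged();
    }
}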
Isn't there a risk of higher latency if we do so? It's very important to me that we ensure no latency increase (20 flushes down to 1 flush is already a big performance increase).
Oh, well, that number was pulled out of thin air (I thought about it only for a very short time). Should've done better at the time. Looking at the common packet sizes I mentioned, it might not be optimal.
I would guess that would only happen if there is some large chunk packet which we are now waiting on to get processed with the new threshold, keeping smaller packets from before from being sent. In general we're both just doing a bunch of guesswork; some measurements are needed, e.g. on write, save nanoTime into a long[22] array (I think threshold 20 leads to up to 21 packets until flush, + 1 spare if I messed up), compare with nanoTime just before the flush call & print to console or so.
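A minimal sketch of that measurement, assuming hypothetical onWrite/onFlush hook points that would be called next to each write and just before the flush:

// sketch of the suggested measurement; all names here are made up
public class FlushLatencyProbe
{
    // threshold 20 can queue up to 21 packets before a flush, plus 1 spare slot
    private final long[] writeTimes = new long[22];
    private int count;

    // call on every write
    public void onWrite()
    {
        if ( count < writeTimes.length )
        {
            writeTimes[count++] = System.nanoTime();
        }
    }

    // call just before the flush
    public void onFlush()
    {
        long now = System.nanoTime();
        for ( int i = 0; i < count; i++ )
        {
            // how long each queued packet waited until the flush, in microseconds
            System.out.println( "packet " + i + " waited " + ( now - writeTimes[i] ) / 1000 + "us" );
        }
        count = 0;
    }
}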
I understand, I also just put 20 there without much thinking about it, but it seemed that using 40 instead of 20 makes no difference in latency (maybe more also makes no difference).
I fully agree this needs more testing.
Inspired by @Janmm14 #3393
I implemented a simple flush consolidation without modifying the netty pipeline. This change consists of tweaks in the PacketHandlers, ChannelWrapper and HandlerBoss.
There is a system property to set the max consolidate amount (see the sketch below).
Consolidating is enabled after the backend receives the login packet, to ensure all packets in the login and config phases are sent without delay and cannot get lost.
And it is disabled while switching to another server.
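A hedged sketch of reading such a threshold from a system property; the property name here is an assumption, not the one used in the patch:

public class ConsolidationConfig
{
    // property name is hypothetical; the default of 20 matches the discussion above
    public static final int MAX_CONSOLIDATE =
            Integer.getInteger( "bungee.flush-consolidate-amount", 20 );
}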
It would be nice if it could be tested.
I will probably test it on a friend's network, but more is better (I tested it on my localhost and everything was working fine for me).
Closes #3392