Fix large volume test #7
base: main
Changes from 7 commits:
c1da202, dcc08dd, 1dff662, 0253804, a1c0b0f, 1b98b8c, 42730dd, 9250622
```diff
@@ -559,27 +559,29 @@ where
             let header_sent = frame_sent.header();

             // If we finished the active multi frame send, clear it.
+            let mut cleared_multi_frame = false;
             if was_final {
                 let channel_idx = header_sent.channel().get() as usize;
                 if let Some(ref active_multi_frame) =
                     self.active_multi_frame[channel_idx] {
                     if header_sent == *active_multi_frame {
                         self.active_multi_frame[channel_idx] = None;
+                        cleared_multi_frame = true;
                     }
                 }
-            };
+            }

             if header_sent.is_error() {
                 // We finished sending an error frame, time to exit.
                 return Err(CoreError::RemoteProtocolViolation(header_sent));
             }

-            // TODO: We should restrict the dirty-queue processing here a little bit
-            // (only check when completing a multi-frame message).
             // A message has completed sending, process the wait queue in case we have
             // to start sending a multi-frame message like a response that was delayed
             // only because of the one-multi-frame-per-channel restriction.
-            self.process_wait_queue(header_sent.channel())?;
+            if cleared_multi_frame {
+                self.process_wait_queue(header_sent.channel())?;
+            }
         } else {
             #[cfg(feature = "tracing")]
             tracing::error!("current frame should not disappear");
```
```diff
@@ -719,6 +721,16 @@ where

     /// Handles a new item to send out that arrived through the incoming channel.
     fn handle_incoming_item(&mut self, item: QueuedItem) -> Result<(), LocalProtocolViolation> {
+        // Process the wait queue to avoid this new item "jumping the queue".
+        match &item {
+            QueuedItem::Request { channel, .. } | QueuedItem::Response { channel, .. } => {
+                self.process_wait_queue(*channel)?
+            }
+            QueuedItem::RequestCancellation { .. }
+            | QueuedItem::ResponseCancellation { .. }
+            | QueuedItem::Error { .. } => {}
+        }
+
         // Check if the item is sendable immediately.
         if let Some(channel) = item_should_wait(&item, &self.juliet, &self.active_multi_frame)? {
             #[cfg(feature = "tracing")]
```
```diff
@@ -745,6 +757,7 @@ where
                 let id = msg.header().id();
                 self.request_map.insert(io_id, (channel, id));
                 if msg.is_multi_frame(self.juliet.max_frame_size()) {
+                    debug_assert!(self.active_multi_frame[channel.get() as usize].is_none());
                     self.active_multi_frame[channel.get() as usize] = Some(msg.header());
                 }
                 self.ready_queue.push_back(msg.frames());
```
```diff
@@ -771,6 +784,7 @@ where
             } => {
                 if let Some(msg) = self.juliet.create_response(channel, id, payload)? {
                     if msg.is_multi_frame(self.juliet.max_frame_size()) {
+                        debug_assert!(self.active_multi_frame[channel.get() as usize].is_none());
                         self.active_multi_frame[channel.get() as usize] = Some(msg.header());
                     }
                     self.ready_queue.push_back(msg.frames())
```
```diff
@@ -835,11 +849,6 @@ where
             self.wait_queue[channel.get() as usize].push_back(item);
         } else {
             self.send_to_ready_queue(item)?;
-
-            // No need to look further if we have saturated the channel.
-            if !self.juliet.allowed_to_send_request(channel)? {
-                break;
-            }
         }

         // Ensure we do not loop endlessly if we cannot find anything.
```

Reviewer: Did 0253804#diff-76866598ce8fd16261a27ac58a84b2825e6e77fc37c163a6afa60f0f4477e569L852-L856 fix an issue? The code was supposed to bring down the potential …

Author: It wasn't an issue exposed by a test; rather, I thought it was a bug while following the logic during debugging. The issue is that the wait queue can hold not only requests but also responses, so it would be wrong to exit early in a case where a batch of responses could still have been moved out of the wait queue. As an aside, I wonder if it would be worthwhile creating a new enum just for the wait queue, similar to …
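To make that concrete, here is a minimal sketch of the stranding hazard; the loop shape is simplified for illustration and is not the PR's exact code:

```rust
// Illustrative only: breaking as soon as the request limit is hit also
// stops responses behind that point, even though responses are not
// subject to the per-channel request limit and could still be sent.
while let Some(item) = wait_queue.pop_front() {
    send_to_ready_queue(item)?;
    if !juliet.allowed_to_send_request(channel)? {
        break; // Queued responses after this point are stranded.
    }
}
```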
Reviewer: My guess is that the problem is likely best solved with two separate queues, one for requests and one for large messages (although I am not entirely sure yet how to handle the case where a message is both large and a request). Alternatively, we should keep some sort of state so that these cases can be distinguished quickly.

Reviewer: With the code as-is in this PR, I'm still uncomfortable with the situation. Imagine queuing single-frame messages at a very high rate. Once we have saturated the ready queue, they will all go into the wait queue, and every call will process the entire, now-growing queue.

Reviewer: We could consider adding this:

```rust
struct WaitSubQueue {
    single_frame: VecDeque<QueuedItem>,
    multi_frame: VecDeque<QueuedItem>,
}

struct WaitQueue {
    requests: WaitSubQueue,
    other: WaitSubQueue,
    prefer_request: bool,
}

impl WaitSubQueue {
    #[inline(always)]
    fn next_item(&mut self, allow_multi_frame: bool) -> Option<QueuedItem> {
        if allow_multi_frame && !self.multi_frame.is_empty() {
            self.multi_frame.pop_front()
        } else {
            self.single_frame.pop_front()
        }
    }
}

impl WaitQueue {
    pub fn next_item(
        &mut self,
        request_allowed: bool,
        multiframe_allowed: bool,
    ) -> Option<QueuedItem> {
        if request_allowed {
            self.next_item_allowing_request(multiframe_allowed)
        } else {
            self.other.next_item(multiframe_allowed)
        }
    }

    /// Returns the next item, assuming a request is allowed.
    // Note: This function is separated out for readability.
    #[inline(always)]
    fn next_item_allowing_request(&mut self, multiframe_allowed: bool) -> Option<QueuedItem> {
        let candidate = if self.prefer_request {
            self.requests
                .next_item(multiframe_allowed)
                .or_else(|| self.other.next_item(multiframe_allowed))
        } else {
            self.other
                .next_item(multiframe_allowed)
                .or_else(|| self.requests.next_item(multiframe_allowed))
        }?;

        // Alternate, to prevent starvation if the receiver is processing at a
        // rate that matches our production rate. This essentially subdivides
        // the channel into request/non-request subchannels.
        self.prefer_request = !candidate.is_request();
        Some(candidate)
    }
}
```

Since the logic gets more complex, it would be wise to separate it out. This is just a sketch; at least some of the comments would need to be filled in. The key idea is to know what kind of item we can produce next by checking the state of our multi-frame sends and request limits, then use the separated queues to fetch a suitable item directly. Note that this reorders items that weren't reordered before, by separating the queues.
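For illustration, a hypothetical consumer loop over this `WaitQueue` might look as follows. The surrounding names (`juliet`, `active_multi_frame`, `wait_queue`, `send_to_ready_queue`) are borrowed from the diffs above; this is a sketch of one possible integration, not code from the PR:

```rust
// Sketch only: drain a channel's WaitQueue, recomputing what is currently
// allowed to be sent before each pull. `next_item` is the method from the
// sketch above; everything else is assumed from the diff context.
fn process_wait_queue(&mut self, channel: ChannelId) -> Result<(), LocalProtocolViolation> {
    loop {
        // A request may only leave the wait queue if the channel's request
        // limit has not been reached.
        let request_allowed = self.juliet.allowed_to_send_request(channel)?;
        // A multi-frame message may only start if none is in flight.
        let multiframe_allowed =
            self.active_multi_frame[channel.get() as usize].is_none();

        match self.wait_queue[channel.get() as usize]
            .next_item(request_allowed, multiframe_allowed)
        {
            Some(item) => self.send_to_ready_queue(item)?,
            None => return Ok(()),
        }
    }
}
```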
```diff
@@ -867,6 +876,8 @@ fn item_should_wait<const N: usize>(
         } => {
             // Check if we cannot schedule due to the message exceeding the request limit.
             if !juliet.allowed_to_send_request(*channel)? {
+                #[cfg(feature = "tracing")]
+                tracing::trace!(%channel, %item, "item should wait: channel full");
                 return Ok(Some(*channel));
             }

```
```diff
@@ -889,6 +900,8 @@ fn item_should_wait<const N: usize>(
             if active_multi_frame.is_some() {
                 if let Some(payload) = payload {
                     if payload_is_multi_frame(juliet.max_frame_size(), payload.len()) {
+                        #[cfg(feature = "tracing")]
+                        tracing::trace!(%channel, %item, "item should wait: multiframe in progress");
                         return Ok(Some(*channel));
                     }
                 }
```
Reviewer: 1dff662 seems to add starvation protection, i.e. newer data can no longer consistently get in front of existing data. Was the old behavior observed to be problematic?

My core issue with this is that if we process the wait queue each time anyway, it might be better not to check whether we can bypass it at all and simply put everything into the wait queue every time. However, processing the wait queue is expensive, especially if the previously mentioned change is made; queuing messages would then have quadratic complexity!
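(Spelling out the arithmetic behind that claim: if n single-frame messages are queued while the channel is saturated, the k-th call to `handle_incoming_item` scans the roughly k − 1 items already waiting, so queuing all n costs on the order of n²/2 item visits in total.)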
Author: It was problematic in that Alice's requests started timing out: the ones in the wait queue were never processed, since newer ones kept getting preferential treatment.

I did consider just dumping everything into the wait queue, but I had the same reservation as you about the cost (and it also seemed to be somewhat of an abuse of the wait queue's intent; at the very least it would need renaming for clarity if we did that).
Reviewer: At the very least, add `QueuedItem::is_request` :) This may be less of an issue if the "new" `WaitQueue` (see above) is added.
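For reference, a minimal sketch of that helper, assuming the `QueuedItem` variants visible in the diff above:

```rust
impl QueuedItem {
    /// Returns `true` if this item is an outgoing request.
    ///
    /// The sketched `WaitQueue::next_item_allowing_request` above relies on
    /// this to alternate between the request and non-request sub-queues.
    #[inline]
    fn is_request(&self) -> bool {
        matches!(self, QueuedItem::Request { .. })
    }
}
```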
Author: I can do that, but tbh I don't see why we'd want it or where we'd use it?