feat(rabbitmq): adds a message batching mechanism for RabbitMQ handlers #781
Conversation
I do like this idea of batching. I've left some notes, but I'll also request test coverage, as we need to be certain that this works as expected and is backwards compatible.
@underfisk Thanks for taking a look, will do on the logging and tests. Are you able to address some of the other questions that I had?
I haven't been using this for a while but I'm familiar with the concept. Spring has a really good implementation and I do believe we'll benefit from one as well.
I tried to answer your questions @ckfngod but the "real answer" may come from real use cases or anyone interested in this feature. In theory there is common ground for designing new APIs, and I don't think this proposal is far from a good implementation; it is a good starting point for sure.
@underfisk Really appreciate the in-depth answers, that's very helpful.
I'll go ahead with the proposed changes. Let me know if there are any other changes you'd like to see!
Totally agree with this sentiment. For what it's worth, I'm working on this because we have a use case for it and this solution will work fine for us 🙂
@underfisk Spent some time thinking about this, and it might not work as cleanly as we thought. The types get pretty hairy and will impact external interfaces (we'd need a discriminated union).

What I like about the current solution is that the types are simple (only one handler options interface, no complex discriminated unions), and it'll work for both decorator and module-level configs unambiguously.

Thoughts?
Sounds fair to me. You're presenting the idea with a use case in mind; I think we can refine it later if people start adhering to it.
@ckfngod Before we get this in, we'll need to polish the PR description, as some open questions may have been resolved and we're going to present a new feature. If you're okay with improving the documentation (md files) later on with some examples, that would be awesome.
@underfisk Added tests and documentation, and updated the PR description with more detail and summaries of our discussion. Let me know if there are any other tasks or outstanding issues you'd like to see resolved!
```typescript
  error: any
) => {
  if (error.code == PRECONDITION_FAILED_CODE) {
    // 406 == preconditions failed
```
You can now remove this comment since the code is extracted and easy to read
@ckfngod Thank you for addressing the PR feedback 🙏
Head branch was pushed to by a user without write access
@underfisk Cool, appreciate it!
@underfisk Pushed a change that should fix my flaky tests. It was working locally, but it looks like there's a timing issue in the CI pipeline. See 503c827
@david-pivonka I think it was added as part of a feature implementation, but I'm totally fine with removing the default and letting the one or two user(s) that requested/implemented this know, via a breaking change, that we no longer inject it by default.
@underfisk Hey, thanks for the comment. Let me know if you want me to create a PR for it with the working code.
Feel free to create the PR |
Message Batching
This implements consumer-side batching (like Spring's). It works by accumulating messages until either the batch size limit is hit or the batch timer expires, at which point the handler is presented with the messages as a single array, which is acked or nacked together. A batch error handler may be provided for users whose error handling logic needs to be aware of the failed batch as a whole.
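As a sketch of how a consumer might opt into batching from a handler (the option names `batchOptions`, `size`, and `timeout` follow this PR's description, while the exchange, routing key, queue, and handler body are hypothetical placeholders):

```typescript
import { RabbitSubscribe } from '@golevelup/nestjs-rabbitmq';

export class OrderService {
  // With batchOptions set, the handler receives an array of messages
  // that is acked or nacked together rather than a single message.
  @RabbitSubscribe({
    exchange: 'orders',          // hypothetical exchange
    routingKey: 'order.created', // hypothetical routing key
    queue: 'order-batch-queue',  // hypothetical queue
    batchOptions: {
      size: 10,     // flush once 10 messages have accumulated
      timeout: 200, // or once 200 ms pass without a new message
    },
  })
  handleOrderBatch(messages: unknown[]): void {
    console.log(`received batch of ${messages.length}`);
  }
}
```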
This is implemented as a new optional message handler options property `batchOptions`, which contains:

- `size`: the maximum batch length before the batch is returned
- `timeout`: the maximum length of time allowed between messages before a batch is returned
- `errorHandler`: a custom error handling implementation that receives message batches

At a high level, the batching mechanism works as follows:
1. When the first message arrives:
   a. The batch timer is started with the message handling logic as the callback
   b. The above callback is stored in a separate variable, `inflightBatchHandler`
2. When subsequent messages arrive, check whether the batch has hit the size limit:
   a. If it has, clear the batch timer and immediately call `inflightBatchHandler` with the batch
   b. Otherwise, refresh the batch timer
3. When the batch timer expires:
   a. `inflightBatchHandler` is called by the timer with a partial batch
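The steps above can be sketched as a small accumulator. This is a minimal illustration of the size-limit and timer-refresh logic, not the PR's actual implementation; the names `MessageBatcher`, `add`, and `flush` are invented for this example, and acking/nacking is abstracted behind the handler callback:

```typescript
type BatchHandler<T> = (batch: T[]) => void;

class MessageBatcher<T> {
  private batch: T[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private readonly size: number,    // maximum batch length
    private readonly timeout: number, // ms allowed between messages
    private readonly handler: BatchHandler<T>,
  ) {}

  add(message: T): void {
    this.batch.push(message);
    if (this.batch.length >= this.size) {
      // Size limit hit: cancel the timer and hand off the full batch.
      this.flush();
    } else {
      // Otherwise refresh the timer; an expiry delivers a partial batch.
      if (this.timer) clearTimeout(this.timer);
      this.timer = setTimeout(() => this.flush(), this.timeout);
    }
  }

  private flush(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    const toDeliver = this.batch;
    this.batch = [];
    if (toDeliver.length > 0) this.handler(toDeliver);
  }
}

// Demo: hitting the size limit flushes immediately, without waiting
// for the timeout.
const received: number[][] = [];
const batcher = new MessageBatcher<number>(3, 1000, (b) => received.push(b));
batcher.add(1);
batcher.add(2);
batcher.add(3);
```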