
Extend LATENCY LATEST to add min / sum / cnt stats #1570

Open
wants to merge 1 commit into base: unstable

Conversation

enjoy-binbin
Member

Currently LATENCY LATEST only reports the following fields:

  • Event name.
  • Unix timestamp of the latest latency spike for the event.
  • Latest event latency in milliseconds.
  • All-time maximum latency for this event.

This PR introduces these fields:

  • min: the all-time minimum latency for this event (maybe not that useful)
  • sum: the all-time sum of latencies for this event (I think it is useful for some events)
  • cnt: the number of events that triggered latency tracking; together with sum we can calculate the avg
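The bookkeeping described above can be sketched as follows. This is a minimal illustration, not the actual valkey struct or code: the type and function names here are hypothetical, and only the field names (max, min, sum, cnt) mirror the PR description.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-event aggregate stats, mirroring the PR description:
 * max already exists today; min, sum and cnt are the proposed additions. */
typedef struct latencyStats {
    uint32_t max; /* All-time maximum latency (ms). */
    uint32_t min; /* All-time minimum latency (ms). */
    uint64_t sum; /* All-time sum of latencies (ms). */
    uint64_t cnt; /* Number of latency spikes recorded. */
} latencyStats;

/* Fold one new latency sample into the aggregates. */
void latencyStatsAdd(latencyStats *s, uint32_t latency) {
    if (s->cnt == 0 || latency < s->min) s->min = latency;
    if (latency > s->max) s->max = latency;
    s->sum += latency;
    s->cnt++;
}

/* The average is not stored; it is derived from sum and cnt. */
uint64_t latencyStatsAvg(const latencyStats *s) {
    return s->cnt ? s->sum / s->cnt : 0;
}
```

Storing sum and cnt rather than a precomputed average keeps the update O(1) and lets clients compute the average themselves without rounding drift on the server side.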


Signed-off-by: Binbin <[email protected]>
@enjoy-binbin enjoy-binbin requested review from a team January 16, 2025 03:20

codecov bot commented Jan 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 70.98%. Comparing base (cda9eee) to head (2df98be).
Report is 9 commits behind head on unstable.

Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #1570      +/-   ##
============================================
+ Coverage     70.96%   70.98%   +0.02%     
============================================
  Files           120      120              
  Lines         65094    65103       +9     
============================================
+ Hits          46191    46211      +20     
+ Misses        18903    18892      -11     
Files with missing lines Coverage Δ
src/latency.c 81.36% <100.00%> (+0.43%) ⬆️

... and 10 files with indirect coverage changes

Contributor

@zuiderkwast zuiderkwast left a comment


OK, I think we can add this.

We are adding these at the end of an array reply. I remember we did something like that in CLUSTER SLOTS and we said it was a backward-compatible change, but it still broke some clients. For example, hiredis-cluster was checking the exact array length when handling CLUSTER SLOTS.

LATENCY LATEST is not as risky to change as CLUSTER SLOTS, but we should still be aware that some monitoring tools may break. We should mention it in the release notes, and we should add a note in the documentation that new fields can be added. A note like that makes it clearer that adding more fields to the reply is a backward-compatible change. Currently there is no such note on https://valkey.io/commands/latency-latest/.
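The hiredis-cluster breakage mentioned above comes from clients checking an exact array length. A lenient parser can be sketched as follows; this is an illustration with hypothetical names, modeling a per-event reply as an array of integers (timestamp, latest, max, plus any fields appended later), not actual hiredis code:

```c
#include <assert.h>
#include <stddef.h>

/* Parsed view of the fields a legacy client cares about. */
typedef struct eventLatency {
    long long timestamp, latest, max;
} eventLatency;

/* Returns 0 on success, -1 if the reply is too short. Checking
 * n >= 3 instead of n == 3 means extra trailing fields (such as the
 * min/sum/cnt proposed in this PR) are simply ignored, so the client
 * keeps working when the server appends new fields. */
int parseEventReply(const long long *fields, size_t n, eventLatency *out) {
    if (n < 3) return -1; /* Too few fields: reject. */
    out->timestamp = fields[0];
    out->latest = fields[1];
    out->max = fields[2];
    return 0; /* n > 3 is fine: forward compatible. */
}
```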

@zuiderkwast zuiderkwast added release-notes This issue should get a line item in the release notes major-decision-pending Major decision pending by TSC team labels Jan 21, 2025
@zuiderkwast
Contributor

@valkey-io/core-team Please vote to extend the multi-bulk reply with three more numbers.

@hwware hwware requested a review from zuiderkwast January 27, 2025 15:51
addReplyBulkCString(c, event);
addReplyLongLong(c, ts->samples[last].time);
addReplyLongLong(c, ts->samples[last].latency);
addReplyLongLong(c, ts->max);
addReplyLongLong(c, ts->min);
Member

@madolson madolson Jan 27, 2025


I vote to remove min. It's basically the lowest value above min, I think sum and cnt are more useful.

Copy link
Contributor


I guess you mean min is the lowest value above the configured latency-monitor-threshold.
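The point above is that samples below the configured threshold are never recorded at all, so the tracked min can never fall below whatever latency-monitor-threshold was at sampling time. A tiny sketch of that filtering (hypothetical names; the real check lives in valkey's latency sampling path):

```c
#include <assert.h>

/* Stand-in for the latency-monitor-threshold config, in ms.
 * A value of 0 disables latency monitoring entirely. */
static long long monitor_threshold = 100;

/* Returns 1 if an event of this latency would be recorded,
 * 0 if it is filtered out before any stats are updated. */
int latencyShouldRecord(long long latency_ms) {
    return monitor_threshold > 0 && latency_ms >= monitor_threshold;
}
```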

Member Author


Will it be useful in scenarios where latency-monitor-threshold may be frequently adjusted, or where the real latency is much larger than latency-monitor-threshold?

I am not sure about it actually; I am OK with removing it if you both think it is useless.

Contributor


Only Madelyn thinks it's useless. I don't have an opinion about it. 😆

Member


I'm pretty strongly opinionated about only adding fields we are sure are useful. If we aren't sure something is useful, I would prefer to defer adding it until someone brings it up or has a use case. I prefer to lean towards simplicity as much as possible. The fewer fields there are, the less end users have to think about.

Will it be useful in scenarios where latency-monitor-threshold may be frequently adjusted? or the real latency is much larger than latency-monitor-threshold?

I'm not sure what the min would tell you in that scenario.

Labels
major-decision-pending Major decision pending by TSC team release-notes This issue should get a line item in the release notes
Projects
Status: In Progress
4 participants