Retry on configurable exception #6991
base: main
Conversation
Resolves #6962
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##               main    #6991      +/-   ##
============================================
+ Coverage     89.97%   90.01%   +0.04%
- Complexity     6591     6599       +8
============================================
  Files           729      729
  Lines         19852    19856       +4
  Branches       1953     1954       +1
============================================
+ Hits          17861    17873      +12
+ Misses         1396     1387       -9
- Partials        595      596       +1

☔ View full report in Codecov by Sentry.
-        RetryInterceptor::isRetryableException,
+        e ->
+            retryPolicy.getRetryExceptionPredicate().test(e)
+                || RetryInterceptor.isRetryableException(e),
The OR here is interesting. It means a user can choose to expand the definition of what is retryable but not reduce it. I wonder if there are any cases when you would not want to retry when the default would retry. 🤔
"a user can choose to expand the definition of what is retryable but not reduce it"

That's exactly the idea.

"I wonder if there are any cases when you would not want to retry when the default would retry"

I would say no 🤔
"I would say no 🤔"

I think we might…
Suppose we want to expose options that give the user full control over what's retryable (as I alluded to at the end of this comment); we'd probably want to do something like:
- Expose a single configurable predicate option of the form setRetryPredicate(Predicate<Throwable>)
- Funnel all failed requests through this predicate, whether they resolved a response with a status code or ended with an exception
- This means we'd need to translate requests with a non-200 status code to an equivalent exception to pass to the predicate
- If the user doesn't define their own predicate, default to one that retries when the status code is retryable (429, 502, 503, 504) or when the exception is retryable (like the SocketTimeoutException cases we've discussed).
In this case, it's possible that a user doesn't want to retry on a particular response status code, like 502, even when the default behavior is to retry on it. A rough sketch of that idea follows.
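A minimal, purely illustrative sketch of this proposal: setRetryPredicate does not exist in the SDK today, and representing a failed response as a java.net.HttpRetryException is my assumption, not an existing translation.

```java
// Illustrative sketch only: a single user-configurable retry predicate that fully
// replaces the default, so it can narrow retries as well as widen them.
import java.net.HttpRetryException;
import java.util.function.Predicate;

public class RetryPredicateSketch {

  // Default predicate: retry on retryable status codes or on retryable exceptions.
  static final Predicate<Throwable> DEFAULT_RETRY_PREDICATE =
      t -> {
        if (t instanceof HttpRetryException) {
          int code = ((HttpRetryException) t).responseCode();
          return code == 429 || code == 502 || code == 503 || code == 504;
        }
        // Stand-in for the existing static exception check.
        return t instanceof java.net.SocketTimeoutException;
      };

  public static void main(String[] args) {
    // A user-supplied predicate would replace the default entirely, e.g. never
    // retry a 502 even though the default would:
    Predicate<Throwable> userPredicate =
        t ->
            !(t instanceof HttpRetryException
                    && ((HttpRetryException) t).responseCode() == 502)
                && DEFAULT_RETRY_PREDICATE.test(t);

    System.out.println(userPredicate.test(new HttpRetryException("bad gateway", 502))); // false
    System.out.println(userPredicate.test(new HttpRetryException("too many requests", 429))); // true
  }
}
```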
Thanks for the PR!
Wondering if you could elaborate on these, since it's possible that the errors aren't actually environment-specific and everyone could benefit from them. My initial inclination was that we should just update the static definition of what constitutes a retryable exception, but I'm open to being wrong.
Hey @jack-berg,

3 exceptions from me:

Also, recently we had network issues (retryable) when using SSL, but it was luckily solved by a Java upgrade, so I can neglect it; it might still be useful for some users on some Java versions.

I don't mind putting all of that into the "static" definition, but the reason I want it configurable is the ability to apply a quick fix when a new retryable exception is discovered. Also, I don't know all the exceptions that other people encounter in their networks, so the list of exceptions is not complete.

So I can combine it all into the static definition in addition to the current retryable exceptions, but I want to preserve the dynamic config as well. Tell me your thoughts about it.
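To make the semantics of the change concrete, here is a small self-contained sketch of how the combined check from the diff behaves; the "default" predicate below is a stand-in, not the SDK's actual RetryInterceptor.isRetryableException.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.function.Predicate;
import javax.net.ssl.SSLException;

public class RetryPredicateDemo {
  public static void main(String[] args) {
    // User-supplied predicate: also retry on SSL-related IO failures.
    Predicate<IOException> userPredicate = e -> e instanceof SSLException;

    // Stand-in for the default static check in the SDK.
    Predicate<IOException> defaultPredicate = e -> e instanceof SocketTimeoutException;

    // The interceptor combines them with OR, as in the diff above: the user can
    // widen what is retryable but never narrow the defaults.
    Predicate<IOException> effective = e -> userPredicate.test(e) || defaultPredicate.test(e);

    System.out.println(effective.test(new SSLException("handshake failed"))); // true (user)
    System.out.println(effective.test(new SocketTimeoutException("timed out"))); // true (default)
    System.out.println(effective.test(new IOException("unexpected end of stream"))); // false
  }
}
```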
There is a flaky test in one of the checks, not related to my change because it's in the metrics product 🙈
Hmm… let's think about these. They are clearly the result of some sort of timeout occurring. We take the arguments for callTimeout and connectTimeout and apply them to the OkHttpClient we build.
The callTimeout represents the total allotted time for everything to resolve with the call, including resolving DNS, connecting, writing the request body, reading the response body, and any additional retry requests. So if this is exceeded, it won't do any good to retry, because there's no allotted time left for additional attempts to resolve.

The connectTimeout is a little different. It represents the max amount of time for connecting a TCP socket to the target host. If this is less than callTimeout, then there is still time remaining in the allotted callTimeout for a retry to succeed. So I think it's correct to retry when this occurs, and if we look at RetryInterceptor, we see there's an attempt to retry in this type of situation. So I think it's appropriate to extend the condition to include the exception you're seeing:
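Something along these lines; this is a sketch of the shape of the check, not the SDK's exact RetryInterceptor code, which may differ in detail.

```java
// Sketch of an extended retryable-exception check.
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.Locale;

final class RetryableExceptionCheck {

  // Connect timeouts fail well within callTimeout, so there is usually budget left
  // for another attempt; an exhausted overall call timeout is not worth retrying.
  static boolean isRetryableException(IOException e) {
    if (!(e instanceof SocketTimeoutException)) {
      return false;
    }
    String message = e.getMessage();
    // Connect timeouts surface either with no message or with "connect timed out".
    return message == null || message.toLowerCase(Locale.ROOT).contains("connect timed out");
  }

  private RetryableExceptionCheck() {}
}
```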
There are two additional OkHttpClient settings that we don't configure. This still doesn't address the
This is the last one that's unaddressed. This exception is thrown when DNS lookup fails for the given host. I know that the Java runtime caches DNS results, but I wasn't sure what it does with DNS lookup failures. I did some searching and found that the negative DNS cache TTL is controlled by a property called networkaddress.cache.negative.ttl, which defaults to 10 seconds. This indicates that we should in fact retry when UnknownHostException occurs, because there's a chance the error is transient and the next attempt succeeds.
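For reference, here is a quick way to inspect (or override) that JVM setting; the 10-second default assumes it hasn't been changed in the java.security configuration.

```java
// Inspect the JVM's negative DNS cache TTL; the JDK ships with a default of 10 seconds.
import java.security.Security;

public class DnsNegativeTtl {
  public static void main(String[] args) {
    String ttl = Security.getProperty("networkaddress.cache.negative.ttl");
    System.out.println("negative DNS cache TTL (seconds): " + (ttl == null ? "10 (default)" : ttl));

    // Applications that want failed lookups to be re-attempted immediately can shorten it:
    // Security.setProperty("networkaddress.cache.negative.ttl", "0");
  }
}
```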
This is a good point. One downside I can think of to adding the proposed
The number of retryable exceptions is very limited in the current logic, so we lose data when any other IO exception (one not mentioned in the current Java code) happens.
Since networks differ, we may experience different exceptions. In my environment I caught a few exceptions that very likely need to be retried, and they are not listed as retryable in the current code.
Since each environment is different, I suggest adding the ability to configure retryable exceptions.
The change is fully backward compatible and does not change the default behaviour of the library.