OpenAI RateLimitError #257
base: main
Conversation
Signed-off-by: Sudhakar Singh <[email protected]>
I have read the CLA Document and I hereby sign the CLA. (You can retrigger this bot by commenting `recheck` in this Pull Request. Posted by the CLA Assistant Lite bot.)
❌ Changes requested. Reviewed everything up to 1511121 in 1 minute and 43 seconds
More details
- Looked at 48 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 3 drafted comments based on config settings
1. `graphiti_core/llm_client/openai_client.py:149`
   - Draft comment: `max_delay` is not defined. Define `max_delay` or replace it with a defined variable.
   - Reason this comment was not posted: Marked as duplicate.
2. `graphiti_core/llm_client/openai_client.py:162`
   - Draft comment: Convert `RateLimitError` to a string to extract the retry time. Use `str(e)` in `_parse_retry_after`.
   - Reason this comment was not posted: Marked as duplicate.
3. `graphiti_core/llm_client/openai_client.py:139`
   - Draft comment: Define `_parse_retry_after` as a separate function or a static method for better readability and reusability.
   - Reason this comment was not posted: Confidence changes required: 80%. The `_parse_retry_after` function is defined inside the `generate_response` method, which is not idiomatic in Python. It should be defined as a separate method, or a static method if it doesn't use any instance variables.
Workflow ID: wflow_vEEQ61N6ZkfozhXO
```python
        return float(match.group(1)) / 1000  # Convert ms to seconds

    # Fallback to exponential backoff if no time found
    return min(5 * (2 ** retry_count), max_delay)
```
`max_delay` is not defined here. This will cause a `NameError`. Define `max_delay` or replace it with a defined variable.
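A minimal sketch of how both draft comments could be addressed together: the cap is given a name (the `MAX_DELAY` value of 60s is a hypothetical choice, since the PR leaves `max_delay` undefined), and the helper is pulled out of `generate_response` as a static method. The message wording matched by the regex is also an assumption about OpenAI's rate-limit error text, not taken from the PR.

```python
import re


class OpenAIClient:
    # Hypothetical cap on backoff delay; the PR's max_delay is otherwise undefined.
    MAX_DELAY = 60.0  # seconds

    @staticmethod
    def _parse_retry_after(message: str, retry_count: int) -> float:
        """Extract a retry delay (seconds) from a rate-limit error message,
        falling back to capped exponential backoff when none is found."""
        # OpenAI 429 messages often read "Please try again in 123ms";
        # the exact wording is an assumption here.
        match = re.search(r'try again in (\d+(?:\.\d+)?)\s*ms', message)
        if match:
            return float(match.group(1)) / 1000  # convert ms to seconds
        # Fallback: 5s base, doubling per retry, capped at MAX_DELAY
        return min(5 * (2 ** retry_count), OpenAIClient.MAX_DELAY)
```

Making it a `@staticmethod` keeps it testable in isolation, since it touches no instance state.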
```python
# These errors should not trigger retries
raise
# Handle rate limits with backoff
except RateLimitError as e:
    retry_after = _parse_retry_after(e)
```
`_parse_retry_after` expects a string, but `e` is an exception object. Pass `str(e)` instead.
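A self-contained sketch of the retry loop with that fix applied. `RateLimitError` here is a stand-in class (the real client imports it from the OpenAI SDK), and `call_with_backoff`, `_retry_delay`, and the retry counts are illustrative names, not the PR's actual code.

```python
import asyncio
import re


class RateLimitError(Exception):
    """Stand-in for openai.RateLimitError so the sketch is self-contained."""


def _retry_delay(message: str, retry_count: int, max_delay: float = 60.0) -> float:
    # Look for "in <n>ms" in the error text; the wording is an assumption.
    match = re.search(r'in (\d+(?:\.\d+)?)\s*ms', message)
    if match:
        return float(match.group(1)) / 1000
    return min(5 * (2 ** retry_count), max_delay)


async def call_with_backoff(call, max_retries: int = 4):
    """Retry `call` on RateLimitError, sleeping for the parsed delay."""
    for retry_count in range(max_retries):
        try:
            return await call()
        except RateLimitError as e:
            # str(e) converts the exception object to text before parsing,
            # as the review comment suggests.
            await asyncio.sleep(_retry_delay(str(e), retry_count))
    raise RuntimeError('rate limit retries exhausted')
```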
It wasn't possible to run this tutorial (https://help.getzep.com/graphiti/graphiti/lang-graph-agent) consistently without adding this rate-limit handling.
Important

Adds rate-limit handling with exponential backoff to `generate_response()` in `openai_client.py`.

- Handles rate limits in `generate_response()` in `openai_client.py` using exponential backoff.
- Adds `_parse_retry_after()` to extract the retry time from error messages, falling back to exponential backoff.
- Catches `RateLimitError` and retries with a delay.
- Re-raises `RefusalError` without retry.

This description was generated for 1511121. It will automatically update as commits are pushed.