feat(chatDraft): rewrite chat draft with AI #49
Conversation
⚠️ Rate limit exceeded: @ArslanSaleem has exceeded the hourly limit for the number of commits or files that can be reviewed. Please wait 18 minutes and 20 seconds before requesting another review. CodeRabbit enforces hourly rate limits per developer per organization and re-allows reviews after a brief timeout.

Walkthrough

The changes introduce a new …
Codecov Report

Attention: Patch coverage is …

@@            Coverage Diff             @@
##           release/v0.4.0     #49   +/- ##
=================================================
  Coverage                ?   59.81%
  Files                   ?       37
  Lines                   ?     1772
  Branches                ?        0
=================================================
  Hits                    ?     1060
  Misses                  ?      712
  Partials                ?        0

☔ View full report in Codecov by Sentry.
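The headline figure is internally consistent: Coverage is Hits divided by Lines, and Misses is their difference. A quick sanity check (assuming Codecov truncates to two decimal places rather than rounding):

```python
# Sanity-check the Codecov figures above: Coverage = Hits / Lines.
hits, lines = 1060, 1772

# Truncate to two decimal places (assumption: Codecov truncates, not rounds).
coverage = int(hits / lines * 10_000) / 100
misses = lines - hits

print(coverage)  # 59.81
print(misses)    # 712, matching the reported Misses row
```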
Actionable comments posted: 9
🧹 Outside diff range and nitpick comments (4)
frontend/src/interfaces/chat.ts (1)
Lines 31-34: Consider adding JSDoc comments for better documentation.
Adding documentation would improve maintainability by clarifying the purpose of each field.
+/**
+ * Interface for requesting AI-generated chat drafts
+ */
export interface ChatDraftRequest {
+ /** The content to be used as context for draft generation */
content: string;
+ /** Instructions for how the AI should generate the draft */
prompt: string;
}
backend/app/requests/__init__.py (1)
Lines 225-246: Add type hints and documentation.
The function would benefit from:
- A more specific type hint for draft_request instead of a generic dict
- A docstring explaining the purpose, parameters, and return value
Consider applying these improvements:
-def request_draft_with_ai(api_token: str, draft_request: dict) -> dict:
+from typing import TypedDict
+
+class DraftRequest(TypedDict):
+ content: str
+ prompt: str
+
+def request_draft_with_ai(api_token: str, draft_request: DraftRequest) -> dict:
+ """Send a draft request to the AI service.
+
+ Args:
+ api_token (str): The authentication token for the API
+ draft_request (DraftRequest): The draft request containing content and prompt
+
+ Returns:
+ dict: The AI-generated draft response
+
+ Raises:
+ CreditLimitExceededException: If the user has exceeded their credit limit
+ Exception: If the request fails or returns invalid JSON
+ """
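The suggested TypedDict gives static checkers a precise shape for draft_request without changing runtime behavior (a TypedDict is a plain dict at runtime). A minimal self-contained sketch — the function body here is a stub, not the repository's implementation:

```python
from typing import TypedDict

class DraftRequest(TypedDict):
    content: str
    prompt: str

def request_draft_with_ai(api_token: str, draft_request: DraftRequest) -> dict:
    # Stub for illustration: the real function posts to the AI service.
    # Type checkers now flag missing or misspelled keys in draft_request.
    return {"status": "OK", "echo": draft_request["prompt"]}

req: DraftRequest = {"content": "Meeting notes ...", "prompt": "Summarize politely"}
print(request_draft_with_ai("token-123", req)["echo"])  # Summarize politely
```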
backend/app/api/v1/chat.py (2)
Lines 27-29: Consider adding maximum length validation.
While the minimum length validation is good, consider adding maximum length constraints to prevent potential abuse or performance issues with extremely large inputs.
class DraftRequest(BaseModel):
- content: str = Field(..., min_length=1, description="Content cannot be empty")
- prompt: str = Field(..., min_length=1, description="Prompt cannot be empty")
+ content: str = Field(..., min_length=1, max_length=10000, description="Content cannot be empty")
+ prompt: str = Field(..., min_length=1, max_length=1000, description="Prompt cannot be empty")
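Pydantic enforces these constraints automatically at request time; for illustration, here is the equivalent check in plain Python, using the limits suggested above (a sketch, not the endpoint's actual code):

```python
MAX_CONTENT = 10_000  # suggested cap for `content`
MAX_PROMPT = 1_000    # suggested cap for `prompt`

def validate_draft(content: str, prompt: str) -> list[str]:
    """Return validation errors, mimicking the suggested Field constraints."""
    errors = []
    if not 1 <= len(content) <= MAX_CONTENT:
        errors.append("Content must be between 1 and 10000 characters")
    if not 1 <= len(prompt) <= MAX_PROMPT:
        errors.append("Prompt must be between 1 and 1000 characters")
    return errors

print(validate_draft("hello", "summarize"))  # []
print(validate_draft("", "x" * 2000))        # both fields fail
```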
Lines 245-275: Add rate limiting and authentication middleware.
The /draft endpoint should implement:
- Rate limiting to prevent abuse
- Authentication middleware to protect the endpoint
- Request size limits
This is especially important since it's making external API calls that may have usage costs.
Consider using FastAPI's built-in security dependencies:
from fastapi import Depends, Security
from fastapi.security import OAuth2PasswordBearer
from app.middleware import rate_limit
@chat_router.post("/draft", status_code=200)
@rate_limit(max_requests=10, window_seconds=60)
def draft_with_ai(
draft_request: DraftRequest,
db: Session = Depends(get_db),
current_user: User = Depends(get_current_user)
):
# ... rest of the implementation
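The rate_limit decorator imported above is assumed, not shown in the diff. A self-contained sliding-window sketch of what such a decorator could look like — per-process only and not production-grade (a real FastAPI app would typically use a library such as slowapi):

```python
import time
from collections import deque
from functools import wraps

def rate_limit(max_requests: int, window_seconds: float):
    """Hypothetical sliding-window limiter in the spirit of the suggestion above."""
    calls = deque()  # timestamps of recent calls (shared per process)

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that fell out of the window.
            while calls and now - calls[0] > window_seconds:
                calls.popleft()
            if len(calls) >= max_requests:
                raise RuntimeError("429: rate limit exceeded")
            calls.append(now)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@rate_limit(max_requests=2, window_seconds=60)
def draft_with_ai():
    return "draft"

print(draft_with_ai(), draft_with_ai())  # draft draft
try:
    draft_with_ai()  # third call within the window is rejected
except RuntimeError as e:
    print(e)  # 429: rate limit exceeded
```

A per-user limiter would key the deque by the authenticated user rather than sharing one window process-wide.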
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (5)
- backend/app/api/v1/chat.py (3 hunks)
- backend/app/requests/__init__.py (1 hunks)
- frontend/src/components/ChatDraftDrawer.tsx (3 hunks)
- frontend/src/interfaces/chat.ts (1 hunks)
- frontend/src/services/chat.tsx (2 hunks)
🧰 Additional context used
🪛 Biome
frontend/src/services/chat.tsx
[error] 66-66: The catch clause that only rethrows the original error is useless.
An unnecessary catch clause can be confusing.
Unsafe fix: Remove the try/catch clause.
(lint/complexity/noUselessCatch)
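The Biome rule flags a catch clause that only rethrows; the anti-pattern is language-agnostic. In Python terms, the two functions below behave identically, and the second is what the lint asks for:

```python
def with_useless_catch():
    try:
        return int("not a number")
    except ValueError:
        raise  # rethrows unchanged: the try/except adds nothing

def without_catch():
    return int("not a number")  # identical behavior, less noise

for fn in (with_useless_catch, without_catch):
    try:
        fn()
    except ValueError as e:
        print(type(e).__name__)  # ValueError, both times
```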
🔇 Additional comments (7)
frontend/src/interfaces/chat.ts (1)
Lines 31-34: LGTM! Interface design is clean and focused.
The ChatDraftRequest interface follows the existing naming conventions and properly defines the required fields for AI draft generation.
frontend/src/services/chat.tsx (1)
Line 4: LGTM: Import addition follows existing patterns.
The ChatDraftRequest import is appropriately placed with related chat interfaces.
frontend/src/components/ChatDraftDrawer.tsx (2)
Line 2: LGTM: Dependencies are well-organized.
The imports are appropriate for the component's functionality, including necessary UI components, icons, and services.
Also applies to: 6-9
Line 86: Verify draft_with_ai service integration.
Let's verify the integration with the draft service to ensure proper error handling and response types.
✅ Verification successful
Integration with draft_with_ai service is properly implemented
The service integration is well-structured with:
- Proper error handling in both frontend and backend
- Type-safe request/response with the ChatDraftRequest interface
- Complete request flow from UI → Frontend service → Backend API → External service
- Error feedback to users via toast notifications
Key implementation points:
- Frontend: frontend/src/services/chat.tsx implements the service call
- Backend: backend/app/api/v1/chat.py handles the API endpoint with proper error handling
- Types: frontend/src/interfaces/chat.ts defines the request interface
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for draft_with_ai service implementation and usage
echo "Searching for draft_with_ai service implementation..."
rg -t typescript "draft_with_ai" --no-filename
# Search for related interfaces
echo "Searching for related interfaces..."
rg -t typescript "interface.*ChatDraft" --no-filename
Length of output: 370
Script:
#!/bin/bash
# Search for draft_with_ai service implementation and usage without file type restriction
echo "Searching for draft_with_ai service implementation..."
rg "draft_with_ai" -A 5 -B 5
# Search for related interfaces without file type restriction
echo "Searching for related interfaces..."
rg "interface.*ChatDraft"
# Search for files containing draft-related functionality
echo "Searching for draft-related files..."
fd "draft" --type f
Length of output: 5690
backend/app/requests/__init__.py (1)
Lines 225-246: Verify server endpoint compatibility.
The function interacts with a new /v1/draft endpoint. Let's verify its implementation and usage.
✅ Verification successful
Server endpoint integration appears consistent and properly implemented
The verification shows:
- The function is correctly used in backend/app/api/v1/chat.py
- The endpoint /v1/draft follows the same pattern as other existing pandaetl endpoints (/v1/parse, /v1/extract, /v1/chat, etc.)
- Request validation is properly implemented through the DraftRequest model with required fields
- Error handling follows the established pattern used across other endpoint calls
- The server URL is properly configured in settings
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check server endpoint implementation and function usage
# Test 1: Find the server endpoint implementation
echo "Searching for draft endpoint implementation..."
rg -l "draft.*router\.(post|get)\(.*'/v1/draft'"
# Test 2: Find all usages of the new function
echo "Searching for function usage..."
rg "request_draft_with_ai"
# Test 3: Check for any existing draft-related tests
echo "Searching for related tests..."
fd -g "*test*.py" --exec rg -l "draft"
Length of output: 674
Script:
#!/bin/bash
# Let's check the implementation details and configuration
# Check the settings for pandaetl_server_url
echo "Checking server URL configuration..."
rg "pandaetl_server_url" -A 2
# Check the draft request structure and validation
echo "Checking draft request structure..."
rg -A 10 "class.*Draft.*Request"
# Check error handling patterns
echo "Looking for similar endpoint calls..."
rg "requests\.(post|get)\(.*settings\.pandaetl_server_url"
Length of output: 2685
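The call pattern the verification describes can be sketched without the real service. Everything below is illustrative: the URL constant, header name, and 402 status mapping are assumptions, not the repository's actual code:

```python
import json

PANDAETL_SERVER_URL = "https://api.example.com"  # stands in for settings.pandaetl_server_url

class CreditLimitExceededException(Exception):
    pass

def build_draft_call(api_token: str, draft_request: dict):
    """Compose the pieces of a POST to /v1/draft (sketch; no network I/O here)."""
    url = f"{PANDAETL_SERVER_URL}/v1/draft"
    headers = {"Authorization": f"Bearer {api_token}"}  # header name is an assumption
    body = json.dumps(draft_request)
    return url, headers, body

def handle_status(status_code: int) -> None:
    """Map service status codes to exceptions, as the error-handling pattern describes."""
    if status_code == 402:  # assumed code for credit exhaustion
        raise CreditLimitExceededException("Credit limit exceeded")
    if status_code != 200:
        raise Exception(f"Draft request failed with status {status_code}")

url, headers, body = build_draft_call("token", {"content": "notes", "prompt": "rewrite"})
print(url)  # https://api.example.com/v1/draft
```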
backend/app/api/v1/chat.py (2)
Line 13: LGTM: Import changes are appropriate.
The new imports are necessary for the added functionality and follow the project's import style.
Also applies to: 17-17
Line 259: Validate serialized request data.
The model_dump_json() output should be validated before being passed to the external service to ensure data integrity.
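One lightweight way to act on this suggestion: round-trip the serialized payload and check required keys before making the external call. A sketch under assumed field names, not the endpoint's actual code:

```python
import json

REQUIRED_FIELDS = {"content", "prompt"}

def validate_serialized(payload: str) -> dict:
    """Parse JSON as produced by model_dump_json() and verify required fields."""
    data = json.loads(payload)  # raises ValueError on malformed JSON
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return data

serialized = json.dumps({"content": "notes", "prompt": "rewrite"})
print(validate_serialized(serialized)["prompt"])  # rewrite
```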
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (1)
backend/app/api/v1/chat.py (1)
Lines 28-31: Add max_length validation to prevent potential DoS attacks.
While the min_length validation is good, consider adding max_length validation to both fields to prevent excessive input sizes.
class DraftRequest(BaseModel):
- content: str = Field(..., min_length=1, description="Content cannot be empty")
- prompt: str = Field(..., min_length=1, description="Prompt cannot be empty")
+ content: str = Field(..., min_length=1, max_length=10000, description="Content cannot be empty")
+ prompt: str = Field(..., min_length=1, max_length=1000, description="Prompt cannot be empty")
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (3)
- backend/app/api/v1/chat.py (3 hunks)
- backend/app/requests/__init__.py (1 hunks)
- frontend/src/services/chat.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/requests/__init__.py
🔇 Additional comments (5)
frontend/src/services/chat.tsx (2)
Line 4: LGTM!
The import is correctly placed with related chat interface types.
Lines 56-76: 🛠️ Refactor suggestion
Align error handling with existing patterns and improve type safety.
The function has several inconsistencies with the existing codebase:
- Error handling differs from the chat and chatStatus functions, which return error messages instead of throwing
- Return type annotation is missing
- Timeout value (300000) is duplicated across functions
- Error response structure differs (using detail vs error field)
Apply these changes to maintain consistency:
+const CHAT_TIMEOUT = 300000; // 5 minutes
+
-export const draft_with_ai = async (data: ChatDraftRequest) => {
+export const draft_with_ai = async (
+ data: ChatDraftRequest
+): Promise<ChatResponse['data']> => {
try {
const response = await PostRequest<ChatResponse>(
`${chatApiUrl}/draft`,
{ ...data },
{},
- 300000
+ CHAT_TIMEOUT
);
return response.data.data;
} catch (error) {
- if (axios.isAxiosError(error)) {
- if (error.response?.data) {
- throw new Error(error.response.data.detail);
- } else {
- throw new Error("Failed to generate draft with AI. Please try again.");
- }
- } else {
- throw new Error("Failed to generate draft with AI. Please try again.");
+ if (axios.isAxiosError(error) && error.response?.data?.error) {
+ return {
+ conversation_id: data.conversation_id,
+ response: error.response.data.error,
+ };
}
+ return {
+ conversation_id: data.conversation_id,
+ response: "Failed to generate draft with AI. Please try again.",
+ };
}
};
Consider extracting the error handling logic into a shared utility function since it's used across multiple functions. Would you like me to help create this utility?
Let's verify the error response structure in the backend:
backend/app/api/v1/chat.py (3)
Line 6: LGTM: Imports are well-organized and properly utilized.
The new imports support the draft functionality with proper exception handling and request validation.
Also applies to: 14-14, 18-18
Lines 250-255: User retrieval logic needs improvement.
The current implementation of getting the first user is not production-ready.
Lines 268-272: LGTM: Response structure is consistent with other endpoints.
The success response follows the established pattern with status, message, and data fields.
Summary by CodeRabbit

New Features
- New /draft endpoint for generating chat drafts with AI.

Bug Fixes
- …

Documentation
- …