
feat(chatDraft): rewrite chat draft with AI #49

Merged

12 commits merged into release/v0.4.0 from feat/draft_chat on Oct 30, 2024

Conversation

Collaborator

@ArslanSaleem ArslanSaleem commented Oct 30, 2024

Summary by CodeRabbit

  • New Features

    • Introduced a new AI drafting functionality via a dedicated /draft endpoint.
    • Enhanced the ChatDraftDrawer component for improved user interaction, including input handling and AI draft display.
    • Added a new interface for draft requests to streamline data handling.
  • Bug Fixes

    • Improved error handling for draft requests and AI responses.
  • Documentation

    • Updated documentation to reflect new features and usage instructions for the AI drafting process.

@ArslanSaleem ArslanSaleem requested a review from gventuri October 30, 2024 12:42
Contributor

coderabbitai bot commented Oct 30, 2024

Warning

Rate limit exceeded

@ArslanSaleem has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 18 minutes and 20 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Files that changed from the base of the PR and between 59b5815 and 55528ee.

Walkthrough

The changes introduce a new DraftRequest class in the backend to manage draft requests with required fields for content and prompt. A new endpoint /draft is added to process POST requests for AI-generated drafts. The frontend is updated with a ChatDraftDrawer component that manages user input and displays AI-generated drafts. Additionally, a ChatDraftRequest interface is created to define the structure of draft requests. The overall enhancements improve the interaction flow for generating and managing drafts within the chat application.
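
To make the walkthrough concrete, here is a rough sketch of the backend pieces it describes, assembled from the field names, routes, and response shape discussed in the review comments below. The import paths, the token lookup, and the exact error handling are assumptions, not a copy of the merged code.

from fastapi import APIRouter, Depends, HTTPException
from pydantic import BaseModel, Field
from sqlalchemy.orm import Session

from app.database import get_db                 # assumed import path
from app.requests import request_draft_with_ai  # helper added in this PR

chat_router = APIRouter()

class DraftRequest(BaseModel):
    content: str = Field(..., min_length=1, description="Content cannot be empty")
    prompt: str = Field(..., min_length=1, description="Prompt cannot be empty")

@chat_router.post("/draft", status_code=200)
def draft_with_ai(draft_request: DraftRequest, db: Session = Depends(get_db)):
    """Forward the draft request to the AI service and return the generated draft."""
    api_token = ""  # placeholder: the merged code resolves this from a user record (see review notes below)
    try:
        # The review notes the merged code serializes with model_dump_json(); a plain dict is
        # used here to match the helper's (api_token: str, draft_request: dict) signature.
        draft = request_draft_with_ai(api_token, draft_request.model_dump())
    except Exception as exc:
        raise HTTPException(status_code=400, detail=str(exc))
    return {"status": "success", "message": "Draft generated successfully", "data": draft}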

Changes

File | Change Summary
--- | ---
backend/app/api/v1/chat.py | Added DraftRequest class with content and prompt fields; introduced draft_with_ai method; added /draft endpoint for handling draft requests.
backend/app/requests/__init__.py | Added request_draft_with_ai function to send draft requests to the AI service, with error handling.
frontend/src/components/ChatDraftDrawer.tsx | Introduced state variables for managing the draft process; implemented event handlers for user input; modified rendering logic for draft display.
frontend/src/interfaces/chat.ts | Added ChatDraftRequest interface with content and prompt properties.
frontend/src/services/chat.tsx | Added draft_with_ai function for sending draft requests; imported the ChatDraftRequest type.

Suggested reviewers

  • gventuri

Poem

In the burrow deep, where ideas sprout,
A draft was born, of that there's no doubt.
With prompts and content, it hops to the light,
AI whispers softly, crafting words just right.
So gather 'round, dear friends, let's cheer,
For drafts that delight, and bring us near! 🐇✨


@ArslanSaleem changed the title from "Feat/draft chat" to "feat(chatDraft): rewrite chat draft with AI" on Oct 30, 2024

codecov-commenter commented Oct 30, 2024

Codecov Report

Attention: Patch coverage is 33.33333% with 16 lines in your changes missing coverage. Please review.

Please upload report for BASE (release/v0.4.0@e3739bd). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
--- | --- | ---
backend/app/api/v1/chat.py | 33.33% | 16 Missing ⚠️
Additional details and impacted files
@@                Coverage Diff                @@
##             release/v0.4.0      #49   +/-   ##
=================================================
  Coverage                  ?   59.81%           
=================================================
  Files                     ?       37           
  Lines                     ?     1772           
  Branches                  ?        0           
=================================================
  Hits                      ?     1060           
  Misses                    ?      712           
  Partials                  ?        0           

☔ View full report in Codecov by Sentry.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

🧹 Outside diff range and nitpick comments (4)
frontend/src/interfaces/chat.ts (1)

31-34: Consider adding JSDoc comments for better documentation.

Adding documentation would improve maintainability by clarifying the purpose of each field.

+/**
+ * Interface for requesting AI-generated chat drafts
+ */
 export interface ChatDraftRequest {
+  /** The content to be used as context for draft generation */
   content: string;
+  /** Instructions for how the AI should generate the draft */
   prompt: string;
 }
backend/app/requests/__init__.py (1)

225-246: Add type hints and documentation.

The function would benefit from:

  1. A more specific type hint for draft_request instead of generic dict
  2. A docstring explaining the purpose, parameters, and return value

Consider applying these improvements:

-def request_draft_with_ai(api_token: str, draft_request: dict) -> dict:
+from typing import TypedDict
+
+class DraftRequest(TypedDict):
+    content: str
+    prompt: str
+
+def request_draft_with_ai(api_token: str, draft_request: DraftRequest) -> dict:
+    """Send a draft request to the AI service.
+
+    Args:
+        api_token (str): The authentication token for the API
+        draft_request (DraftRequest): The draft request containing content and prompt
+
+    Returns:
+        dict: The AI-generated draft response
+
+    Raises:
+        CreditLimitExceededException: If the user has exceeded their credit limit
+        Exception: If the request fails or returns invalid JSON
+    """
backend/app/api/v1/chat.py (2)

27-29: Consider adding maximum length validation.

While the minimum length validation is good, consider adding maximum length constraints to prevent potential abuse or performance issues with extremely large inputs.

 class DraftRequest(BaseModel):
-    content: str = Field(..., min_length=1, description="Content cannot be empty")
-    prompt: str = Field(..., min_length=1, description="Prompt cannot be empty")
+    content: str = Field(..., min_length=1, max_length=10000, description="Content cannot be empty")
+    prompt: str = Field(..., min_length=1, max_length=1000, description="Prompt cannot be empty")

245-275: Add rate limiting and authentication middleware.

The /draft endpoint should implement:

  1. Rate limiting to prevent abuse
  2. Authentication middleware to protect the endpoint
  3. Request size limits

This is especially important since it's making external API calls that may have usage costs.

Consider using FastAPI's built-in security dependencies:

from fastapi import Depends, Security
from fastapi.security import OAuth2PasswordBearer
from app.middleware import rate_limit

@chat_router.post("/draft", status_code=200)
@rate_limit(max_requests=10, window_seconds=60)
def draft_with_ai(
    draft_request: DraftRequest,
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user)
):
    # ... rest of the implementation
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between e3739bd and a314ef0.

📒 Files selected for processing (5)
  • backend/app/api/v1/chat.py (3 hunks)
  • backend/app/requests/__init__.py (1 hunks)
  • frontend/src/components/ChatDraftDrawer.tsx (3 hunks)
  • frontend/src/interfaces/chat.ts (1 hunks)
  • frontend/src/services/chat.tsx (2 hunks)
🧰 Additional context used
🪛 Biome
frontend/src/services/chat.tsx

[error] 66-66: The catch clause that only rethrows the original error is useless.

An unnecessary catch clause can be confusing.
Unsafe fix: Remove the try/catch clause.

(lint/complexity/noUselessCatch)

🔇 Additional comments (7)
frontend/src/interfaces/chat.ts (1)

31-34: LGTM! Interface design is clean and focused.

The ChatDraftRequest interface follows the existing naming conventions and properly defines the required fields for AI draft generation.

frontend/src/services/chat.tsx (1)

4-4: LGTM: Import addition follows existing patterns.

The ChatDraftRequest import is appropriately placed with related chat interfaces.

frontend/src/components/ChatDraftDrawer.tsx (2)

2-2: LGTM: Dependencies are well-organized.

The imports are appropriate for the component's functionality, including necessary UI components, icons, and services.

Also applies to: 6-9


86-86: Verify draft_with_ai service integration.

Let's verify the integration with the draft service to ensure proper error handling and response types.

✅ Verification successful

Integration with draft_with_ai service is properly implemented

The service integration is well-structured with:

  • Proper error handling in both frontend and backend
  • Type-safe request/response with ChatDraftRequest interface
  • Complete request flow from UI → Frontend service → Backend API → External service
  • Error feedback to users via toast notifications

Key implementation points:

  • Frontend: frontend/src/services/chat.tsx implements the service call
  • Backend: backend/app/api/v1/chat.py handles the API endpoint with proper error handling
  • Types: frontend/src/interfaces/chat.ts defines the request interface
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for draft_with_ai service implementation and usage
echo "Searching for draft_with_ai service implementation..."
rg -t typescript "draft_with_ai" --no-filename

# Search for related interfaces
echo "Searching for related interfaces..."
rg -t typescript "interface.*ChatDraft" --no-filename

Length of output: 370


Script:

#!/bin/bash
# Search for draft_with_ai service implementation and usage without file type restriction
echo "Searching for draft_with_ai service implementation..."
rg "draft_with_ai" -A 5 -B 5

# Search for related interfaces without file type restriction
echo "Searching for related interfaces..."
rg "interface.*ChatDraft"

# Search for files containing draft-related functionality
echo "Searching for draft-related files..."
fd "draft" --type f

Length of output: 5690

backend/app/requests/__init__.py (1)

225-246: Verify server endpoint compatibility.

The function interacts with a new /v1/draft endpoint. Let's verify its implementation and usage.

✅ Verification successful

Server endpoint integration appears consistent and properly implemented

The verification shows:

  • The function is correctly used in backend/app/api/v1/chat.py
  • The endpoint /v1/draft follows the same pattern as other existing pandaetl endpoints (/v1/parse, /v1/extract, /v1/chat, etc.)
  • Request validation is properly implemented through the DraftRequest model with required fields
  • Error handling follows the established pattern used across other endpoint calls
  • The server URL is properly configured in settings
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check server endpoint implementation and function usage

# Test 1: Find the server endpoint implementation
echo "Searching for draft endpoint implementation..."
rg -l "draft.*router\.(post|get)\(.*'/v1/draft'" 

# Test 2: Find all usages of the new function
echo "Searching for function usage..."
rg "request_draft_with_ai"

# Test 3: Check for any existing draft-related tests
echo "Searching for related tests..."
fd -g "*test*.py" --exec rg -l "draft"

Length of output: 674


Script:

#!/bin/bash
# Let's check the implementation details and configuration

# Check the settings for pandaetl_server_url
echo "Checking server URL configuration..."
rg "pandaetl_server_url" -A 2

# Check the draft request structure and validation
echo "Checking draft request structure..."
rg -A 10 "class.*Draft.*Request"

# Check error handling patterns
echo "Looking for similar endpoint calls..."
rg "requests\.(post|get)\(.*settings\.pandaetl_server_url"

Length of output: 2685
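
Based on the patterns these checks confirm (POST calls to settings.pandaetl_server_url, a /v1/draft route, the shared error-handling style), the helper's likely shape is sketched below. The import paths, header name, timeout, and status-code handling are assumptions, not the merged implementation.

import requests

from app.config import settings  # assumed import path
from app.exceptions import CreditLimitExceededException  # assumed import path

def request_draft_with_ai(api_token: str, draft_request: dict) -> dict:
    # Same call pattern as the other pandaetl endpoints (/v1/parse, /v1/extract, /v1/chat, ...).
    response = requests.post(
        f"{settings.pandaetl_server_url}/v1/draft",
        json=draft_request,
        headers={"Authorization": f"Bearer {api_token}"},  # header name is an assumption
        timeout=360,  # timeout value is an assumption
    )

    if response.status_code == 402:  # credit-limit status code is an assumption
        raise CreditLimitExceededException("Credit limit exceeded")
    if response.status_code != 200:
        raise Exception("Failed to generate draft with AI")

    try:
        return response.json()
    except ValueError:
        raise Exception("Invalid JSON response from the AI service")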

backend/app/api/v1/chat.py (2)

13-13: LGTM: Import changes are appropriate.

The new imports are necessary for the added functionality and follow the project's import style.

Also applies to: 17-17


259-259: Validate serialized request data.

The model_dump_json() output should be validated before being passed to the external service to ensure data integrity.
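
One hedged way to act on this (a sketch only; the merged code may differ) is to serialize with model_dump() and round-trip the payload through the model before handing it to request_draft_with_ai. This fragment assumes it runs inside the /draft handler, with draft_request and api_token already in scope.

from pydantic import ValidationError

payload = draft_request.model_dump()

# Round-trip through the model so the serialized payload is checked against the
# DraftRequest constraints before it leaves the service boundary.
try:
    DraftRequest.model_validate(payload)
except ValidationError as exc:
    raise HTTPException(status_code=400, detail=f"Invalid draft request: {exc}")

draft = request_draft_with_ai(api_token, payload)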

Review threads (resolved):

  • frontend/src/services/chat.tsx (outdated)
  • frontend/src/components/ChatDraftDrawer.tsx
  • frontend/src/components/ChatDraftDrawer.tsx
  • frontend/src/components/ChatDraftDrawer.tsx (outdated)
  • backend/app/requests/__init__.py
  • backend/app/requests/__init__.py (outdated)
  • backend/app/api/v1/chat.py (outdated)
  • backend/app/api/v1/chat.py
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
backend/app/api/v1/chat.py (1)

28-31: Add max_length validation to prevent potential DoS attacks.

While the min_length validation is good, consider adding max_length validation to both fields to prevent excessive input sizes.

 class DraftRequest(BaseModel):
-    content: str = Field(..., min_length=1, description="Content cannot be empty")
-    prompt: str = Field(..., min_length=1, description="Prompt cannot be empty")
+    content: str = Field(..., min_length=1, max_length=10000, description="Content cannot be empty")
+    prompt: str = Field(..., min_length=1, max_length=1000, description="Prompt cannot be empty")
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between a314ef0 and 59b5815.

📒 Files selected for processing (3)
  • backend/app/api/v1/chat.py (3 hunks)
  • backend/app/requests/__init__.py (1 hunks)
  • frontend/src/services/chat.tsx (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/requests/__init__.py
🔇 Additional comments (5)
frontend/src/services/chat.tsx (2)

4-4: LGTM!

The import is correctly placed with related chat interface types.


56-76: 🛠️ Refactor suggestion

⚠️ Potential issue

Align error handling with existing patterns and improve type safety.

The function has several inconsistencies with the existing codebase:

  1. Error handling differs from chat and chatStatus functions which return error messages instead of throwing
  2. Return type annotation is missing
  3. Timeout value (300000) is duplicated across functions
  4. Error response structure differs (using detail vs error field)

Apply these changes to maintain consistency:

+const CHAT_TIMEOUT = 300000; // 5 minutes
+
-export const draft_with_ai = async (data: ChatDraftRequest) => {
+export const draft_with_ai = async (
+  data: ChatDraftRequest
+): Promise<ChatResponse['data']> => {
   try {
     const response = await PostRequest<ChatResponse>(
       `${chatApiUrl}/draft`,
       { ...data },
       {},
-      300000
+      CHAT_TIMEOUT
     );
     return response.data.data;
   } catch (error) {
-    if (axios.isAxiosError(error)) {
-      if (error.response?.data) {
-        throw new Error(error.response.data.detail);
-      } else {
-        throw new Error("Failed to generate draft with AI. Please try again.");
-      }
-    } else {
-      throw new Error("Failed to generate draft with AI. Please try again.");
+    if (axios.isAxiosError(error) && error.response?.data?.error) {
+      return {
+        conversation_id: data.conversation_id,
+        response: error.response.data.error,
+      };
     }
+    return {
+      conversation_id: data.conversation_id,
+      response: "Failed to generate draft with AI. Please try again.",
+    };
   }
 };

Consider extracting the error handling logic into a shared utility function since it's used across multiple functions. Would you like me to help create this utility?

Let's verify the error response structure in the backend:

backend/app/api/v1/chat.py (3)

6-6: LGTM: Imports are well-organized and properly utilized.

The new imports support the draft functionality with proper exception handling and request validation.

Also applies to: 14-14, 18-18


250-255: User retrieval logic needs improvement.

The current implementation of getting the first user is not production-ready.
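
For illustration only: the flagged pattern presumably resembles the first snippet below, and a per-request alternative along the lines suggested earlier in this review resembles the second. Both are assumptions about code not shown in this thread; User, get_current_user, and the api_token attribute are hypothetical names.

# Shape of the pattern being flagged: whichever user happens to come back first.
user = db.query(User).first()
api_token = user.api_token  # attribute name is an assumption

# Alternative: resolve the caller from the request via a dependency.
@chat_router.post("/draft", status_code=200)
def draft_with_ai(
    draft_request: DraftRequest,
    db: Session = Depends(get_db),
    current_user: User = Depends(get_current_user),  # hypothetical auth dependency
):
    api_token = current_user.api_token
    ...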


268-272: LGTM: Response structure is consistent with other endpoints.

The success response follows the established pattern with status, message, and data fields.

Review threads (resolved):

  • backend/app/api/v1/chat.py
  • backend/app/api/v1/chat.py (outdated)
@gventuri gventuri merged commit ae8dd96 into release/v0.4.0 Oct 30, 2024
5 checks passed
@gventuri gventuri deleted the feat/draft_chat branch October 30, 2024 17:14