
fix: enforce valid JSON response in interior design assistant #126

Open · wants to merge 3 commits into base: main

Conversation

@Anirudh2112 commented Nov 22, 2024

Description:

Problem:
The interior_design_assistant app expects the model to return valid JSON. However, in some cases the model's output was invalid JSON or was missing expected fields, causing runtime errors and breaking the app.

Solution:

Implemented response_format in the turn.create calls for list_items and suggest_alternatives to enforce structured JSON outputs.

Key Changes:
Added JSON schema in response_format for both methods:

  • list_items: Ensures output includes description (string) and items (list of strings).
  • suggest_alternatives: Ensures output is an array of objects with description (string).
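The schemas described above might look roughly like the sketch below. The exact `response_format` payload shape and client call are assumptions based on a Llama Stack-style API and are shown only as comments; the schema dicts themselves follow standard JSON Schema.

```python
import json

# Hypothetical JSON Schema for list_items: a description plus a list of item names.
LIST_ITEMS_SCHEMA = {
    "type": "object",
    "properties": {
        "description": {"type": "string"},
        "items": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["description", "items"],
}

# Hypothetical JSON Schema for suggest_alternatives: an array of objects,
# each carrying a description string.
SUGGEST_ALTERNATIVES_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {"description": {"type": "string"}},
        "required": ["description"],
    },
}

# Sketch of how such a schema would be passed to turn.create
# (parameter shape is an assumption, not confirmed by this PR):
# response = client.agents.turn.create(
#     ...,
#     response_format={"type": "json_schema", "json_schema": LIST_ITEMS_SCHEMA},
# )

# With the schema enforced at generation time, the output parses cleanly:
raw = '{"description": "A cozy living room", "items": ["sofa", "rug"]}'
parsed = json.loads(raw)
```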

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Meta Open Source bot. label Nov 22, 2024
@@ -10,7 +10,7 @@
import uuid
from pathlib import Path
from typing import List

from examples.interior_design_assistant.utils import enforce_response_format
@yanxi0830 (Contributor) Nov 22, 2024
broken import here?

@yanxi0830 (Contributor) left a comment

You may have forgotten to push the changes related to enforce_response_format?

@ashwinb (Contributor) commented Nov 22, 2024

This is not what the issue really is referring to. You need to use the response_format argument for chat_completion() to ensure that the underlying LLM is generating the response corresponding to the grammar you provided. Simply doing json.loads() and checking against your desired format is NOT sufficient.
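The distinction @ashwinb draws can be sketched as follows: a post-hoc `json.loads()` check can only detect a malformed response after the fact, whereas passing `response_format` to the inference call constrains decoding so the model cannot emit invalid output in the first place. The `chat_completion` parameter shape in the comment is an assumption about the Llama Stack client API.

```python
import json

def is_valid_response(text: str) -> bool:
    """Post-hoc validation: it can detect a bad response, but not prevent one."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and "description" in data

# A free-form completion can still fail this check at runtime,
# which is exactly the bug this PR is trying to fix:
assert not is_valid_response("Sure! Here are some items: sofa, rug")

# Constrained decoding (hypothetical parameter shape) instead makes
# invalid output unrepresentable at generation time:
# client.inference.chat_completion(
#     messages=...,
#     response_format={"type": "json_schema", "json_schema": {...}},
# )
```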

@Anirudh2112 (Author)

Thank you for the clarification! I’ll update the implementation accordingly and push the changes soon.

@Anirudh2112 (Author)

I’ve pushed the latest changes to implement response_format to enforce structured JSON outputs. This addresses the original issue by ensuring the LLM generates valid JSON responses directly. Let me know if there’s anything else to address!

@Anirudh2112 (Author)

Hi @yanxi0830, can you please go through the changes I've made and let me know if everything works as intended? Thank you.

@ashwinb ashwinb closed this Dec 13, 2024
@ashwinb ashwinb reopened this Dec 13, 2024
@Anirudh2112 (Author)

Hi @ashwinb @yanxi0830, I've addressed all the review comments on my PRs. Just wanted to check if there are any other issues I should address or if there's anything else needed from my side to help move these forward. Thanks!

4 participants