
[ENH] LLM integration to make the conversation feel more natural #193

Open
AlexanderNeumann opened this issue Jun 20, 2023 · 2 comments
Assignees
Labels
enhancement New feature or request

Comments

@AlexanderNeumann (Member)

  1. Motivation - We could send the current conversation path, together with the answer provided by the bot model, to the LLM to generate a personalized response for the user. Every user could have a personalized prompt.
  2. Specification - Store the conversation path (currently we only store the state in the path, right?). Then we need to talk to the LLM (calling the API and setting an OpenAI token). If a token and a prompt are provided, the bot should use them and replace the current response with the generated one.
  3. Finalised state - The bot can use the OpenAI API and talk to an LLM to enhance the conversation.
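The flow in the specification above could be sketched roughly as follows. All names here (`PersonalizationConfig`, `personalize_response`, `call_llm`) are hypothetical and not part of the actual bot manager service; the point is only the fallback logic: personalize when a token and prompt exist, otherwise keep the templated bot response.

```python
# Hypothetical sketch: replace the bot's templated response with an
# LLM-generated one only when a per-user token and prompt are configured.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class PersonalizationConfig:
    openai_token: Optional[str] = None   # per-user OpenAI API token
    prompt: Optional[str] = None         # per-user personalized prompt


def personalize_response(
    config: PersonalizationConfig,
    conversation_path: List[str],
    bot_response: str,
    call_llm: Callable[[str, List[dict]], str],
) -> str:
    """Return an LLM-personalized response, or the original bot response
    if no token/prompt is configured for this user."""
    if not (config.openai_token and config.prompt):
        return bot_response  # fall back to the templated response
    messages = [{"role": "system", "content": config.prompt}]
    messages += [{"role": "user", "content": turn} for turn in conversation_path]
    # hand the bot model's draft answer to the LLM as context to rewrite
    messages.append({"role": "assistant", "content": bot_response})
    return call_llm(config.openai_token, messages)
```

Here `call_llm` stands in for the actual OpenAI API call, so the surrounding logic can be exercised without network access.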
@AlexanderNeumann AlexanderNeumann added the enhancement New feature or request label Jun 20, 2023
@samuelvkwong samuelvkwong self-assigned this Jul 14, 2023
@samuelvkwong (Contributor)

[sample Bot Model image]

Right now, an OpenAI request is specified in the bot model as a Bot Action. For Incoming Messages that would return a simple text response, enhancing the response would only require the Incoming Message to use a Bot Action with OpenAI parameters. In the backend manager service, we would hold back the bot's response and pass it as an "example assistant" message, along with the current conversation path, to OpenAI. We would then trigger the chat with the response generated by OpenAI, essentially replacing the bot's response.

The issue with the above approach arises when, in the bot model, an Incoming Message is already using another service in a Bot Action to generate the response: an Incoming Message seems to be meant to have only one trigger function/Bot Action. Another issue is the redundant "uses" edges if we want to apply this enhancement to every Incoming Message, as shown in the sample Bot Model image above.
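The payload construction described above could look roughly like this. The function name and the `(role, text)` shape of the conversation path are assumptions; the `"example_assistant"` name convention for held-back example messages is the one used in OpenAI's chat examples.

```python
# Sketch: build the OpenAI chat messages list from the conversation path,
# appending the bot model's held-back response as an "example assistant"
# message for the model to rephrase. Names here are illustrative only.
from typing import List, Tuple


def build_openai_messages(
    system_prompt: str,
    conversation_path: List[Tuple[str, str]],  # (role, text) pairs
    bot_draft: str,
) -> List[dict]:
    messages = [{"role": "system", "content": system_prompt}]
    for role, text in conversation_path:
        messages.append({"role": role, "content": text})
    # the bot's original response is passed as an example, not sent to the user
    messages.append(
        {"role": "system", "name": "example_assistant", "content": bot_draft}
    )
    return messages
```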

@samuelvkwong (Contributor) commented Jul 18, 2023

Perhaps for this particular enhancement, we could add a toggle field to the Messenger class that indicates whether or not all Incoming Messages should be personalized with OpenAI. There would then be no need to create a Bot Action node in the Bot Model and connect Incoming Messages to it.
Or a toggle field for individual Incoming Messages?

We would still be able to make OpenAI requests using the Bot Action in other enhancements/use cases. And if we only want to enhance a single Incoming Message, that could still be done.
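The toggle idea could be sketched minimally like this. The class shape, field names, and the per-message override are all hypothetical; the actual Messenger class in the bot manager service looks different.

```python
# Minimal sketch of a Messenger-wide personalization toggle with an
# optional per-Incoming-Message override. All names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class Messenger:
    # global toggle: personalize every Incoming Message with OpenAI
    personalize_all: bool = False
    # per-Incoming-Message overrides (intent name -> toggle)
    overrides: Dict[str, bool] = field(default_factory=dict)

    def should_personalize(self, intent: str) -> bool:
        # a per-message toggle wins over the Messenger-wide default
        return self.overrides.get(intent, self.personalize_all)
```

This would keep both options from the comment above open: flip `personalize_all` to enhance everything, or set a single override to enhance just one Incoming Message.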

Projects
None yet
Development

No branches or pull requests

2 participants