AI: Check for Duplicate Specs #608
Conversation
✅ Deploy Preview for ubiquibot-staging ready!
Great reviews, team!
@ByteBallet please be sure to "Resolve conversation" for each after you implement the requested changes, and have all of your questions answered.
Yes, sure.
I was thinking about the rate limit problem. What if we "preserve state" by passing the progress into the database? For example:

```json
{
  "checking": { "url": "ubiquity/ubiquibot/520", "startTime": "1234123412" },
  "queue": [
    { "pending": false, "similarity": 0.7, "issueUrl": "ubiquity/ubiquibot/510" },
    { "pending": true, "issueUrl": "ubiquity/ubiquibot/511" },
    { "pending": true, "issueUrl": "ubiquity/ubiquibot/512" }
  ]
}
```

If we hit rate limits, this job can continue the next time an issue is posted. I want to experiment with the current implementation to see how obtrusive the rate limits are. If they are a problem, I would be more motivated to include these capabilities to deal with rate limits. (Sorry for the sloppy example; I wrote this on my phone.)
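The idea above could be sketched roughly as follows. This is only a minimal sketch of the proposed resume-from-state behavior, not the bot's actual implementation; `resumeDuplicateCheck`, `checkSimilarity`, and `saveState` are hypothetical names:

```typescript
// Hypothetical sketch: resume the duplicate-check queue from persisted state.
// The state shape mirrors the JSON example above.

interface QueueItem {
  pending: boolean;
  similarity?: number;
  issueUrl: string;
}

interface CheckState {
  checking: { url: string; startTime: string };
  queue: QueueItem[];
}

async function resumeDuplicateCheck(
  state: CheckState,
  checkSimilarity: (a: string, b: string) => Promise<number>,
  saveState: (s: CheckState) => Promise<void>
): Promise<void> {
  for (const item of state.queue) {
    if (!item.pending) continue; // already scored on a previous run
    try {
      item.similarity = await checkSimilarity(state.checking.url, item.issueUrl);
      item.pending = false;
    } catch {
      // Rate limited: persist progress and stop.
      // The next issue event picks up where we left off.
      await saveState(state);
      return;
    }
  }
  await saveState(state);
}
```

Each run only processes items still marked `pending`, so a rate-limited pass loses no work.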
But the problem is that we can also hit the rate limit while checking the next issue. If this process repeats, I think we will have a lot of delay.
In the original specification I explicitly stated that slow is fine. Even if it takes a day, it's still immensely helpful for repositories with hundreds of open issues and community contributors opening new ones.
https://github.com/ayaka14732/ChatGPTAPIFree |
Exponential backoff is pretty standard. We can use it.
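A standard exponential backoff can be sketched like this. This is a generic illustration, not the bot's implementation; `withBackoff` and its parameters are made up for the example:

```typescript
// Minimal exponential-backoff sketch: retry `fn` up to `maxRetries` times,
// doubling the wait between attempts and adding a little random jitter.

async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err; // give up after the last retry
      const delay = baseDelayMs * 2 ** attempt + Math.random() * 100; // jitter
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

A rate-limited API call would simply be wrapped: `await withBackoff(() => callOpenAi(prompt))`.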
ubiquibot/staging#141
I am not sure if it is necessary to increase the time limit beyond 10 seconds, as it seems to be working fine...
I tested it at 93818b4, which was the latest commit at the time. I think the point of concern in the QA was that you are always expecting a number from the AI, and if it responds with some text, you are parsing any number from the answer as the similarity.
Since we don't expect 100% correctness from the AI, I think that approach is reasonable. The question is just whether we care more about the similarity percentage or the bot's answer rate. In my case, I never saw the bot answer with a number that was irrelevant to the similarity. I made some changes (reverting the string literal) after 93818b4, and you will see no problem there either. ^^
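The parsing concern above could be made safer with a small guard. This is a hypothetical sketch (the function name and bounds check are made up, not the bot's actual code): pull the first number from a free-text reply, but reject values that cannot be a percentage rather than treating any number as a similarity.

```typescript
// Hypothetical sketch: extract a similarity percentage from a free-text
// model reply. Returns null when no plausible percentage is found,
// instead of treating any stray number (e.g. an issue id) as a score.

function parseSimilarity(reply: string): number | null {
  const match = reply.match(/\d+(\.\d+)?/); // first number in the reply
  if (!match) return null;
  const value = parseFloat(match[0]);
  // Reject numbers outside 0..100, which cannot be a percentage.
  return value >= 0 && value <= 100 ? value : null;
}
```

Returning `null` lets the caller retry or skip instead of storing a bogus similarity.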
You shouldn't have undone the string literal. Also, consider the prompt I wrote, because I got really good results for the keywords in my limited testing: #488 (comment)
I couldn't use a string literal because I load the prompt from the .env file. At first, I used exactly the same prompt as yours, but GPT did not give me satisfactory results. It always gave me words from the prompt itself, not from the content (e.g. it always returned "important" as an important word).
If we want, we can just change the prompts via the env variable.
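Reading the prompt from the environment with a fallback could look like the sketch below. `SIMILARITY_PROMPT`, the default text, and the function name are all hypothetical, chosen only to illustrate the env-variable approach discussed above:

```typescript
// Hypothetical sketch: load the prompt from an env variable, falling back
// to a hardcoded default when it is unset or blank.

const DEFAULT_PROMPT =
  "Rate the similarity of the two issue specifications as a number from 0 to 100.";

function getSimilarityPrompt(env: Record<string, string | undefined>): string {
  const prompt = env.SIMILARITY_PROMPT?.trim();
  return prompt && prompt.length > 0 ? prompt : DEFAULT_PROMPT;
}
```

Callers would pass `process.env`, so the prompt stays editable per deployment without a code change.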
What do you think about it? @pavlovcik |
As I understand it, we currently have two options for QA on previews.
Or a third option: do local testing like @web4er, and trust me bro, it will run fine once we figure out a solution.
Or just try again on https://github.com/ubiquibot/staging/ |
I gave you the card info to set it up; it's fine for now.
I guess it is not needed for QA. |
Resolves #488
QA for kamaalsultan/santa-bringyouwishes#39