Introduce OpenAI action #621
Conversation
💭 A similar idea could work with a Dangermattic plugin to auto-review PRs. Of course it would take some fine-tuning, and it would be nice to clearly separate this generated feedback from the rest of the comments, but perhaps we could get to something useful.
I'm not sure I follow your idea with Dangermattic auto-reviewing PRs in this context…
If that was about (1), I'd actually vote against it.
If that was (2), I'm not sure what type of comments you had in mind; but we might also want to be mindful of our OpenAI API usage and the billing that goes with it: having it used on every single run of Danger (run 1 or more times for each PR) might become a bit much. And we might also be mindful of what we send to OpenAI and its model (e.g. code and context from private repos…)
```ruby
nice_changelog = openai_generate(
  prompt: :release_notes, # Uses the pre-crafted prompt for App Store / Play Store release notes
  question: "Help me write release notes for the following items:\n#{items}",
  api_token: get_required_env('OPENAI_API_TOKEN')
)
```
Alternative naming ideas:
- Name the action `openai_prompt(context_prompt:, message:)` or something similar, to highlight that this is focused on text prompts (as opposed to "generating" as in Dall-E generating images or similar). The thing is, `prompt` here would be used as a verb, while it could be confusing for the name of the action if someone interpreted it as a noun instead… (is it supposed to return the prompt?!)
- Name the action `openai_ask(prompt:, question:)` instead of `openai_generate`
- Use `message:` instead of `question:`
…

I think I like `openai_ask(prompt:, question:)` after all, but open to brainstorming and bikeshedding 😅
Hmm, I'd also vote for `openai_ask(prompt:, question:)`! IMO it sounds more natural and better reflects the "conversation" aspect of the API.
Sold! Action renamed in c829b8d
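For reference, a quick sketch of how the example from the earlier review comment would presumably read after that rename, assuming only the action name changed and `prompt:`, `question:` and `api_token:` stayed as-is:

```ruby
# Sketch only: the call quoted above, updated for the rename from
# `openai_generate` to `openai_ask` (parameter names assumed unchanged).
nice_changelog = openai_ask(
  prompt: :release_notes, # pre-crafted prompt for App Store / Play Store release notes
  question: "Help me write release notes for the following items:\n#{items}",
  api_token: get_required_env('OPENAI_API_TOKEN')
)
```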
More like this -- sorry, I wasn't very specific and was just thinking out loud, but I wasn't thinking about implementation details (e.g. how often to run it to save on OpenAI API calls) and more about the concept (and perhaps a potential experiment) of having an LLM review a diff, PR description, title and so on and post back its comments.
```ruby
[
  FastlaneCore::ConfigItem.new(key: :prompt,
                               description: 'The internal top-level instructions to give to the model to tell it how to behave. ' \
                                            + "Use a Ruby Symbol from one of [#{available_prompt_symbols}] to use a predefined prompt instead of writing your own",
```
Giving the option of using predefined prompts as symbols is a neat solution 👍
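For illustration, the symbol-to-prompt lookup could boil down to something like the sketch below. The constant name, prompt wording and `resolve_prompt` helper are hypothetical, not necessarily how the action actually implements it:

```ruby
# Hypothetical sketch: map predefined Symbols to their prompt text, and fall back
# to the raw String when the caller passes a custom prompt.
PREDEFINED_PROMPTS = {
  release_notes: 'You are a release manager. Summarize the given changelog items as a single ' \
                 'paragraph of at most 650 characters, suitable for App Store / Play Store release notes.'
}.freeze

def self.resolve_prompt(prompt)
  prompt.is_a?(Symbol) ? PREDEFINED_PROMPTS.fetch(prompt) : prompt
end

def self.available_prompt_symbols
  PREDEFINED_PROMPTS.keys.map(&:inspect).join(', ')
end
```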
Looking great 👍
Just posted a couple of small non-blocking comments.
Co-authored-by: Ian Guedes Maia <[email protected]>
About the newly-introduced action in #621, for which I forgot to update the CHANGELOG after renaming the action
```diff
@@ -10,7 +10,7 @@ _None_

 ### New Features

-_None_
+- Introduce new `openai_generate` action to get responses to a prompt/question from OpenAI API. [#621]
```
FYI: I forgot to update the CHANGELOG before merging this PR, but I've fixed it directly in `trunk` afterwards, via 1a58ba0.
What does it do?
Introduce an ~~`openai_generate`~~ `openai_ask` action to allow some CI automation tasks, in particular generating release notes blurbs for new versions.

This came up as part of internal feedback from p91TBi-cBb-p2:
The main goal of this PR is thus to introduce an action that could take the content of the `CHANGELOG.md` / `RELEASE-NOTES.txt` file of one of our mobile apps' repos, call this action, and use the result to generate a PR with the response saved into the proper `changelogs/default.txt` file(s).

Status: Proof-of-Concept
TBH this was just a proof of concept I wanted to quickly test today, to answer the Release Managers' feedback on WCAndroid from their last rotations and validate whether this was easily possible. Now that I've seen it is possible, I'll consider creating a Project Thread at some point to start discussing it and planning on adopting it for apps that might be interested.
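To make the envisioned flow above a bit more concrete, here is a rough, hypothetical lane sketch; the lane name, file paths and the PR-creation step are illustrative only and not something shipped in this PR:

```ruby
# Hypothetical Fastfile lane sketching the envisioned flow; names and paths are illustrative.
lane :generate_release_notes_blurb do
  items = File.read('RELEASE-NOTES.txt')

  blurb = openai_ask(
    prompt: :release_notes,
    question: "Help me write release notes for the following items:\n#{items}",
    api_token: get_required_env('OPENAI_API_TOKEN')
  )

  # Save the response where the stores expect it, then commit and open a PR
  # (e.g. via plain git + the GitHub CLI); that part is left out of this sketch.
  File.write('changelogs/default.txt', blurb)
end
```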
For the case of WooCommerce, this will still require splitting the items starting with `[WEAR]` from the rest and making 2 different calls, to store the 2 separate blurbs in the two different `changelogs/default.txt` files, so there'll probably still be a bit of tweaking before we actually start to use that action. Probably similar for PocketCasts, which also has mobile + automotive + wear apps. For other mobile apps which only have a mobile variant, and for which there'd be no need to split the entries based on app variant, this should be as straightforward as the `self.example` provided in the action's code though 🙂

To Test
I've asked for access to the a8c org in OpenAI (ref: p1732898401785999-slack-C06CJ3U621X) and have generated an API token for us to experiment with this action (stored in SecretStore, `?secret_id=12591`).

You can thus experiment by pointing a repo to this branch of the `release-toolkit` (or even hacking this repo's `Fastfile` directly), then adding a call to this `openai_ask` method, using the `api_token:` found in the Secret Store, and validating that it gives a nice, single, <650-character paragraph as a response for prompts using `:release_notes`.

This is how the response stubs used in the unit tests have been generated, btw.
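As an aside, if you want to run the specs without hitting the real API, a stub along these lines should do. This uses WebMock inside a spec example or `before` block, and assumes the action talks to the standard chat completions endpoint; the real specs may be structured differently:

```ruby
# Hypothetical spec setup: stub the OpenAI HTTP call so the suite runs offline.
require 'webmock/rspec'

stub_request(:post, 'https://api.openai.com/v1/chat/completions')
  .to_return(
    status: 200,
    headers: { 'Content-Type' => 'application/json' },
    body: { choices: [{ message: { role: 'assistant', content: 'A single release notes blurb under 650 characters.' } }] }.to_json
  )
```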
Checklist before requesting a review
- Run `bundle exec rubocop` to test for code style violations and recommendations
- Add unit tests (`specs/*_spec.rb`) if applicable
- Run `bundle exec rspec` to run the whole test suite and ensure all your tests pass
- Update the `CHANGELOG.md` file to describe your changes under the appropriate existing `###` subsection of the existing `## Trunk` section.
- If applicable, add an entry in the `MIGRATION.md` file to describe how the changes will affect the migration from the previous major version and what the clients will need to change and consider.