
Introduce OpenAI action #621

Merged — 8 commits, merged Dec 4, 2024
Changes from 6 commits
8 changes: 4 additions & 4 deletions CHANGELOG.md
@@ -10,7 +10,7 @@ _None_

### New Features

_None_
- Introduce new `openai_generate` action to get responses to a prompt/question from OpenAI API. [#621]
Contributor Author:
FYI: I forgot to update the CHANGELOG before merging this PR, but I've fixed it directly in trunk afterwards, via 1a58ba0


### Bug Fixes

@@ -24,7 +24,7 @@ _None_

### Bug Fixes

- `DateVersionCalculator`: move next year calculation decision to the clients [#619]
- `DateVersionCalculator`: move next year calculation decision to the clients. [#619]

### Internal Changes

@@ -34,8 +34,8 @@ _None_

### Bug Fixes

- Fix `check_fonts_installed` step in `create_promo_screenshots` [#615]
- Fix broken `draw_text_to_canvas` method for `create_promo_screenshots` [#614]
- Fix `check_fonts_installed` step in `create_promo_screenshots`. [#615]
- Fix broken `draw_text_to_canvas` method for `create_promo_screenshots`. [#614]

## 12.3.2

@@ -0,0 +1,141 @@
require 'fastlane/action'
require 'net/http'
require 'json'

module Fastlane
module Actions
class OpenaiGenerateAction < Action
OPENAI_API_ENDPOINT = URI('https://api.openai.com/v1/chat/completions').freeze

PREDEFINED_PROMPTS = {
release_notes: <<~PROMPT.freeze
Act like a mobile app marketer who wants to prepare release notes for Google Play and App Store.
Do not write it point by point and keep it under 350 characters. It should be a unique paragraph.

When provided a list, use the number of any potential "*" in brackets at the start of each item as indicator of importance.
Ignore items starting with "[Internal]", and ignore links to GitHub.
PROMPT
}.freeze

def self.run(params)
api_token = params[:api_token]
prompt = params[:prompt]
prompt = PREDEFINED_PROMPTS[prompt] if PREDEFINED_PROMPTS.key?(prompt)
question = params[:question]

headers = {
'Content-Type': 'application/json',
Authorization: "Bearer #{api_token}"
}
body = request_body(prompt: prompt, question: question)

response = Net::HTTP.post(OPENAI_API_ENDPOINT, body, headers)

case response
when Net::HTTPOK
json = JSON.parse(response.body)
json['choices']&.first&.dig('message', 'content')
else
UI.user_error!("Error in OpenAI API response: #{response}. #{response.body}")
end
end

def self.request_body(prompt:, question:)
{
model: 'gpt-4o',
response_format: { type: 'text' },
temperature: 1,
max_tokens: 2048,
top_p: 1,
messages: [
format_message(role: 'system', text: prompt),
format_message(role: 'user', text: question),
].compact
}.to_json
end

def self.format_message(role:, text:)
return nil if text.nil? || text.empty?

{
role: role,
content: [{ type: 'text', text: text }]
}
end

#####################################################
# @!group Documentation
#####################################################

def self.description
'Use the OpenAI API to generate a response to a prompt'
end

def self.authors
['Automattic']
end

def self.return_value
'The response text from the prompt as returned by OpenAI API'
end

def self.details
<<~DETAILS
Uses the OpenAI API to generate a response to a prompt.
Can be used e.g. to generate Release Notes based on a bullet-point technical changelog or similar.
DETAILS
end

def self.examples
[
<<~EXEMPLE,
items = extract_release_notes_for_version(version: app_version, release_notes_file_path: 'RELEASE-NOTES.txt')
nice_changelog = openai_generate(
prompt: :release_notes, # Uses the pre-crafted prompt for App Store / Play Store release notes
question: "Help me write release notes for the following items:\n#{items}",
api_token: get_required_env('OPENAI_API_TOKEN')
)
Contributor Author:
Alternative naming idea:

- Name the action `openai_prompt(context_prompt:, message:)` or something similar, to highlight that this is focused on text prompts (as opposed to "generating" as in DALL·E generating images or similar). The catch is that `prompt` here would be used as a verb, which could be confusing if someone interpreted the action name as a noun instead (is it supposed to return the prompt?!).
- Name the action `openai_ask(prompt:, question:)` instead of `openai_generate`.
- Use `message:` instead of `question:`.

I think I like `openai_ask(prompt:, question:)` after all, but open to brainstorming and bikeshedding 😅

Contributor:
Hmm, I'd also vote for `openai_ask(prompt:, question:)`! IMO it sounds more natural and better reflects the "conversation" aspect of the API.

Contributor Author:
Sold! Action renamed in c829b8d
File.write(File.join('fastlane', 'metadata', 'android', 'en-US', 'changelogs', 'default.txt'), nice_changelog)
EXEMPLE
]
end

def self.available_prompt_symbols
PREDEFINED_PROMPTS.keys.map { |v| "`:#{v}`" }.join(',')
end

def self.available_options
[
FastlaneCore::ConfigItem.new(key: :prompt,
description: 'The internal top-level instructions to give to the model to tell it how to behave. ' \
+ "Use a Ruby Symbol from one of [#{available_prompt_symbols}] to use a predefined prompt instead of writing your own",
Contributor: Giving the option of using predefined prompts as symbols is a neat solution 👍

optional: true,
default_value: nil,
type: String,
skip_type_validation: true,
verify_block: proc do |value|
next if value.is_a?(String)
next if PREDEFINED_PROMPTS.include?(value)

UI.user_error!("Parameter `prompt` can only be a String or one of the following Symbols: [#{available_prompt_symbols}]")
end),
FastlaneCore::ConfigItem.new(key: :question,
description: 'The user message to ask the question to the OpenAI model',
optional: false,
default_value: nil,
type: String),
FastlaneCore::ConfigItem.new(key: :api_token,
description: 'The OpenAI API Token to use for the request',
env_name: 'OPENAI_API_TOKEN',
optional: false,
sensitive: true,
type: String),
]
end

def self.is_supported?(_platform)
true
end
end
end
end
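As a side note on the response handling in `self.run`: the extraction chain `json['choices']&.first&.dig('message', 'content')` is nil-safe at every step, so a malformed or empty payload degrades to `nil` instead of raising. A minimal standalone sketch, using an invented sample payload (not an actual API response):

```ruby
require 'json'

# Trimmed-down sample of a Chat Completions payload (invented values).
payload = <<~JSON
  {
    "choices": [
      { "message": { "role": "assistant", "content": "Hello!" } }
    ]
  }
JSON

json = JSON.parse(payload)

# Same extraction as the action: safe navigation plus dig.
content = json['choices']&.first&.dig('message', 'content')
puts content # => Hello!

# A payload with no "choices" key yields nil rather than a NoMethodError.
empty = JSON.parse('{}')
missing = empty['choices']&.first&.dig('message', 'content')
puts missing.inspect # => nil
```

This is why the action only needs the `Net::HTTPOK` branch to attempt parsing; any structural surprise in the body surfaces as a `nil` return value rather than an exception.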
117 changes: 117 additions & 0 deletions spec/openai_generate_action_spec.rb
@@ -0,0 +1,117 @@
require 'spec_helper'

describe Fastlane::Actions::OpenaiGenerateAction do
let(:fake_token) { 'sk-proj-faketok' }
let(:endpoint) { Fastlane::Actions::OpenaiGenerateAction::OPENAI_API_ENDPOINT }
let(:release_notes_prompt) { Fastlane::Actions::OpenaiGenerateAction::PREDEFINED_PROMPTS[:release_notes] }

def stubbed_response(text)
<<~JSON
{
"id": "chatcmpl-Aa2NPY4sSWF5eKoW1aFBJmfc78y9p",
"object": "chat.completion",
"created": 1733152307,
"model": "gpt-4o-2024-08-06",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": #{text.to_json},
"refusal": null
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 91,
"completion_tokens": 68,
"total_tokens": 159,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"system_fingerprint": "fp_831e067d82"
}
JSON
end

def run_test(prompt_param:, question_param:, expected_prompt:, expected_response:)
expected_req_body = described_class.request_body(prompt: expected_prompt, question: question_param)

stub = stub_request(:post, endpoint)
.with(body: expected_req_body)
.to_return(status: 200, body: stubbed_response(expected_response))

result = run_described_fastlane_action(
api_token: fake_token,
prompt: prompt_param,
question: question_param
)

# Ensure the body of the request contains the expected JSON data
messages = JSON.parse(expected_req_body)['messages']
if expected_prompt.nil? || expected_prompt.empty?
expect(messages.length).to eq(1)
expect(messages[0]['role']).to eq('user')
expect(messages[0]['content']).to eq(['type' => 'text', 'text' => question_param])
else
expect(messages.length).to eq(2)
expect(messages[0]['role']).to eq('system')
expect(messages[0]['content']).to eq(['type' => 'text', 'text' => expected_prompt])
expect(messages[1]['role']).to eq('user')
expect(messages[1]['content']).to eq(['type' => 'text', 'text' => question_param])
end

# Ensure the request has been made and the response is as expected
expect(stub).to have_been_requested
result
end

it 'calls the API with no prompt' do
result = run_test(
prompt_param: '',
question_param: 'Say Hi',
expected_prompt: nil,
expected_response: 'Hello! How can I assist you today?'
)

expect(result).to eq('Hello! How can I assist you today?')
end

it 'calls the API with :release_notes prompt' do
changelog = <<~CHANGELOG
- [Internal] Fetch remote FF on site change [https://github.com/woocommerce/woocommerce-android/pull/12751]
- [**] Improve barcode scanner reading accuracy [https://github.com/woocommerce/woocommerce-android/pull/12673]
- [Internal] AI product creation banner is removed [https://github.com/woocommerce/woocommerce-android/pull/12705]
- [*] [Login] Fix an issue where the app doesn't show the correct error screen when application passwords are disabled [https://github.com/woocommerce/woocommerce-android/pull/12717]
- [**] Fixed bug with coupons disappearing from the order creation screen unexpectedly [https://github.com/woocommerce/woocommerce-android/pull/12724]
- [Internal] Fixes crash [https://github.com/woocommerce/woocommerce-android/issues/12715]
- [*] Fixed incorrect instructions on "What is Tap to Pay" screen in the Payments section [https://github.com/woocommerce/woocommerce-android/pull/12709]
- [***] Merchants can now view and edit custom fields of their products and orders from the app [https://github.com/woocommerce/woocommerce-android/issues/12207]
- [*] Fix size of the whats new announcement dialog [https://github.com/woocommerce/woocommerce-android/pull/12692]
- [*] Enables Blaze survey [https://github.com/woocommerce/woocommerce-android/pull/12761]
CHANGELOG

expected_response = <<~TEXT
Exciting updates are here! We've enhanced the barcode scanner for optimal accuracy and resolved the issue with coupons vanishing during order creation. Most significantly, merchants can now effortlessly view and edit custom fields for products and orders directly within the app. Additionally, we've improved error handling on login and fixed various UI inconsistencies. Enjoy a smoother experience!
TEXT

result = run_test(
prompt_param: :release_notes,
question_param: "Help me write release notes for the following items:\n#{changelog}",
expected_prompt: release_notes_prompt,
expected_response: expected_response
)

expect(result).to eq(expected_response)
end
end
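The `:release_notes` prompt exercised above asks the model to weight items by the number of `*` in the leading brackets and to skip `[Internal]` entries. As an illustrative sketch only (this logic lives in the model's prompt, not in the action's code), the same ranking could be done mechanically like this:

```ruby
# Rank changelog items by the number of "*" in the leading brackets,
# skipping [Internal] entries, mirroring what the :release_notes prompt
# asks the model to do. Sample items are abbreviated from the spec above.
items = [
  '- [Internal] Fetch remote FF on site change',
  '- [**] Improve barcode scanner reading accuracy',
  '- [***] Merchants can now view and edit custom fields',
  '- [*] Enables Blaze survey'
]

ranked = items
         .reject { |line| line.include?('[Internal]') }
         .map { |line| [line[/\[(\*+)\]/, 1].to_s.length, line] }
         .sort_by { |stars, _| -stars }

# Highest-importance items come first.
ranked.each { |stars, line| puts "#{stars} #{line}" }
```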