Refactor: Separate out prompt generation from model. (#118)
The logic was getting a bit too entangled and difficult to extend.

This breaks prompt generation out into a separate prompt_builder
module and introduces Prompt classes (with TextPrompt and
OpenAIPrompt as two implementations).
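For context, a minimal sketch of what this hierarchy could look like. Only the names Prompt, TextPrompt, and OpenAIPrompt, plus the save() call in the diff below, come from this commit; the method bodies and the get()/add_message() helpers are assumptions:

import json


class Prompt:
  """Base class for prompts handed to models (sketch; details assumed)."""

  def get(self):
    """Returns the prompt in whatever form the backing model expects."""
    raise NotImplementedError

  def save(self, location: str):
    """Writes the prompt to disk, e.g. for logging."""
    raise NotImplementedError


class TextPrompt(Prompt):
  """Plain-text prompt for completion-style models."""

  def __init__(self, text: str = ''):
    self._text = text

  def get(self) -> str:
    return self._text

  def save(self, location: str):
    with open(location, 'w') as f:
      f.write(self._text)


class OpenAIPrompt(Prompt):
  """Chat-style prompt: a list of {role, content} messages."""

  def __init__(self):
    self._messages = []

  def add_message(self, role: str, content: str):
    self._messages.append({'role': role, 'content': content})

  def get(self) -> list:
    return self._messages

  def save(self, location: str):
    with open(location, 'w') as f:
      json.dump(self._messages, f, indent=2)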

This also changes the model interfaces to take in a Prompt object
instead of `prompt_path`. `prompt_path` was only really needed for
logging and AIBinaryModel.
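A hedged sketch of a model consuming the new interface. The generate_code(prompt, response_dir) signature matches the call site in the diff below; the method body, the _query() helper, and the output file name are illustrative assumptions only:

import os


# Method sketch from a hypothetical models.LLM subclass; not the actual code.
def generate_code(self, prompt: Prompt, response_dir: str):
  """Queries the model with a prepared Prompt and saves each raw response."""
  for i in range(self.num_samples):
    # prompt.get() is assumed to return whatever the backend expects:
    # a plain string for TextPrompt, a message list for OpenAIPrompt.
    response = self._query(prompt.get())  # _query(): hypothetical backend call.
    with open(os.path.join(response_dir, f'{i + 1:02d}.rawoutput'), 'w') as f:
      f.write(response)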

Ref: #62.
oliverchang authored Feb 22, 2024
1 parent 51e7384 commit 1c315f5
Showing 6 changed files with 532 additions and 400 deletions.
10 changes: 6 additions & 4 deletions llm_toolkit/code_fixer.py

@@ -23,6 +23,7 @@
 from experiment import benchmark as benchmarklib
 from llm_toolkit import models
 from llm_toolkit import output_parser as parser
+from llm_toolkit import prompt_builder

 ERROR_LINES = 20

@@ -334,15 +335,16 @@ def apply_llm_fix(ai_binary: str,
   """Queries LLM to fix the code."""
   fixer_model = models.LLM.setup(
       ai_binary=ai_binary,
-      prompt_path=prompt_path,
       name=fixer_model_name,
       num_samples=1,
       temperature=temperature,
   )

-  fixer_model.prompt_path = fixer_model.prepare_fix_prompt(
-      benchmark, prompt_path, raw_code, errors)
-  fixer_model.generate_code(response_dir)
+  builder = prompt_builder.DefaultTemplateBuilder(fixer_model)
+  prompt = builder.build_fixer_prompt(benchmark, raw_code, errors)
+  prompt.save(prompt_path)
+
+  fixer_model.generate_code(prompt, response_dir)


 def main():
