llm: Set n by default in gen_kwargs
Prior to converting to the yaml format, we set `n` to the value of
`num_instructions_to_generate`. It was dropped from the yaml since
it's a runtime configuration value, so we need to set it here to
restore the previous behavior.

Co-authored-by: Mark McLoughlin <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
russellb and markmc committed Jul 12, 2024
1 parent 82adb4a commit 5a0b7a6
Showing 1 changed file with 1 addition and 0 deletions.
src/instructlab/sdg/llmblock.py (1 addition, 0 deletions)

```diff
@@ -78,6 +78,7 @@ def __init__(
             "model": self.ctx.model_id,
             "temperature": 0,
             "max_tokens": 12000,
+            "n": self.ctx.num_instructions_to_generate,
         }

     # Whether the LLM server supports a list of input prompts
```
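A minimal sketch of the pattern this commit restores: seeding `gen_kwargs` defaults (including `n`) from the runtime context at construction time, rather than from the yaml config. The `GenContext` and `LLMBlock` classes here are simplified stand-ins for the real instructlab/sdg classes, not their actual implementations.

```python
# Hypothetical, simplified versions of the real classes: the point is
# that "n" is a runtime value, so it is seeded from the context object
# rather than loaded from the block's yaml config.

class GenContext:
    def __init__(self, model_id, num_instructions_to_generate):
        self.model_id = model_id
        self.num_instructions_to_generate = num_instructions_to_generate


class LLMBlock:
    def __init__(self, ctx, **overrides):
        self.ctx = ctx
        # Defaults established at construction time; "n" was dropped
        # from the yaml, so it must be set here from the context.
        self.gen_kwargs = {
            "model": self.ctx.model_id,
            "temperature": 0,
            "max_tokens": 12000,
            "n": self.ctx.num_instructions_to_generate,
            **overrides,  # explicit runtime overrides still win
        }


ctx = GenContext(model_id="example-model", num_instructions_to_generate=5)
block = LLMBlock(ctx)
print(block.gen_kwargs["n"])  # 5
```

Because `**overrides` is unpacked last, a caller can still replace any of these defaults per block, while every block gets a sensible `n` without repeating it in yaml.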
