ragas: bump ragas version, pass old rubric in RubricsScore
Before ragas v0.2.11, RubricsScore.rubrics wasn't
being applied properly. This commit sets v0.2.11
as the minimum version for this library.

v0.2.11 also changed the prompt used for
domain-specific knowledge evaluation with
reference, relative to previous versions.

The new rubric is hardcoded here in case ragas
changes its prompts again in the future.

Signed-off-by: Ali Maredia <[email protected]>
alimaredia committed Jan 18, 2025
1 parent 03afb6c commit a69e557
Showing 2 changed files with 13 additions and 5 deletions.
requirements.txt: 2 changes (1 addition & 1 deletion)
@@ -10,4 +10,4 @@ pandas
 pandas-stubs
 lm-eval>=0.4.4
 httpx
-ragas
+ragas>=0.2.11
src/instructlab/eval/ragas.py: 16 changes (12 additions & 4 deletions)
@@ -11,8 +11,7 @@
 from pydantic import BaseModel, ConfigDict, Field
 from ragas.evaluation import EvaluationDataset, EvaluationResult, RunConfig, evaluate
 from ragas.metrics import Metric
-from ragas.metrics._domain_specific_rubrics import (  # the rubrics we must instantiate are located inside of a file marked as private
-    DEFAULT_WITH_REFERENCE_RUBRICS,
+from ragas.metrics._domain_specific_rubrics import (
     RubricsScore,
 )

@@ -22,6 +21,16 @@
 
 logger = setup_logger(__name__)
 
+# DEFAULT_WITH_REFERENCE_RUBRICS from ragas v0.2.11.
+# This rubric is hardcoded in case ragas makes any changes to their DEFAULT_WITH_REFERENCE_RUBRICS in the future
+SCORING_RUBRICS = {
+    "score1_description": "The response is entirely incorrect, irrelevant, or does not align with the reference in any meaningful way.",
+    "score2_description": "The response partially matches the reference but contains major errors, significant omissions, or irrelevant information.",
+    "score3_description": "The response aligns with the reference overall but lacks sufficient detail, clarity, or contains minor inaccuracies.",
+    "score4_description": "The response is mostly accurate, aligns closely with the reference, and contains only minor issues or omissions.",
+    "score5_description": "The response is fully accurate, completely aligns with the reference, and is clear, thorough, and detailed.",
+}
+
 
 class Sample(TypedDict):
     """
@@ -256,9 +265,8 @@ def _generate_answers_from_model(
 
     @staticmethod
     def _get_metrics() -> List[Metric]:
-        # default set of metrics
         return [
             RubricsScore(
-                rubrics=DEFAULT_WITH_REFERENCE_RUBRICS,
+                rubrics=SCORING_RUBRICS,
             )
         ]
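
For context, a minimal sketch of how the hardcoded rubric flows through ragas >=0.2.11. The imports mirror the ones in this diff; EvaluationDataset.from_list, the single-turn field names, and the default judge LLM that evaluate() falls back to are my best understanding of the ragas v0.2 API, and the sample data is hypothetical, not part of this commit.

# Sketch only, not part of this commit: pass a custom rubric dict to
# RubricsScore and score a small dataset with ragas' evaluate().
from ragas.evaluation import EvaluationDataset, evaluate
from ragas.metrics._domain_specific_rubrics import RubricsScore

# Same shape as the SCORING_RUBRICS dict added in this commit.
rubrics = {
    "score1_description": "The response is entirely incorrect, irrelevant, or does not align with the reference in any meaningful way.",
    "score2_description": "The response partially matches the reference but contains major errors, significant omissions, or irrelevant information.",
    "score3_description": "The response aligns with the reference overall but lacks sufficient detail, clarity, or contains minor inaccuracies.",
    "score4_description": "The response is mostly accurate, aligns closely with the reference, and contains only minor issues or omissions.",
    "score5_description": "The response is fully accurate, completely aligns with the reference, and is clear, thorough, and detailed.",
}

# Hypothetical sample; field names follow ragas' single-turn schema.
dataset = EvaluationDataset.from_list(
    [
        {
            "user_input": "What is the capital of France?",
            "response": "The capital of France is Paris.",
            "reference": "Paris is the capital of France.",
        }
    ]
)

# With no explicit llm argument, evaluate() is assumed to fall back to a
# default judge model (requiring OPENAI_API_KEY); pass your own wrapped
# LLM in practice.
result = evaluate(dataset=dataset, metrics=[RubricsScore(rubrics=rubrics)])
print(result)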
