
Releases: IBM/unitxt

Unitxt 1.8.0

05 May 14:50
ece6ff9

What's Changed

In this release, the main improvement focuses on introducing type checking within Unitxt tasks. Tasks are fundamental to the Unitxt protocol, acting as standardized blueprints for those integrating new datasets into Unitxt. They facilitate the use of task-specific templates and metrics. To guarantee precise dataset processing in line with the task schema, we've introduced explicit types to the task fields.

For example, consider the NER task in Unitxt, previously defined as follows:

add_to_catalog(
     FormTask(
         inputs=["text", "entity_types"],
         outputs=["spans_starts", "spans_ends", "text", "labels"],
         metrics=["metrics.ner"],
     ),
     "tasks.ner",
)

Now, the NER task definition includes explicit types:

add_to_catalog(
     FormTask(
         inputs={"text": "str", "entity_types": "List[str]"},
         outputs={
             "spans_starts": "List[int]",
             "spans_ends": "List[int]",
             "text": "List[str]",
             "labels": "List[str]",
         },
         prediction_type="List[Tuple[str,str]]",
         metrics=["metrics.ner"],
     ),
     "tasks.ner",
)
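
With the declared prediction_type, a post-processed model prediction for this task is expected to be a list of (span text, entity type) pairs, for example (an illustrative value, not taken from the catalog):

# An illustrative prediction matching prediction_type "List[Tuple[str,str]]":
# each pair is (span text, entity type).
prediction = [("John Smith", "Person"), ("IBM", "Organization")]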

This enhancement aligns with Unitxt's goal that definitions should be easily understandable and capable of facilitating validation processes with appropriate error messages to guide developers in identifying and solving issues.

Right now, using the original definition format without typing will continue to work, but it will generate a warning message. You should begin adapting your task definitions by adding types. For example, you may see warnings like:

'inputs' field of Task should be a dictionary of field names and their types. For example, {'text': 'str', 'classes': 'List[str]'}. Instead only '['question', 'question_id', 'topic']' was passed. All types will be assumed to be 'Any'. In future version of unitxt this will raise an exception.
'outputs' field of Task should be a dictionary of field names and their types. For example, {'text': 'str', 'classes': 'List[str]'}. Instead only '['reference_answers', 'reference_contexts', 'reference_context_ids', 'is_answerable_label']' was passed. All types will be assumed to be 'Any'. In future version of unitxt this will raise an exception.
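
For instance, a task that previously passed these field names as plain lists could be updated along these lines (a sketch using the imports typically used in unitxt prepare scripts; the specific types, metric, and catalog name below are illustrative placeholders):

from unitxt.blocks import FormTask
from unitxt.catalog import add_to_catalog

add_to_catalog(
    FormTask(
        # The types below are illustrative assumptions; use the types your data actually holds.
        inputs={"question": "str", "question_id": "str", "topic": "str"},
        outputs={
            "reference_answers": "List[str]",
            "reference_contexts": "List[str]",
            "reference_context_ids": "List[str]",
            "is_answerable_label": "str",
        },
        metrics=["metrics.rouge"],  # placeholder metric
    ),
    "tasks.my_task",  # placeholder catalog name
    overwrite=True,
)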

Special thanks to @pawelknes who implemented this important feature. It truly demonstrates the collective power of the Unitxt community and the invaluable contributions made by Unitxt users beyond the core development team. Such contributions are highly appreciated and encouraged.

  • For more detailed information, please refer to #710

Breaking Changes

"metrics.spearman", "metrics.kendalltau_b", "metrics.roc_auc": prediction type is float.
"metrics.f1_binary","metrics.accuracy_binary", "metrics.precision_binary", "metrics.recall_binary", "metrics.max_f1_binary", "metrics.max_accuracy_binary": prediction type is Union[float, int], references must be equal to 0 or 1

Bug Fixes

New Assets

New Features

  • Type checking for task definition by @pawelknes in #710
  • Add open and ibm_genai to llm as judge inference engine by @OfirArviv in #782
  • Add negative class score for binary precision, recall, f1 and max f1 by @lilacheden in #788
    1. Add negative class score for binary precision, recall, f1 and max f1, e.g. f1_binary now returns also "f1_binary_neg".
    2. Support Unions in metric prediction_type
    3. Add processor cast_to_float_return_nan_if_failed
    4. Breaking change: Make prediction_type of metrics numeric:
      A. "metrics.kendalltau_b", "metrics.roc_auc": prediction type is float.
      B. "metrics.f1_binary","metrics.accuracy_binary", "metrics.precision_binary", "metrics.recall_binary", "metrics.max_f1_binary", "metrics.max_accuracy_binary": prediction type is Union[float, int], references must be equal to 0 or 1
  • Group shuffle by @sam-data-guy-iam in #639

Documentation

Full Changelog: 1.7.7...1.8.0


Unitxt 1.7.9

05 May 12:43
ef01b8d

What's Changed

Full Changelog: 1.7.7...1.7.9

Unitxt 1.7.8

05 May 11:31
b0a0015

What's Changed

Full Changelog: 1.7.7...1.7.8

1.7.7

17 Apr 07:22
0c64a9d

What's Changed

Full Changelog: 1.7.6...1.7.7

Unitxt 1.7.6

08 Apr 17:49
b76022c

What's Changed

The most significant change in this release is the addition of the \N notation (backslash capital N) to formats. With \N you can mark places where you want at most a single new line, consolidating any newlines that precede it.

A more detailed explanation, if you want to go deeper:

The Capital New Line Notation (\N) is designed to manage newline behavior in a string efficiently.
This custom notation consolidates multiple newline characters (\n) into a single newline under
specific conditions, with tailored handling based on whether there is preceding text. The
transformation distinguishes between two primary scenarios:
1. If there's text (referred to as a prefix) followed by any number of \n characters and then one or
more \N, the entire sequence is replaced with a single \n. This effectively simplifies multiple
newlines and notation characters into a single newline when there's preceding text.
2. If the string starts with \n characters followed by \N without any text before this sequence, or if
\N is at the very beginning of the string, the sequence is completely removed. This case is
applicable when the notation should not introduce any newlines due to the absence of preceding text.
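
To make these two rules concrete, here is a minimal Python sketch of the behavior (an illustration only, not the actual unitxt implementation):

import re

def apply_capital_new_line_notation(text: str) -> str:
    # Rule 2: newlines followed by \N at the very start of the string are removed entirely.
    text = re.sub(r"^\n*(?:\\N)+", "", text)
    # Rule 1: preceding text, then any newlines and one or more \N, becomes a single newline.
    return re.sub(r"\n*(?:\\N)+", "\n", text)

# Example: the format shown below, rendered with an empty system prompt and instruction.
# apply_capital_new_line_notation("\\N\\N|user|\nWho are you?\\N|assistant|\n")
# -> "|user|\nWho are you?\n|assistant|\n"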

This allows us two things:
First, to define system formats that do not contain unnecessary new lines when the instruction or system prompt is missing.
Second, to ignore any new lines created by the template, ensuring the number of new lines is set by the format only.

For example, if we defined the system format in the following way:

from unitxt.formats import SystemFormat

format = SystemFormat(model_input_format="{system_prompt}\n{instruction}\n|user|\n{source}\n|assistant|\n{target_prefix}")

We faced two issues:

  1. If the system prompt is empty or the instruction is empty, we have two trailing new lines for no reason.
  2. If the source ends with a new line (mostly due to the template structure), we would have an unnecessary empty line before the "|user|".

Both problems are solved with \N notation:

from unitxt.formats import SystemFormat

format = SystemFormat(model_input_format="{system_prompt}\\N{instruction}\\N|user|\n{source}\\N|assistant|\n{target_prefix}")

Breaking changes

  • Fix typo in MultipleChoiceTemplate field choices_seperator -> choices_separator
  • Deprecation of the use_query option in all operators; for now it only raises a warning, but the option will be removed in the next major release. The new default behavior is equivalent to use_query=True.

All Changes

Bug Fixes:

Assets Fixes:

New Features:

  • Add notion of \N to formats, to fix format new line clashes by @elronbandel in #751
  • Ability to dynamically change InstanceMetric inputs + grammar metrics by @arielge in #736
  • Add DeprecatedFIeld for more informative procedure for deprecating fields of artifacts by @dafnapension in #741

New Assets:

  • Add rerank recall metric to unitxt by @jlqibm in #662
  • Add many selection and human preference tasks and datasets by @elronbandel in #746
  • Adding Detector metric for running any classifier from huggingface as a metric by @mnagired in #745
  • Add operators: RegexSplit, TokensSplit, Chunk by @elronbandel in #749
  • Add bert score large and base versions by @assaftibm in #748

Enhancements:

New Contributors

Full Changelog: 1.7.4...1.7.6

Unitxt 1.7.4

28 Mar 15:16
b2fc3f8

In the 1.7.4 release, we've made significant improvements to unitxt, further enhancing its developer friendliness. This update marks a step towards our goal of offering a well-documented and user-friendly library. A key feature of this release is the introduction of a type verification mechanism, designed to enhance the developer experience by increasing transparency and preemptively addressing errors.

4 Most Important Changes:

Add Description and Tags to unitxt Artifacts (1/4)

You can now enrich unitxt artifacts with descriptions and tags. These additions aim to enhance the upcoming online catalog, enabling users to search and filter artifacts by tags for an improved browsing experience.

For instance, to add context to a TaskCard:

TaskCard(
    ...,
    __description__="This is the WNLI dataset",
    __tags__={"task": "nli", "license": "apache2.0"},
)

See more in #725

Metrics and Postprocessors Override Through Recipe (2/4)

Now metrics and postprocessors can be specified directly through the recipe, overriding those defined in the dataset card.
For example, if we want to use "metrics.rouge" instead of "metrics.accuracy" for WNLI, we can now achieve this with:

load_dataset("card=cards.wnli, ... , metrics=[metrics.rouge]")

See more in #663

Metrics Type Validation (3/4: ⚠️ Breaking Change ⚠️)

Context: The initiative to enhance developer friendliness at unitxt, especially through type checking, aims to guide developers more effectively and preemptively identify issues.

Previously, metrics individually determined if predictions and references were correctly typed, with many lacking such checks.

Now, Metric incorporates universal code to verify the types of predictions/references and to determine if a metric supports single or multiple references per prediction.

Introducing new parameters for each metric:

# Set 'prediction_type' to the expected types of predictions and references, e.g., "List[str]", "List[Dict]", "string".
# Defaults to "None", triggering a warning for now, but future versions of unitxt will treat this as an error.
prediction_type: str = None

# Indicates if a metric allows multiple references per prediction; otherwise, it supports only one reference per prediction.
single_reference_per_prediction: bool = False

Incompatibility Notice: If any existing post-processing pipeline violates the type schema, it will emit an error.

Important: unitxt's default behavior is to handle multiple references per prediction, as seen in the HF dataset (predictions as strings, references as lists of strings), with post-processing applied accordingly. For some metrics, like those measuring retrieval, predictions and references are lists of document IDs. In scenarios like few-shot learning, this adjustment ensures metrics correctly handle lists of lists.
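
Concretely, the default shapes look like this (an illustrative example):

# Illustrative shapes under the default multiple-references-per-prediction behavior:
predictions = ["Paris"]              # one prediction (a string) per instance
references = [["Paris", "paris"]]    # a list of reference strings per prediction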

See more in #667

Dialog Processing Capabilities (4/4)

Dialog data is essential for tasks like dialog completion, dialog summarization, etc. Thus, we've made an initial attempt to support dialog processing in unitxt. The challenges were twofold: (1) dialog is influenced by the system format, and (2) dialog consists of multiple turns, each potentially considered as the final turn for evaluation. To address these, we've introduced a new class of dialog processing operators, which you can review here:
https://unitxt.readthedocs.io/en/latest/unitxt.dialog_operators.html.
You can review an example of card construction utilizing a few dialog processing tools here: https://github.com/IBM/unitxt/blob/main/prepare/cards/coqa.py

This card's usage can be demonstrated with the following recipe:
card=cards.coqa.completion,template=templates.completion.abstractive.standard,format=formats.textual_assistant
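
For instance, it can be loaded with the same load_dataset helper used earlier (a sketch; the import path is an assumption):

from unitxt import load_dataset  # assuming the load_dataset helper shown earlier

dataset = load_dataset(
    "card=cards.coqa.completion,"
    "template=templates.completion.abstractive.standard,"
    "format=formats.textual_assistant"
)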

Resulting in this input data:

Write the best response to the dialog.
<|user|>
The Vatican Apostolic Library.... The Vatican Library is a research library for history, law, philosophy, science and theology. The Vatican Library is open to anyone who can document their qualifications and research needs. Photocopies for private study of pages from books published between 1801 and 1990 can be requested in person or by mail....from this period, though some are very significant.
When was the Vat formally opened?
<|assistant|>
It was formally established in 1475
<|user|>
what is the library for?
<|assistant|>
research
<|user|>
for what subjects?

And this target:

history, and law

See more in #640

All Changes In Unitxt 1.7.4

Breaking Changes

  • Add generic mechanism to check prediction and reference types in metrics by @yoavkatz in #667. See the explanation in the previous sections for why this change is breaking.

New Features

  • Add ability to fuse sources with disjoint splits by @yoavkatz in #707
  • Allow max reduction type in metric to find the best overall score over all instances by @yoavkatz in #709
  • Add string operators module with many standard string operators by @elronbandel in #721
  • Allow disabling per group f1 scores in customF1 by @yoavkatz in #719
  • Add improved type inference capabilities, inferring type_string from a given object, and infer_type therefrom via parse_type_string by @dafnapension in #706
  • Add description and tags to every catalog artifact by @elronbandel in #725
  • allow contexts not to be entered to metric by @perlitz in #653
  • Add control over metrics and postprocessors through the recipe by @elronbandel in #663
  • Add coqa and dialog processing capabilities by @elronbandel in #640
  • Add pandas_load_args for LoadCSV by @elronbandel in #696
  • Add safe and complete type parsing function to type_utils, for allowing better type checking. by @elronbandel in #688
  • Add deprecation decorator for warning and errors for deprecation of functions and classes by @elronbandel in #689
  • Add choices shuffling to MultipleChoiceTemplate by @elronbandel in #678
  • Make settings utils type sensitive by @elronbandel in #674

New Assets

Asset Fixes

Bug Fixes

New Contributors

Full Changelog: 1.7.1...1.7.4

Unitxt 1.7.3

28 Mar 15:02
e352dd4

What's Changed

New Contributors

Full Changelog: 1.7.2...1.7.3

Unitxt 1.7.2

24 Mar 07:03
bbef23c

What's Changed

New Contributors

Full Changelog: 1.7.1...1.7.2

1.7.1

13 Mar 09:08

What's Changed

New Contributors

Full Changelog: 1.7.0...1.7.1

Unitxt 1.7.0

05 Mar 14:11

What Changed in Unitxt 1.7.0

This release introduces a few significant changes that modify existing conventions:

  1. Instructions renamed to system_prompts

This means that from now on, to define a new system-level instruction, you can use this code:

system_prompt = TextualSystemPrompt( # <<<< Class name has changed
    "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n"
)

add_to_catalog(system_prompt, "system_prompts.models.alpaca", overwrite=True) # <<<< Catalog name has changed

It also means that all the system-level instructions were moved to the catalog under system_prompts instead of instructions.
This change breaks old instructions but was necessary to enable the next very useful change.

  2. Templates can now (1) generate a task-specific instruction once at the head of the example, and (2) add a few words the model will say before the model's final prediction

This change was requested by many people.

For example, in this COLA dataset example:

User: Classify the grammatical acceptability of the following text to one of these options: unacceptable, acceptable. text: Fred watered the plants flat.
Agent: acceptable

User: Classify the grammatical acceptability of the following text to one of these options: unacceptable, acceptable. text: The pond froze solid.
Agent:

The instruction "Classify the ..." is reapted for every demonstration. Also with the current template there is no way to put few words that the agent will say before the prediciton for instance: "Agent: The class is ". With the new changes both of these important features are enabled.

If the old way of defining templates for classification was:

add_to_catalog(
    InputOutputTemplate(
        input_format="Classify the {type_of_class} of the following {text_type} to one of these options: {classes}. {text_type}: {text}",
        output_format="{label}",
    ),
    "templates.classification.multi_class.default_no_instruction",
    overwrite=True,
)

It is now defined this way:

add_to_catalog(
    InputOutputTemplate(
        input_format="{text_type}: {text}", # <<<< Changed
        output_format="{label}",
        target_prefix="The {type_of_class} is ", # <<<< Added
        instruction="Classify the {type_of_class} of the following {text_type} to one of these options: {classes}.\n", # <<<< Added
    ),
    "templates.classification.multi_class.instruction",
    overwrite=True,
)

The new template fields instruction and target_prefix will produce this example:

Classify the grammatical acceptability of the following text to one of these options: unacceptable, acceptable.

User: text: Fred watered the plants flat.
Agent: The grammatical acceptability is acceptable

User: text: The pond froze solid.
Agent: The grammatical acceptability is

Notice how the instruction appears only once, and the target prefix appears after 'Agent:'.

Read more in the tutorial on preparing templates.

  3. Loading from catalog with modifications

Now you can load an item from the catalog and change its fields. For example, if you want to use a task but with a different metric, you can use this syntax:

card = TaskCard(
    loader=LoadHF(path="glue", name="cola"),
    preprocess_steps=[...],
    task="tasks.classification.multi_class[metrics=[metrics.matthews_correlation]]", # <<<< Modified
    templates="templates.classification.multi_class.all",
)

add_to_catalog(card, "cards.cola", overwrite=True)

Read more in the tutorial on loading from the catalog.

  4. Renaming of additional_inputs to task_data

In an effort to more accurately represent the origin of certain fields within our system, we've renamed the additional_inputs parameter to task_data. This modification underscores the fact that these fields are derived directly from the task definition itself. This change is crucial for maintaining the integrity and reliability of metrics, as it ensures these fields are validated against the task schema. Consequently, developers crafting metrics for specific tasks can effortlessly ascertain which fields are accessible to them by simply referring to the task schema. This alignment between task definitions and metrics development fosters a more intuitive and efficient workflow for unitxt contributors.
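
For illustration, for the multi-class classification task used in the template examples above, a metric now receives the task fields under task_data (an illustrative sketch of such a field, not actual library output):

# Illustrative task_data contents for the classification example above;
# the exact fields are determined by the task schema.
task_data = {
    "text": "The pond froze solid.",
    "text_type": "text",
    "type_of_class": "grammatical acceptability",
    "classes": ["unacceptable", "acceptable"],
}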

Release Changes

BugFixes:

New Assets:

Enhancements

  • Tests can be done now also on PRs from forks. by @elronbandel in #537 #538
  • Show artifact class details in the documentation. by @dafnapension in #528
  • UI improvements by @Roni-Friedman in #541
  • Update README.md by @eltociear in #540
  • Add artifact_identifier to Artifact objects loaded from the catalog, linking them to their catalog name. by @matanor in #545 #547 #546
  • allow imports list for executequery and filterbyquery and rename to ExecuteExpression and FilterByExpression by @dafnapension in #542
  • Add tests for the api presented in the unitxt paper. by @elronbandel in #558
  • Extend the function that evaluate with unitxt metric on external data to new types of data by @assaftibm in #557
  • Add Kendall's tau metric by @lilacheden in #535
  • Add new table operators for serialization & truncation by @csrajmohan in #567
  • Unitxt should operate with no package requirements by default. This adds some tools to do so. by @elronbandel in #570
  • Separate library tests and catalog preparation by @elronbandel in #572
  • Add class for constants handling by @elronbandel in #575
  • Add code needed for evaluating metrics as models by @lilacheden in #573
  • Improved error message when using TemplateDict ...
Read more