Commit 2c80a5c

Polish:

- Add a new section on adding tests to the README and correct typos
- Decide on more expressive file names
- Keep the naming script for now; it can be removed during review

TimoImhof committed Jan 8, 2025
1 parent 87c0998 commit 2c80a5c
Showing 50 changed files with 164 additions and 24 deletions.
125 changes: 101 additions & 24 deletions tests/README.md
# Testing the Adapters Library

This README provides a comprehensive overview of the test directory organization and explains how to execute different types of tests within the adapters library.

## Test Directory Structure Overview

```
tests/
├── ...
├── test_methods/ # Dynamic adapter method tests (all models)
│ ├── method_test_impl/ # Implementations of the adapter method tests
│ │ ├── core/
│ │ ├── composition/
│ │ └── ...
│ ├── base.py # Base from which model test bases inherit
│ ├── generator.py # Testcase generation and registration
│ ├── test_on_albert.py # Example model test base for testing adapter methods on the albert adapter model
│ ├── test_on_beit.py
│ └── ...
├── test_misc/ # Miscellaneous adapter method tests (single model)
│ ├── test_adapter_config.py
│ └── ...
├── test_models/ # Adapter model tests with Hugging Face test suite
│ ├── __init__.py
│ ├── base.py
│ ├── test_albert_model.py
│ └── ...
```

## Test Categories

The testing framework encompasses three distinct categories of tests:

1. Dynamic Adapter Method Tests: These tests cover core functionalities of the adapters library, including individual adapter methods (such as LoRA and prompt tuning) and head functionalities. These tests are executed across all supported models.

2. Miscellaneous Adapter Method Tests: These supplementary tests cover scenarios not included in the dynamic tests. To conserve resources, they are executed on a single model, as repeated execution across multiple models would not provide additional value.

3. Adapter Model Tests: These tests verify the implementation of the adapter models themselves using the Hugging Face model test suite.

## Test Generator and Pytest Markers

The test_methods directory contains the central component `generator.py`, which generates the appropriate set of adapter method tests for each model. Each model test base registers these tests using the following pattern:

```python
method_tests = generate_method_tests(AlbertAdapterTestBase)

for test_class_name, test_class in method_tests.items():
globals()[test_class_name] = test_class
```
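
Registering the generated classes via `globals()` makes each class a module-level attribute, which is how pytest's collector discovers test classes; classes that only existed inside the loop body would otherwise never be collected.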

Each generated test class is decorated with a specific marker type. For example:

```python
@require_torch
@pytest.mark.lora
class LoRA(
    model_test_base,
    LoRATestMixin,
    unittest.TestCase,
):
    pass
```
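
The marker name (here `lora`) mirrors the adapter method under test. Custom markers like this are typically declared in the project's pytest configuration so that `-m` filtering works without unknown-marker warnings; the examples below assume such registration is already in place.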

These markers enable the execution of specific test types across all models. You can run these tests using either of these methods:

1. Using the make command:
```bash
make test-adapter-method-subset subset=lora
```

2. Directly executing from the test directory:
```bash
cd tests/test_methods
pytest -m lora
```

Both approaches will execute all LoRA tests across every model in the adapters library.
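
For completeness, the same selection can also be triggered programmatically. The snippet below is an illustrative sketch using pytest's public `pytest.main` API; it is not part of this repository's tooling:

```python
import pytest

# Collect and run only test classes decorated with @pytest.mark.lora,
# mirroring `pytest -m lora` on the command line.
exit_code = pytest.main(["-m", "lora", "tests/test_methods"])
```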

## Adding a New Adapter Method to the Test Suite

The modular design of the test base simplifies the process of adding tests for new adapter methods. To add tests for a new adapter method "X", follow these steps:

1. Create the Test Implementation:
Create a new file `tests/test_methods/method_test_impl/peft/test_X.py` and implement the test mixin class:

```python
@require_torch
class XTestMixin(AdapterMethodBaseTestMixin):

    default_config = XConfig()

    def test_add_X(self):
        model = self.get_model()
        self.run_add_test(model, self.default_config, ["adapters.{name}."])

    def ...
```
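
The helper methods used here, such as `run_add_test`, are provided by `AdapterMethodBaseTestMixin`; judging from the call above, it adds an adapter with the given config to the model and verifies that modules matching the supplied name patterns (e.g. `adapters.{name}.`) were created.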

2. Register the Test Mixin:
Add the new test mixin class to `tests/test_methods/generator.py`:

```python
from tests.test_methods.method_test_impl.peft.test_X import XTestMixin

def generate_method_tests(model_test_base, ...):
    """Generate method tests for the given model test base."""
    test_classes = {}

    @require_torch
    @pytest.mark.core
    class Core(
        model_test_base,
        CompabilityTestMixin,
        AdapterFusionModelTestMixin,
        unittest.TestCase,
    ):
        pass

    if "Core" not in excluded_tests:
        test_classes["Core"] = Core

    @require_torch
    @pytest.mark.X
    class X(
        model_test_base,
        XTestMixin,
        unittest.TestCase,
    ):
        pass

    if "X" not in excluded_tests:
        test_classes["X"] = X
```
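
The elided parameters presumably include the `excluded_tests` collection consulted by the guards above (defaulting to an empty collection), which lets callers opt out of individual generated test classes.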

The pytest marker enables execution of the new method's tests across all adapter models using:
```bash
make test-adapter-method-subset subset=X
```

If the new method is incompatible with specific adapter models, you can exclude the tests in the respective `test_on_xyz.py` file:

```python
method_tests = generate_method_tests(BartAdapterTestBase, excluded_tests=["PromptTuning", "X"])
```
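
The strings in `excluded_tests` correspond to the keys under which `generate_method_tests` stores the generated classes (`"Core"`, `"X"`, ...), so an excluded class is simply never registered in the test file's namespace.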

Note: It is recommended to design new methods to work with the complete library whenever possible. Only exclude tests when there are unavoidable compatibility issues, and document such exclusions clearly.
48 files renamed without changes.
63 changes: 63 additions & 0 deletions utils/rename_script.py
import os
import re


def rename_test_files(directory):
    """
    Renames test files in the given directory from the pattern 'test_name.py'
    to 'test_name_model.py', skipping files that already follow the
    'test_on_name.py' pattern.
    Args:
        directory (str): The directory containing the test files to rename
    Returns:
        dict: A mapping of old filenames to new filenames for successfully renamed files
    """
    # Store the mapping of old to new names
    renamed_files = {}

    # Regular expression to match test files; the negative lookahead skips
    # files that already follow the 'test_on_*.py' pattern
    pattern = r"^test_(?!on_)(.+)\.py$"

    # List all files in the directory
    for filename in os.listdir(directory):
        match = re.match(pattern, filename)

        if match:
            base_name = match.group(1)
            new_filename = f"test_{base_name}_model.py"

            # Construct full file paths
            old_path = os.path.join(directory, filename)
            new_path = os.path.join(directory, new_filename)

            # Check if the new filename already exists
            if os.path.exists(new_path):
                print(f"Warning: {new_filename} already exists, skipping {filename}")
                continue

            try:
                # Rename the file
                os.rename(old_path, new_path)
                renamed_files[filename] = new_filename
                print(f"Renamed: {filename} -> {new_filename}")
            except OSError as e:
                print(f"Error renaming {filename}: {e}")

    return renamed_files


# Example usage
if __name__ == "__main__":
    # Defaults to the directory containing this script; point this at a test
    # directory to rename the test files there
    current_dir = os.path.dirname(os.path.abspath(__file__))

    print("Starting file rename operation...")
    renamed = rename_test_files(current_dir)

    print("\nSummary of renamed files:")
    if renamed:
        for old_name, new_name in renamed.items():
            print(f"- {old_name} -> {new_name}")
    else:
        print("No files were renamed.")
