Update Regression CI #546

Merged: 4 commits, Dec 17, 2024
124 changes: 105 additions & 19 deletions .github/workflows/regression_tests.yml
@@ -1,23 +1,61 @@
# .github/workflows/update_regression_tests.yml

# for details on triggering a workflow from a comment, see:
# https://dev.to/zirkelc/trigger-github-workflow-for-comment-on-pull-request-45l2
name: Regression Tests

on:
  # pull_request:
  #   branches:
  #     - main
  issue_comment: # trigger from comment; event runs on the default branch
    types: [created]

jobs:
  update_regression_tests:
    name: update_regression_tests
    runs-on: ubuntu-20.04
    # Trigger from a comment that contains '/test_regression'
    if: github.event.issue.pull_request && contains(github.event.comment.body, '/test_regression')
    # workflow needs permissions to write to the PR
    permissions:
      contents: write
      pull-requests: write
      issues: read

    steps:
      - name: Create initial status comment
        uses: actions/github-script@v7
        id: initial-comment
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const response = await github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '## Regression Test\n⏳ Workflow is currently running...'
            });
            return response.data.id;

      - name: Check if PR is from fork
        id: check-fork
        uses: actions/github-script@v7
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const pr = await github.rest.pulls.get({
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.issue.number
            });
            return pr.data.head.repo.fork;

      - uses: actions/checkout@v3
        with:
          ref: main
          lfs: true
          fetch-depth: 0 # This ensures we can checkout main branch too

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
          architecture: 'x64'
@@ -26,14 +64,62 @@ jobs:
        run: |
          python -m pip install --upgrade pip
          pip install -e ".[dev]"

      - name: Update baseline
        id: update-baseline
        run: |
          NEW_BASELINE=1 pytest -m regression
          cp tests/regression_test_baselines.json /tmp/regression_test_baselines.json

      - name: Get PR branch
        uses: xt0rted/pull-request-comment-branch@v3
        id: comment-branch
        with:
          repo_token: ${{ secrets.GITHUB_TOKEN }}

      - name: Checkout PR branch
        uses: actions/checkout@v3
        with:
          ref: ${{ steps.comment-branch.outputs.head_sha }} # using head_sha vs. head_ref makes this work for forks
          lfs: true
          fetch-depth: 0 # This ensures we can checkout main branch too

      - name: Run comparison
        id: comparison
        run: |
          # Check if regression test baselines exist in the main branch
          if git cat-file -e main:tests/regression_test_baselines.json; then
            git checkout main tests/regression_test_baselines.json
          else
            echo "No regression test results found in main branch"
          fi
          pytest -m regression
          cp /tmp/regression_test_baselines.json tests/regression_test_baselines.json
          pytest -m regression

      - name: Update comment with results
        uses: actions/github-script@v7
        if: always() # Run this step even if previous steps fail
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          script: |
            const fs = require('fs');
            let status = '${{ steps.comparison.outcome }}' === 'success' ? '✅' : '❌';
            let message = '## Regression Baseline Update\n' + status + ' Process completed\n\n';

            try {
              const TestReport = fs.readFileSync('tests/regression_test_report.txt', 'utf8');
              message += '```\n' + TestReport + '\n```\n\n';

              // Add information about where the changes were pushed
              if ('${{ steps.comparison.outcome }}' === 'success') {
                if (!${{ fromJson(steps.check-fork.outputs.result) }}) {
                  message += '✨ Changes have been pushed directly to this PR\n';
                } else {
                  const prNumber = '${{ steps.create-pr.outputs.pull-request-number }}';
                  message += `✨ Changes have been pushed to a new PR #${prNumber} because this PR is from a fork\n`;
                }
              }
            } catch (error) {
              message += '⚠️ No test report was generated\n';
            }

            await github.rest.issues.updateComment({
              comment_id: ${{ steps.initial-comment.outputs.result }},
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: message
            });
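The comparison step above assumes a regression suite that writes `tests/regression_test_baselines.json` when `NEW_BASELINE=1` is set and emits `tests/regression_test_report.txt` when comparing runs. A minimal sketch of what that comparison could look like; the JSON layout, tolerance, and all names here are assumptions, not the repository's actual implementation:

```python
# Sketch only: baseline layout, threshold, and names are assumptions.
import json

BASELINE_PATH = "tests/regression_test_baselines.json"
REPORT_PATH = "tests/regression_test_report.txt"
TOLERANCE = 0.2  # flag tests that run >20% slower than their baseline


def compare_to_baseline(new_runtimes: dict[str, float]) -> str:
    """Compare measured runtimes to stored baselines and write a report."""
    with open(BASELINE_PATH) as f:
        baselines = json.load(f)  # assumed layout: {test_name: runtime_seconds}

    lines = []
    for test, runtime in new_runtimes.items():
        base = baselines.get(test)
        if base is None:
            lines.append(f"NEW  {test}: {runtime:.3f}s (no baseline)")
        elif runtime > base * (1 + TOLERANCE):
            lines.append(f"SLOW {test}: {runtime:.3f}s vs. baseline {base:.3f}s")
        else:
            lines.append(f"OK   {test}: {runtime:.3f}s vs. baseline {base:.3f}s")

    report = "\n".join(lines)
    with open(REPORT_PATH, "w") as f:
        f.write(report)
    return report
```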
141 changes: 0 additions & 141 deletions .github/workflows/update_regression_baseline.yml

This file was deleted.

6 changes: 2 additions & 4 deletions CHANGELOG.md
@@ -6,11 +6,9 @@
```python
net.record("i_IonotropicSynapse")
```
- Add regression tests and supporting workflows for maintaining baselines (#475, #546, @jnsbck).
  - Regression tests can be triggered by commenting on a PR.
  - Regression tests can be run locally by executing `NEW_BASELINE=1 pytest -m regression` on `main` and then `pytest -m regression` on the feature branch, which produces a test report (printed to the console and saved to a `.txt` file).
  - If a PR introduces new baseline tests or reduces runtimes, a new baseline can be created by commenting "/update_regression_baselines" on the PR.

- refactor plotting (#539, @jnsbck).
  - rm networkx dependency
30 changes: 30 additions & 0 deletions tests/conftest.py
@@ -206,6 +206,36 @@ def get_or_compute_swc2jaxley_params(
    params = {}


def pytest_collection_modifyitems(config, items):
    # Both skip conditions are handled in a single hook, since a second
    # `pytest_collection_modifyitems` definition would shadow the first.
    if not config.getoption("--runslow"):
        # --runslow not given in cli: skip slow tests
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

    NEW_BASELINE = (
        int(os.environ["NEW_BASELINE"]) if "NEW_BASELINE" in os.environ else 0
    )

    dirname = os.path.dirname(__file__)
    baseline_fname = os.path.join(dirname, "regression_test_baselines.json")

    # Skip regression tests when no baseline exists and none is being created.
    if not NEW_BASELINE and not os.path.exists(baseline_fname):
        skip_regression = pytest.mark.skip(
            reason="need NEW_BASELINE env to run"
        )
        for item in items:
            if "regression" in item.keywords:
                item.add_marker(skip_regression)
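The `--runslow` flag consumed by this hook is not defined in the hunk above; presumably it is registered elsewhere in `conftest.py` via pytest's standard `pytest_addoption` hook. A minimal sketch of that registration, following the pattern from the pytest documentation:

```python
# Standard pytest pattern; assumes this lives at the top level of conftest.py.
def pytest_addoption(parser):
    parser.addoption(
        "--runslow",
        action="store_true",
        default=False,
        help="run tests marked as slow",
    )
```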


@pytest.fixture(scope="session", autouse=True)
def print_session_report(request, pytestconfig):
"""Cleanup a testing directory once we are finished."""
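For illustration, here is what a test gated by the `regression` marker above might look like; the marker matches `pytest -m regression` from the workflow, while the workload and assertions are hypothetical:

```python
import time

import pytest


@pytest.mark.regression
def test_simulation_runtime():
    # Hypothetical workload standing in for an actual jaxley simulation.
    start = time.perf_counter()
    total = sum(i * i for i in range(1_000_000))
    runtime = time.perf_counter() - start

    assert total > 0  # sanity check that the workload ran
    # In the real suite, `runtime` would be recorded and compared against
    # tests/regression_test_baselines.json rather than asserted directly.
    assert runtime >= 0.0
```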