Hotfix/fix automatic batch size onnx #123

Merged · 2 commits · Jun 24, 2024

Changes from all commits
6 changes: 6 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,12 @@
# Changelog
All notable changes to this project will be documented in this file.

### [2.1.11]

#### Fixed

- Fix sklearn automatic batch finder not working properly with ONNX backbones

### [2.1.10]

#### Fixed
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "quadra"
version = "2.1.10"
version = "2.1.11"
description = "Deep Learning experiment orchestration library"
authors = [
"Federico Belotti <[email protected]>",
2 changes: 1 addition & 1 deletion quadra/__init__.py
@@ -1,4 +1,4 @@
__version__ = "2.1.10"
__version__ = "2.1.11"


def get_version():
3 changes: 2 additions & 1 deletion quadra/utils/classification.py
@@ -601,11 +601,12 @@ def automatic_batch_size_computation(
log.info("Trying batch size: %d", datamodule.batch_size)
_ = get_feature(feature_extractor=backbone, dl=base_dataloader, iteration_over_training=1, limit_batches=1)
except RuntimeError as e:
if "CUDA out of memory" in str(e):
if batch_size > 1:
batch_size = batch_size // 2
optimal = False
continue

log.error("Unable to run the model with batch size 1")
raise e

log.info("Found optimal batch size: %d", datamodule.batch_size)