Commit

Merge branch 'mkstd_model_space' into fix_128
dilpath authored Jan 6, 2025
2 parents e3219c3 + cfaf7c4 commit 4620bab
Showing 8 changed files with 36 additions and 5 deletions.
2 changes: 1 addition & 1 deletion doc/analysis.rst
@@ -4,7 +4,7 @@ Analysis
After using PEtab Select to perform model selection, you may want to operate on all "good" calibrated models.
The PEtab Select Python library provides some methods to help with this. Please request any missing methods.

See the Python API docs for the ``Models`` class, which provides some methods. In particular, ``Models.df`` can be used
See the Python API docs for the :class:`petab_select.Models` class, which provides some methods. In particular, :attr:`petab_select.Models.df` can be used
to get a quick overview over all models, as a pandas dataframe.

Additionally, see the Python API docs for the ``petab_select.analysis`` module, which contains some methods to subset and group models,
1 change: 1 addition & 0 deletions doc/api.rst
@@ -7,6 +7,7 @@ petab-select Python API
:toctree: generated

petab_select
petab_select.analyze
petab_select.candidate_space
petab_select.constants
petab_select.criteria
15 changes: 12 additions & 3 deletions petab_select/analyze.py
@@ -1,5 +1,6 @@
"""Methods to analyze results of model selection."""

import warnings
from collections.abc import Callable

from .constants import Criterion
@@ -107,10 +108,18 @@ def get_best(
for model in models:
if compute_criterion and not model.has_criterion(criterion):
model.get_criterion(criterion)
if not model.has_criterion(criterion):
warnings.warn(
f"The model `{model.hash}` has no value set for criterion "
f"`{criterion}`. Consider using `compute_criterion=True` "
"if there is sufficient information already stored in the "
"model (e.g. the likelihood).",
RuntimeWarning,
stacklevel=2,
)
continue
if best_model is None:
if model.has_criterion(criterion):
best_model = model
# TODO warn if criterion is not available?
best_model = model
continue
if compare(best_model, model, criterion=criterion):
best_model = model
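The change to `get_best` above replaces a `# TODO warn if criterion is not available?` with an explicit `RuntimeWarning`: a model lacking a value for the requested criterion is now skipped with a warning instead of silently ignored. The behavior can be illustrated with a minimal, standalone sketch. This is not the petab_select implementation; the `FakeModel` class, its criterion dictionary, and the comparison by smaller-is-better criterion value are stand-ins invented for this example:

```python
import warnings


class FakeModel:
    """Hypothetical stand-in for a calibrated model (not petab_select's Model)."""

    def __init__(self, hash_, criteria):
        self.hash = hash_
        self._criteria = criteria  # e.g. {"AIC": 12.3}

    def has_criterion(self, criterion):
        return criterion in self._criteria

    def get_criterion(self, criterion):
        return self._criteria[criterion]


def get_best(models, criterion="AIC"):
    # Mirrors the commit's control flow: warn and skip models without a
    # criterion value; otherwise keep the model with the better criterion.
    best_model = None
    for model in models:
        if not model.has_criterion(criterion):
            warnings.warn(
                f"The model `{model.hash}` has no value set for criterion "
                f"`{criterion}`.",
                RuntimeWarning,
                stacklevel=2,
            )
            continue
        if best_model is None:
            best_model = model
            continue
        # Assumption for this sketch: lower criterion values are better.
        if model.get_criterion(criterion) < best_model.get_criterion(criterion):
            best_model = model
    return best_model


models = [
    FakeModel("M-010", {}),              # no AIC -> triggers the warning
    FakeModel("M-110", {"AIC": 3.0}),
    FakeModel("M-011", {"AIC": 1.5}),
]
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    best = get_best(models)
print(best.hash)    # M-011
print(len(caught))  # 1
```

Note the `continue` after the warning: without it, the criterion-less model could still be selected as the initial `best_model`, which is exactly the gap the commit closes.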
7 changes: 7 additions & 0 deletions petab_select/constants.py
@@ -59,6 +59,13 @@ class Criterion(str, Enum):
string.digits + string.ascii_uppercase + string.ascii_lowercase
)
PREDECESSOR_MODEL_HASH = "predecessor_model_hash"
ITERATION = "iteration"
PETAB_PROBLEM = "petab_problem"
PETAB_YAML = "petab_yaml"
HASH = "hash"

# MODEL_SPACE_FILE_NON_PARAMETER_COLUMNS = [MODEL_ID, PETAB_YAML]
MODEL_SPACE_FILE_NON_PARAMETER_COLUMNS = [MODEL_SUBSPACE_ID, PETAB_YAML]

# PEtab
PETAB_ESTIMATE_TRUE = 1
3 changes: 3 additions & 0 deletions petab_select/model.py
@@ -25,6 +25,9 @@

from .constants import (
ESTIMATE,
CRITERIA,
ESTIMATED_PARAMETERS,
ITERATION,
MODEL_HASH,
MODEL_HASH_DELIMITER,
MODEL_ID,
10 changes: 10 additions & 0 deletions test/cli/input/models.yaml
@@ -10,6 +10,11 @@
k2: 0.15
k3: 0.0
model_id: model_1
model_subspace_id: M
model_subspace_indices:
- 0
- 1
- 1
parameters:
k1: 0.2
k2: estimate
@@ -23,6 +28,11 @@
model_hash: M-110
model_subspace_petab_yaml: ../../../doc/examples/model_selection/petab_problem.yaml
model_id: model_2
model_subspace_id: M
model_subspace_indices:
- 1
- 1
- 0
parameters:
k1: estimate
k2: estimate
1 change: 1 addition & 0 deletions test/pypesto/generate_expected_models.py
@@ -47,6 +47,7 @@ def objective_customizer(obj):
"objective_customizer": objective_customizer,
}


for test_case_path in test_cases_path.glob("*"):
if test_cases and test_case_path.stem not in test_cases:
continue
2 changes: 1 addition & 1 deletion test_cases/0009/README.md
@@ -2,4 +2,4 @@ N.B. This original Blasi et al. problem is difficult to solve with a stepwise me
1. performing 100 FAMoS starts, initialized at random models. Usually <5% of the starts ended at the best model.
2. assessing reproducibility. Most of the starts that end at the best model are not reproducible. Instead, the path through model space can differ a lot despite "good" calibration, because many pairs of models differ in AICc by less than numerical noise.

1 start was found that reproducibly ends at the best model. The initial model of that start is the predecessor model in this test case. However, the path through model space is not reproducible -- there are at least two possibilities, perhaps more, depending on simulation tolerances. Hence, you should expect to produce a similar `expected_summary.tsv`, but perhaps with a few rows swapped. If you see a different summary.tsv, please report (or retry a few times). In particular, a different summary.tsv file will have a different sequence of values in the `current model criterion` column (accounting for numerical noise).
1 start was found that reproducibly ends at the best model. The initial model of that start is the predecessor model in this test case. However, the path through model space is not reproducible -- there are at least two possibilities, perhaps more, depending on simulation tolerances. Hence, you should expect to produce a similar `expected_summary.tsv`, but perhaps with a few rows swapped. If you see a different `summary.tsv`, please report (or retry a few times). In particular, a different `summary.tsv` file will have a different sequence of values in the `current model criterion` column (accounting for numerical noise).
