
[INFRA] Expanding supported Python versions #101

Merged: 3 commits merged into Parietal-INRIA:main on Nov 5, 2024

Conversation

emdupre (Collaborator) commented on Nov 4, 2024

Addresses #93

  • Tests py3.12 support
  • Tests py3.13 support
  • Documents supported versions in README

emdupre (Collaborator, Author) commented on Nov 4, 2024

The two errors for 3.13 come from our fugw dependency:

FAILED fmralign/tests/test_alignment_methods.py::test_fugw_alignment[dense] - _pickle.PicklingError: Could not pickle the task to send it to the workers.
FAILED fmralign/tests/test_alignment_methods.py::test_fugw_alignment[coarse-to-fine] - _pickle.PicklingError: Could not pickle the task to send it to the workers.

And the full traceback for one of these errors:

_____________________ test_fugw_alignment[coarse-to-fine] ______________________
joblib.externals.loky.process_executor._RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/externals/loky/backend/queues.py", line 159, in _feed
    obj_ = dumps(obj, reducers=reducers)
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/externals/loky/backend/reduction.py", line 215, in dumps
    dump(obj, buf, reducers=reducers, protocol=protocol)
    ~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/externals/loky/backend/reduction.py", line 208, in dump
    _LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/externals/cloudpickle/cloudpickle.py", line 1245, in dump
    return super().dump(obj)
           ~~~~~~~~~~~~^^^^^
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/torch/storage.py", line 1219, in __reduce__
    torch.save(self, b, _use_new_zipfile_serialization=False)
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/torch/serialization.py", line 865, in save
    _legacy_save(obj, opened_file, pickle_module, pickle_protocol)
    ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/torch/serialization.py", line 1009, in _legacy_save
    pickler.persistent_id = persistent_id
    ^^^^^^^^^^^^^^^^^^^^^
AttributeError: '_pickle.Pickler' object attribute 'persistent_id' is read-only
"""
The above exception was the direct cause of the following exception:
method = 'coarse-to-fine'
    @pytest.mark.parametrize("method", ["dense", "coarse-to-fine"])
    def test_fugw_alignment(method):
        # Create a fake segmentation
        segmentation = np.ones((10, 10, 10))
        n_features = 3
        n_samples = int(segmentation.sum())
        X = np.random.randn(n_samples, n_features).T
        Y = np.random.randn(n_samples, n_features).T
    
>       fugw_alignment = FugwAlignment(segmentation, method=method)
fmralign/tests/test_alignment_methods.py:208: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
fmralign/alignment_methods.py:832: in __init__
    ) = self._prepare_geometry_embedding(
fmralign/alignment_methods.py:857: in _prepare_geometry_embedding
    geometry_embedding = lmds.compute_lmds_volume(
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/fugw/scripts/lmds.py:331: in compute_lmds_volume
    Parallel(n_jobs=n_jobs)(
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:2007: in __call__
    return output if self.return_generator else list(output)
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:1650: in _get_outputs
    yield from self._retrieve()
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:1754: in _retrieve
    self._raise_error_fast()
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:1789: in _raise_error_fast
    error_job.get_result(self.timeout)
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:745: in get_result
    return self._return_or_raise()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
self = <fugw.scripts.lmds.rich_progress_joblib.<locals>.BatchCompletionCallback object at 0x7f1468071a90>
    def _return_or_raise(self):
        try:
            if self.status == TASK_ERROR:
>               raise self._result
E               _pickle.PicklingError: Could not pickle the task to send it to the workers.
/opt/hostedtoolcache/Python/3.13.0/x64/lib/python3.13/site-packages/joblib/parallel.py:763: PicklingError

I'll see if I can suggest a patch to fugw to unblock this.
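
For reference, here is a minimal, untested sketch (outside joblib/loky entirely) of what the traceback points to as the root cause. It assumes Python 3.13 and an affected torch release like the one fugw pulls in on this runner: pickling any torch storage routes through the legacy torch.save path, which assigns pickler.persistent_id, and CPython 3.13's C pickler makes that attribute read-only.

    # Untested sketch: try to reproduce the Py3.13 pickling failure without joblib/loky.
    # Assumes an affected torch release; newer torch builds may avoid the legacy path.
    import pickle

    import torch

    storage = torch.randn(3).untyped_storage()

    try:
        # Storage.__reduce__ calls torch.save(..., _use_new_zipfile_serialization=False),
        # whose legacy path assigns pickler.persistent_id.
        pickle.dumps(storage)
    except AttributeError as err:
        # Expected on Python 3.13:
        # "'_pickle.Pickler' object attribute 'persistent_id' is read-only"
        print(f"reproduced: {err}")

If that is right, a fugw-side patch would probably need to keep torch storages out of the tasks it hands to loky workers, or wait for a torch release that no longer takes the legacy serialization path here.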

emdupre (Collaborator, Author) commented on Nov 4, 2024

PyTorch stable releases do not yet support Py3.13; you can find their intended Py3.13 support matrix (with some discussion) in pytorch/pytorch#130249.

It looks like full support is planned for release 2.6, though 2.5.1 (the latest stable release) does include partial py3.13 support.

I see our options as:

  1. Not supporting 3.13 for now
  2. Building torch from source for fugw, which would then let fugw support 3.13 (and, by extension, fmralign)

I'm leaning towards not supporting 3.13 and letting fugw upgrade its support on its own timeline, which we can then follow. Since 3.13 is quite new, I don't think it will be a huge hindrance if we hold off on support for a bit longer. WDYT?
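
For completeness, one untested stopgap for the test itself would be to steer joblib away from the loky backend around the fugw call, so nothing has to be pickled across processes. This is only a sketch mirroring the failing test; I have not checked that fugw behaves well on the threading backend (and parallel_config needs joblib >= 1.3):

    # Untested sketch: run fugw's internal joblib.Parallel calls on the threading
    # backend so no task is pickled and sent to a worker process.
    import numpy as np
    from joblib import parallel_config

    from fmralign.alignment_methods import FugwAlignment

    segmentation = np.ones((10, 10, 10))

    with parallel_config(backend="threading"):
        fugw_alignment = FugwAlignment(segmentation, method="coarse-to-fine")

That said, this only papers over the test failure rather than giving real 3.13 support, so it doesn't change the trade-off above.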

bthirion (Contributor) commented on Nov 4, 2024

> PyTorch stable releases do not yet support Py3.13; you can find their intended Py3.13 support matrix (with some discussion) in pytorch/pytorch#130249.
>
> It looks like full support is planned for release 2.6, though 2.5.1 (the latest stable release) does include partial py3.13 support.
>
> I see our options as:
>
> 1. Not supporting 3.13 for now
> 2. Building torch from source for fugw, which would then let fugw support 3.13 (and, by extension, fmralign)
>
> I'm leaning towards not supporting 3.13 and letting fugw upgrade its support on its own timeline, which we can then follow. Since 3.13 is quite new, I don't think it will be a huge hindrance if we hold off on support for a bit longer. WDYT?

Not supporting 3.13 for now is fine IMHO

emdupre marked this pull request as ready for review on November 5, 2024 at 00:14.

Review comment on README.md (outdated): resolved.
Co-authored-by: Pierre-Louis Barbarant <[email protected]>

emdupre merged commit 131367e into Parietal-INRIA:main on Nov 5, 2024; 6 checks passed.