diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1561fa6e..74e2bb7c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,15 @@
+## [1.0.10] - 2024-04-08
+
+### Added
+- CosMX reader with image stitching (experimental)
+
+### Changed
+- Default `min_transcripts` set in the Snakemake configs
+- Minimum number of transcripts per patch set to 4000 (#41)
+- Config files refactoring (configs added or renamed)
+- Readers refactoring
+- Report sections with errors are no longer displayed (instead of raising an error)
+
## [1.0.9] - 2024-04-03
### Added:
diff --git a/docs/api/io.md b/docs/api/io.md
index e939eda2..8beb0005 100644
--- a/docs/api/io.md
+++ b/docs/api/io.md
@@ -35,3 +35,11 @@
::: sopa.io.wsi
options:
show_root_heading: true
+
+::: sopa.io.uniform
+ options:
+ show_root_heading: true
+
+::: sopa.io.blobs
+ options:
+ show_root_heading: true
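+
+For instance, the toy dataset used in the tutorials can be generated directly; a minimal sketch:
+
+```python
+import sopa.io
+
+# creates a small synthetic SpatialData object with uniformly distributed transcripts
+sdata = sopa.io.uniform()
+```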
diff --git a/docs/api/utils/data.md b/docs/api/utils/data.md
deleted file mode 100644
index 3e4013e9..00000000
--- a/docs/api/utils/data.md
+++ /dev/null
@@ -1,7 +0,0 @@
-::: sopa.utils.data.uniform
- options:
- show_root_heading: true
-
-::: sopa.utils.data.blobs
- options:
- show_root_heading: true
diff --git a/docs/faq.md b/docs/faq.md
index 75356d83..4a01e143 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -13,7 +13,7 @@ In this documentation, `data_path` denotes the path to your raw data. Select the
=== "MERSCOPE"
`data_path` is the "region" directory containing a `detected_transcripts.csv` file and an `image` directory. For instance, the directory can be called `region_0`.
=== "CosMX"
- (The CosMX data requires stitching the FOVs. It will be added soon, see [this issue](https://github.com/gustaveroussy/sopa/issues/5))
+ `data_path` is the directory containing (i) the transcript file (ending with `_tx_file.csv` or `_tx_file.csv.gz`), (ii) the FOV locations file, and (iii) a `Morphology2D` directory containing the images.
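+
+    A minimal reading sketch, assuming the new experimental reader is exposed as `sopa.io.cosmx` like the other readers:
+
+    ```python
+    import sopa.io
+
+    # stitches the FOVs and loads the CosMX data as a SpatialData object
+    sdata = sopa.io.cosmx("/path/to/cosmx_directory")
+    ```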
=== "MACSima"
`data_path` is the directory containing multiple `.ome.tif` files (one file per channel)
=== "PhenoCycler"
@@ -21,6 +21,12 @@ In this documentation, `data_path` denotes the path to your raw data. Select the
=== "Hyperion"
`data_path` is the directory containing multiple `.ome.tiff` files (one file per channel)
+## I have small artifact cells, how do I remove them?
+
+You may have small cells that were segmented but should be removed. For that, `Sopa` offers three filtering criteria: cell area, transcript count, and fluorescence intensity. See the corresponding parameters in this [example config](https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml): `min_area`, `min_transcripts`, and `min_intensity_ratio`.
+
+If using the CLI, `--min-area` can be provided to `sopa segmentation cellpose` or `sopa resolve baysor`, and `--min-transcripts`/`--min-intensity-ratio` can be provided to `sopa aggregate`.
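+
+For instance, a minimal CLI sketch combining the aggregation filters (the `tuto.zarr` path and the threshold values are illustrative):
+
+```sh
+# drop cells with fewer than 10 transcripts or a low fluorescence intensity ratio
+sopa aggregate tuto.zarr --gene-column genes --min-transcripts 10 --min-intensity-ratio 0.1
+```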
+
## Cellpose is not segmenting enough cells; what should I do?
- The main Cellpose parameter to check is `diameter`, i.e. a typical cell diameter **in pixels**. Note that this is highly specific to the technology you're using since the micron-to-pixel ratio can differ. We advise you to start with the default parameter for your technology of interest (see the `diameter` parameter inside our config files [here](https://github.com/gustaveroussy/sopa/tree/master/workflow/config)).
diff --git a/docs/tutorials/api_usage.ipynb b/docs/tutorials/api_usage.ipynb
index f8e13927..209476b3 100644
--- a/docs/tutorials/api_usage.ipynb
+++ b/docs/tutorials/api_usage.ipynb
@@ -6,7 +6,6 @@
"metadata": {},
"outputs": [],
"source": [
- "from sopa.utils.data import uniform\n",
"import sopa.segmentation\n",
"import sopa.io"
]
@@ -60,9 +59,9 @@
],
"source": [
"# The line below creates a toy dataset for this tutorial\n",
- "# Instead, use sopa.io to read your own data as a SpatialData object: see https://gustaveroussy.github.io/sopa/api/io/\n",
- "# For instance, if you have MERSCOPE data, you can do `sdata = sopa.io.merscope(\"/path/to/region_0\")`\n",
- "sdata = uniform()\n",
+ "# To load your own data, such as MERSCOPE data, you can do `sdata = sopa.io.merscope(\"/path/to/region_0\")`\n",
+ "# For more details, see https://gustaveroussy.github.io/sopa/api/io/\n",
+ "sdata = sopa.io.uniform()\n",
"\n",
"sdata.write(\"tuto.zarr\", overwrite=True)\n",
"sdata"
diff --git a/docs/tutorials/cli_usage.md b/docs/tutorials/cli_usage.md
index aadd766e..3e83d4e9 100644
--- a/docs/tutorials/cli_usage.md
+++ b/docs/tutorials/cli_usage.md
@@ -7,7 +7,7 @@ Here, we provide a minimal example of command line usage. For more details and t
For this tutorial, we use a generated dataset. You can expect a total runtime of a few minutes.
-The command below will generate and save it on disk (you can change the path `tuto.zarr` to save it somewhere else). If you want to load your own data: choose the right panel below, or see the [`sopa read` CLI documentation](`../../cli/#sopa-read`).
+The command below will generate and save it on disk (you can change the path `tuto.zarr` to save it somewhere else). If you want to load your own data, choose the right panel below. For more information, refer to this [FAQ](../../faq/#what-kind-of-inputs-do-i-need-to-run-sopa) describing the input data you need, or see the [`sopa read`](../../cli/#sopa-read) documentation.
=== "Tutorial"
```sh
@@ -29,9 +29,6 @@ The command below will generate and save it on disk (you can change the path `tu
# it will generate a '/path/to/sample/directory.zarr' directory
sopa read /path/to/sample/directory --technology cosmx
```
-
- !!! warning
- The CosMX data requires stitching the FOVs. It will be added soon, see [this issue](https://github.com/gustaveroussy/sopa/issues/5).
=== "PhenoCycler"
```sh
# it will generate a '/path/to/sample.zarr' directory
@@ -149,7 +146,7 @@ For this tutorial, we will use the config below. Save this in a `config.toml` fi
```toml
[data]
force_2d = true
-min_molecules_per_cell = 10 # min number of transcripts per cell
+min_molecules_per_cell = 10
x = "x"
y = "y"
z = "z"
@@ -229,11 +226,11 @@ This **mandatory** step turns the data into an `AnnData` object. We can count th
=== "Count transcripts + average intensities"
```sh
- sopa aggregate tuto.zarr --gene-column genes --average-intensities
+ sopa aggregate tuto.zarr --gene-column genes --average-intensities --min-transcripts 10
```
=== "Count transcripts"
```sh
- sopa aggregate tuto.zarr --gene-column genes
+ sopa aggregate tuto.zarr --gene-column genes --min-transcripts 10
```
=== "Average intensities"
```sh
diff --git a/mkdocs.yml b/mkdocs.yml
index 6d6c0d04..cac50b3f 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -37,7 +37,6 @@ nav:
- sopa.annotation.tangram: api/annotation/tangram.md
- sopa.annotation.fluorescence: api/annotation/fluorescence.md
- sopa.utils:
- - sopa.utils.data: api/utils/data.md
- sopa.utils.image: api/utils/image.md
- sopa.utils.polygon_crop: api/utils/polygon_crop.md
- sopa.embedding: api/embedding.md
diff --git a/poetry.lock b/poetry.lock
index a3777548..1a1125ac 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -718,64 +718,64 @@ files = [
[[package]]
name = "contourpy"
-version = "1.2.0"
+version = "1.2.1"
description = "Python library for calculating contours of 2D quadrilateral grids"
optional = false
python-versions = ">=3.9"
files = [
- {file = "contourpy-1.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:0274c1cb63625972c0c007ab14dd9ba9e199c36ae1a231ce45d725cbcbfd10a8"},
- {file = "contourpy-1.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ab459a1cbbf18e8698399c595a01f6dcc5c138220ca3ea9e7e6126232d102bb4"},
- {file = "contourpy-1.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6fdd887f17c2f4572ce548461e4f96396681212d858cae7bd52ba3310bc6f00f"},
- {file = "contourpy-1.2.0-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5d16edfc3fc09968e09ddffada434b3bf989bf4911535e04eada58469873e28e"},
- {file = "contourpy-1.2.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1c203f617abc0dde5792beb586f827021069fb6d403d7f4d5c2b543d87edceb9"},
- {file = "contourpy-1.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b69303ceb2e4d4f146bf82fda78891ef7bcd80c41bf16bfca3d0d7eb545448aa"},
- {file = "contourpy-1.2.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:884c3f9d42d7218304bc74a8a7693d172685c84bd7ab2bab1ee567b769696df9"},
- {file = "contourpy-1.2.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4a1b1208102be6e851f20066bf0e7a96b7d48a07c9b0cfe6d0d4545c2f6cadab"},
- {file = "contourpy-1.2.0-cp310-cp310-win32.whl", hash = "sha256:34b9071c040d6fe45d9826cbbe3727d20d83f1b6110d219b83eb0e2a01d79488"},
- {file = "contourpy-1.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:bd2f1ae63998da104f16a8b788f685e55d65760cd1929518fd94cd682bf03e41"},
- {file = "contourpy-1.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:dd10c26b4eadae44783c45ad6655220426f971c61d9b239e6f7b16d5cdaaa727"},
- {file = "contourpy-1.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:5c6b28956b7b232ae801406e529ad7b350d3f09a4fde958dfdf3c0520cdde0dd"},
- {file = "contourpy-1.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ebeac59e9e1eb4b84940d076d9f9a6cec0064e241818bcb6e32124cc5c3e377a"},
- {file = "contourpy-1.2.0-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:139d8d2e1c1dd52d78682f505e980f592ba53c9f73bd6be102233e358b401063"},
- {file = "contourpy-1.2.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:1e9dc350fb4c58adc64df3e0703ab076f60aac06e67d48b3848c23647ae4310e"},
- {file = "contourpy-1.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:18fc2b4ed8e4a8fe849d18dce4bd3c7ea637758c6343a1f2bae1e9bd4c9f4686"},
- {file = "contourpy-1.2.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:16a7380e943a6d52472096cb7ad5264ecee36ed60888e2a3d3814991a0107286"},
- {file = "contourpy-1.2.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8d8faf05be5ec8e02a4d86f616fc2a0322ff4a4ce26c0f09d9f7fb5330a35c95"},
- {file = "contourpy-1.2.0-cp311-cp311-win32.whl", hash = "sha256:67b7f17679fa62ec82b7e3e611c43a016b887bd64fb933b3ae8638583006c6d6"},
- {file = "contourpy-1.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:99ad97258985328b4f207a5e777c1b44a83bfe7cf1f87b99f9c11d4ee477c4de"},
- {file = "contourpy-1.2.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:575bcaf957a25d1194903a10bc9f316c136c19f24e0985a2b9b5608bdf5dbfe0"},
- {file = "contourpy-1.2.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:9e6c93b5b2dbcedad20a2f18ec22cae47da0d705d454308063421a3b290d9ea4"},
- {file = "contourpy-1.2.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:464b423bc2a009088f19bdf1f232299e8b6917963e2b7e1d277da5041f33a779"},
- {file = "contourpy-1.2.0-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:68ce4788b7d93e47f84edd3f1f95acdcd142ae60bc0e5493bfd120683d2d4316"},
- {file = "contourpy-1.2.0-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3d7d1f8871998cdff5d2ff6a087e5e1780139abe2838e85b0b46b7ae6cc25399"},
- {file = "contourpy-1.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e739530c662a8d6d42c37c2ed52a6f0932c2d4a3e8c1f90692ad0ce1274abe0"},
- {file = "contourpy-1.2.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:247b9d16535acaa766d03037d8e8fb20866d054d3c7fbf6fd1f993f11fc60ca0"},
- {file = "contourpy-1.2.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:461e3ae84cd90b30f8d533f07d87c00379644205b1d33a5ea03381edc4b69431"},
- {file = "contourpy-1.2.0-cp312-cp312-win32.whl", hash = "sha256:1c2559d6cffc94890b0529ea7eeecc20d6fadc1539273aa27faf503eb4656d8f"},
- {file = "contourpy-1.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:491b1917afdd8638a05b611a56d46587d5a632cabead889a5440f7c638bc6ed9"},
- {file = "contourpy-1.2.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5fd1810973a375ca0e097dee059c407913ba35723b111df75671a1976efa04bc"},
- {file = "contourpy-1.2.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:999c71939aad2780f003979b25ac5b8f2df651dac7b38fb8ce6c46ba5abe6ae9"},
- {file = "contourpy-1.2.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b7caf9b241464c404613512d5594a6e2ff0cc9cb5615c9475cc1d9b514218ae8"},
- {file = "contourpy-1.2.0-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:266270c6f6608340f6c9836a0fb9b367be61dde0c9a9a18d5ece97774105ff3e"},
- {file = "contourpy-1.2.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbd50d0a0539ae2e96e537553aff6d02c10ed165ef40c65b0e27e744a0f10af8"},
- {file = "contourpy-1.2.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:11f8d2554e52f459918f7b8e6aa20ec2a3bce35ce95c1f0ef4ba36fbda306df5"},
- {file = "contourpy-1.2.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:ce96dd400486e80ac7d195b2d800b03e3e6a787e2a522bfb83755938465a819e"},
- {file = "contourpy-1.2.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:6d3364b999c62f539cd403f8123ae426da946e142312a514162adb2addd8d808"},
- {file = "contourpy-1.2.0-cp39-cp39-win32.whl", hash = "sha256:1c88dfb9e0c77612febebb6ac69d44a8d81e3dc60f993215425b62c1161353f4"},
- {file = "contourpy-1.2.0-cp39-cp39-win_amd64.whl", hash = "sha256:78e6ad33cf2e2e80c5dfaaa0beec3d61face0fb650557100ee36db808bfa6843"},
- {file = "contourpy-1.2.0-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:be16975d94c320432657ad2402f6760990cb640c161ae6da1363051805fa8108"},
- {file = "contourpy-1.2.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b95a225d4948b26a28c08307a60ac00fb8671b14f2047fc5476613252a129776"},
- {file = "contourpy-1.2.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:0d7e03c0f9a4f90dc18d4e77e9ef4ec7b7bbb437f7f675be8e530d65ae6ef956"},
- {file = "contourpy-1.2.0.tar.gz", hash = "sha256:171f311cb758de7da13fc53af221ae47a5877be5a0843a9fe150818c51ed276a"},
-]
-
-[package.dependencies]
-numpy = ">=1.20,<2.0"
+ {file = "contourpy-1.2.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:bd7c23df857d488f418439686d3b10ae2fbf9bc256cd045b37a8c16575ea1040"},
+ {file = "contourpy-1.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5b9eb0ca724a241683c9685a484da9d35c872fd42756574a7cfbf58af26677fd"},
+ {file = "contourpy-1.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4c75507d0a55378240f781599c30e7776674dbaf883a46d1c90f37e563453480"},
+ {file = "contourpy-1.2.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:11959f0ce4a6f7b76ec578576a0b61a28bdc0696194b6347ba3f1c53827178b9"},
+ {file = "contourpy-1.2.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:eb3315a8a236ee19b6df481fc5f997436e8ade24a9f03dfdc6bd490fea20c6da"},
+ {file = "contourpy-1.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:39f3ecaf76cd98e802f094e0d4fbc6dc9c45a8d0c4d185f0f6c2234e14e5f75b"},
+ {file = "contourpy-1.2.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:94b34f32646ca0414237168d68a9157cb3889f06b096612afdd296003fdd32fd"},
+ {file = "contourpy-1.2.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:457499c79fa84593f22454bbd27670227874cd2ff5d6c84e60575c8b50a69619"},
+ {file = "contourpy-1.2.1-cp310-cp310-win32.whl", hash = "sha256:ac58bdee53cbeba2ecad824fa8159493f0bf3b8ea4e93feb06c9a465d6c87da8"},
+ {file = "contourpy-1.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:9cffe0f850e89d7c0012a1fb8730f75edd4320a0a731ed0c183904fe6ecfc3a9"},
+ {file = "contourpy-1.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:6022cecf8f44e36af10bd9118ca71f371078b4c168b6e0fab43d4a889985dbb5"},
+ {file = "contourpy-1.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ef5adb9a3b1d0c645ff694f9bca7702ec2c70f4d734f9922ea34de02294fdf72"},
+ {file = "contourpy-1.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6150ffa5c767bc6332df27157d95442c379b7dce3a38dff89c0f39b63275696f"},
+ {file = "contourpy-1.2.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:4c863140fafc615c14a4bf4efd0f4425c02230eb8ef02784c9a156461e62c965"},
+ {file = "contourpy-1.2.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:00e5388f71c1a0610e6fe56b5c44ab7ba14165cdd6d695429c5cd94021e390b2"},
+ {file = "contourpy-1.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d4492d82b3bc7fbb7e3610747b159869468079fe149ec5c4d771fa1f614a14df"},
+ {file = "contourpy-1.2.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:49e70d111fee47284d9dd867c9bb9a7058a3c617274900780c43e38d90fe1205"},
+ {file = "contourpy-1.2.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b59c0ffceff8d4d3996a45f2bb6f4c207f94684a96bf3d9728dbb77428dd8cb8"},
+ {file = "contourpy-1.2.1-cp311-cp311-win32.whl", hash = "sha256:7b4182299f251060996af5249c286bae9361fa8c6a9cda5efc29fe8bfd6062ec"},
+ {file = "contourpy-1.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2855c8b0b55958265e8b5888d6a615ba02883b225f2227461aa9127c578a4922"},
+ {file = "contourpy-1.2.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:62828cada4a2b850dbef89c81f5a33741898b305db244904de418cc957ff05dc"},
+ {file = "contourpy-1.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:309be79c0a354afff9ff7da4aaed7c3257e77edf6c1b448a779329431ee79d7e"},
+ {file = "contourpy-1.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2e785e0f2ef0d567099b9ff92cbfb958d71c2d5b9259981cd9bee81bd194c9a4"},
+ {file = "contourpy-1.2.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1cac0a8f71a041aa587410424ad46dfa6a11f6149ceb219ce7dd48f6b02b87a7"},
+ {file = "contourpy-1.2.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:af3f4485884750dddd9c25cb7e3915d83c2db92488b38ccb77dd594eac84c4a0"},
+ {file = "contourpy-1.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ce6889abac9a42afd07a562c2d6d4b2b7134f83f18571d859b25624a331c90b"},
+ {file = "contourpy-1.2.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:a1eea9aecf761c661d096d39ed9026574de8adb2ae1c5bd7b33558af884fb2ce"},
+ {file = "contourpy-1.2.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:187fa1d4c6acc06adb0fae5544c59898ad781409e61a926ac7e84b8f276dcef4"},
+ {file = "contourpy-1.2.1-cp312-cp312-win32.whl", hash = "sha256:c2528d60e398c7c4c799d56f907664673a807635b857df18f7ae64d3e6ce2d9f"},
+ {file = "contourpy-1.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:1a07fc092a4088ee952ddae19a2b2a85757b923217b7eed584fdf25f53a6e7ce"},
+ {file = "contourpy-1.2.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:bb6834cbd983b19f06908b45bfc2dad6ac9479ae04abe923a275b5f48f1a186b"},
+ {file = "contourpy-1.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1d59e739ab0e3520e62a26c60707cc3ab0365d2f8fecea74bfe4de72dc56388f"},
+ {file = "contourpy-1.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd3db01f59fdcbce5b22afad19e390260d6d0222f35a1023d9adc5690a889364"},
+ {file = "contourpy-1.2.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a12a813949e5066148712a0626895c26b2578874e4cc63160bb007e6df3436fe"},
+ {file = "contourpy-1.2.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:fe0ccca550bb8e5abc22f530ec0466136379c01321fd94f30a22231e8a48d985"},
+ {file = "contourpy-1.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e1d59258c3c67c865435d8fbeb35f8c59b8bef3d6f46c1f29f6123556af28445"},
+ {file = "contourpy-1.2.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:f32c38afb74bd98ce26de7cc74a67b40afb7b05aae7b42924ea990d51e4dac02"},
+ {file = "contourpy-1.2.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d31a63bc6e6d87f77d71e1abbd7387ab817a66733734883d1fc0021ed9bfa083"},
+ {file = "contourpy-1.2.1-cp39-cp39-win32.whl", hash = "sha256:ddcb8581510311e13421b1f544403c16e901c4e8f09083c881fab2be80ee31ba"},
+ {file = "contourpy-1.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:10a37ae557aabf2509c79715cd20b62e4c7c28b8cd62dd7d99e5ed3ce28c3fd9"},
+ {file = "contourpy-1.2.1-pp39-pypy39_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a31f94983fecbac95e58388210427d68cd30fe8a36927980fab9c20062645609"},
+ {file = "contourpy-1.2.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ef2b055471c0eb466033760a521efb9d8a32b99ab907fc8358481a1dd29e3bd3"},
+ {file = "contourpy-1.2.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:b33d2bc4f69caedcd0a275329eb2198f560b325605810895627be5d4b876bf7f"},
+ {file = "contourpy-1.2.1.tar.gz", hash = "sha256:4d8908b3bee1c889e547867ca4cdc54e5ab6be6d3e078556814a22457f49423c"},
+]
+
+[package.dependencies]
+numpy = ">=1.20"
[package.extras]
bokeh = ["bokeh", "selenium"]
docs = ["furo", "sphinx (>=7.2)", "sphinx-copybutton"]
-mypy = ["contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.6.1)", "types-Pillow"]
+mypy = ["contourpy[bokeh,docs]", "docutils-stubs", "mypy (==1.8.0)", "types-Pillow"]
test = ["Pillow", "contourpy[test-no-images]", "matplotlib"]
test-no-images = ["pytest", "pytest-cov", "pytest-xdist", "wurlitzer"]
@@ -1216,53 +1216,53 @@ pyflakes = ">=3.2.0,<3.3.0"
[[package]]
name = "fonttools"
-version = "4.50.0"
+version = "4.51.0"
description = "Tools to manipulate font files"
optional = false
python-versions = ">=3.8"
files = [
- {file = "fonttools-4.50.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:effd303fb422f8ce06543a36ca69148471144c534cc25f30e5be752bc4f46736"},
- {file = "fonttools-4.50.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:7913992ab836f621d06aabac118fc258b9947a775a607e1a737eb3a91c360335"},
- {file = "fonttools-4.50.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e0a1c5bd2f63da4043b63888534b52c5a1fd7ae187c8ffc64cbb7ae475b9dab"},
- {file = "fonttools-4.50.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d40fc98540fa5360e7ecf2c56ddf3c6e7dd04929543618fd7b5cc76e66390562"},
- {file = "fonttools-4.50.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:9fff65fbb7afe137bac3113827855e0204482727bddd00a806034ab0d3951d0d"},
- {file = "fonttools-4.50.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:b1aeae3dd2ee719074a9372c89ad94f7c581903306d76befdaca2a559f802472"},
- {file = "fonttools-4.50.0-cp310-cp310-win32.whl", hash = "sha256:e9623afa319405da33b43c85cceb0585a6f5d3a1d7c604daf4f7e1dd55c03d1f"},
- {file = "fonttools-4.50.0-cp310-cp310-win_amd64.whl", hash = "sha256:778c5f43e7e654ef7fe0605e80894930bc3a7772e2f496238e57218610140f54"},
- {file = "fonttools-4.50.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:3dfb102e7f63b78c832e4539969167ffcc0375b013080e6472350965a5fe8048"},
- {file = "fonttools-4.50.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9e58fe34cb379ba3d01d5d319d67dd3ce7ca9a47ad044ea2b22635cd2d1247fc"},
- {file = "fonttools-4.50.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2c673ab40d15a442a4e6eb09bf007c1dda47c84ac1e2eecbdf359adacb799c24"},
- {file = "fonttools-4.50.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9b3ac35cdcd1a4c90c23a5200212c1bb74fa05833cc7c14291d7043a52ca2aaa"},
- {file = "fonttools-4.50.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:8844e7a2c5f7ecf977e82eb6b3014f025c8b454e046d941ece05b768be5847ae"},
- {file = "fonttools-4.50.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f849bd3c5c2249b49c98eca5aaebb920d2bfd92b3c69e84ca9bddf133e9f83f0"},
- {file = "fonttools-4.50.0-cp311-cp311-win32.whl", hash = "sha256:39293ff231b36b035575e81c14626dfc14407a20de5262f9596c2cbb199c3625"},
- {file = "fonttools-4.50.0-cp311-cp311-win_amd64.whl", hash = "sha256:c33d5023523b44d3481624f840c8646656a1def7630ca562f222eb3ead16c438"},
- {file = "fonttools-4.50.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:b4a886a6dbe60100ba1cd24de962f8cd18139bd32808da80de1fa9f9f27bf1dc"},
- {file = "fonttools-4.50.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:b2ca1837bfbe5eafa11313dbc7edada79052709a1fffa10cea691210af4aa1fa"},
- {file = "fonttools-4.50.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a0493dd97ac8977e48ffc1476b932b37c847cbb87fd68673dee5182004906828"},
- {file = "fonttools-4.50.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:77844e2f1b0889120b6c222fc49b2b75c3d88b930615e98893b899b9352a27ea"},
- {file = "fonttools-4.50.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:3566bfb8c55ed9100afe1ba6f0f12265cd63a1387b9661eb6031a1578a28bad1"},
- {file = "fonttools-4.50.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:35e10ddbc129cf61775d58a14f2d44121178d89874d32cae1eac722e687d9019"},
- {file = "fonttools-4.50.0-cp312-cp312-win32.whl", hash = "sha256:cc8140baf9fa8f9b903f2b393a6c413a220fa990264b215bf48484f3d0bf8710"},
- {file = "fonttools-4.50.0-cp312-cp312-win_amd64.whl", hash = "sha256:0ccc85fd96373ab73c59833b824d7a73846670a0cb1f3afbaee2b2c426a8f931"},
- {file = "fonttools-4.50.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:e270a406219af37581d96c810172001ec536e29e5593aa40d4c01cca3e145aa6"},
- {file = "fonttools-4.50.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ac2463de667233372e9e1c7e9de3d914b708437ef52a3199fdbf5a60184f190c"},
- {file = "fonttools-4.50.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:47abd6669195abe87c22750dbcd366dc3a0648f1b7c93c2baa97429c4dc1506e"},
- {file = "fonttools-4.50.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:074841375e2e3d559aecc86e1224caf78e8b8417bb391e7d2506412538f21adc"},
- {file = "fonttools-4.50.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:0743fd2191ad7ab43d78cd747215b12033ddee24fa1e088605a3efe80d6984de"},
- {file = "fonttools-4.50.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3d7080cce7be5ed65bee3496f09f79a82865a514863197ff4d4d177389e981b0"},
- {file = "fonttools-4.50.0-cp38-cp38-win32.whl", hash = "sha256:a467ba4e2eadc1d5cc1a11d355abb945f680473fbe30d15617e104c81f483045"},
- {file = "fonttools-4.50.0-cp38-cp38-win_amd64.whl", hash = "sha256:f77e048f805e00870659d6318fd89ef28ca4ee16a22b4c5e1905b735495fc422"},
- {file = "fonttools-4.50.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b6245eafd553c4e9a0708e93be51392bd2288c773523892fbd616d33fd2fda59"},
- {file = "fonttools-4.50.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a4062cc7e8de26f1603323ef3ae2171c9d29c8a9f5e067d555a2813cd5c7a7e0"},
- {file = "fonttools-4.50.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:34692850dfd64ba06af61e5791a441f664cb7d21e7b544e8f385718430e8f8e4"},
- {file = "fonttools-4.50.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:678dd95f26a67e02c50dcb5bf250f95231d455642afbc65a3b0bcdacd4e4dd38"},
- {file = "fonttools-4.50.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:4f2ce7b0b295fe64ac0a85aef46a0f2614995774bd7bc643b85679c0283287f9"},
- {file = "fonttools-4.50.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:d346f4dc2221bfb7ab652d1e37d327578434ce559baf7113b0f55768437fe6a0"},
- {file = "fonttools-4.50.0-cp39-cp39-win32.whl", hash = "sha256:a51eeaf52ba3afd70bf489be20e52fdfafe6c03d652b02477c6ce23c995222f4"},
- {file = "fonttools-4.50.0-cp39-cp39-win_amd64.whl", hash = "sha256:8639be40d583e5d9da67795aa3eeeda0488fb577a1d42ae11a5036f18fb16d93"},
- {file = "fonttools-4.50.0-py3-none-any.whl", hash = "sha256:48fa36da06247aa8282766cfd63efff1bb24e55f020f29a335939ed3844d20d3"},
- {file = "fonttools-4.50.0.tar.gz", hash = "sha256:fa5cf61058c7dbb104c2ac4e782bf1b2016a8cf2f69de6e4dd6a865d2c969bb5"},
+ {file = "fonttools-4.51.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:84d7751f4468dd8cdd03ddada18b8b0857a5beec80bce9f435742abc9a851a74"},
+ {file = "fonttools-4.51.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:8b4850fa2ef2cfbc1d1f689bc159ef0f45d8d83298c1425838095bf53ef46308"},
+ {file = "fonttools-4.51.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5b48a1121117047d82695d276c2af2ee3a24ffe0f502ed581acc2673ecf1037"},
+ {file = "fonttools-4.51.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:180194c7fe60c989bb627d7ed5011f2bef1c4d36ecf3ec64daec8302f1ae0716"},
+ {file = "fonttools-4.51.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:96a48e137c36be55e68845fc4284533bda2980f8d6f835e26bca79d7e2006438"},
+ {file = "fonttools-4.51.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:806e7912c32a657fa39d2d6eb1d3012d35f841387c8fc6cf349ed70b7c340039"},
+ {file = "fonttools-4.51.0-cp310-cp310-win32.whl", hash = "sha256:32b17504696f605e9e960647c5f64b35704782a502cc26a37b800b4d69ff3c77"},
+ {file = "fonttools-4.51.0-cp310-cp310-win_amd64.whl", hash = "sha256:c7e91abdfae1b5c9e3a543f48ce96013f9a08c6c9668f1e6be0beabf0a569c1b"},
+ {file = "fonttools-4.51.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:a8feca65bab31479d795b0d16c9a9852902e3a3c0630678efb0b2b7941ea9c74"},
+ {file = "fonttools-4.51.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:8ac27f436e8af7779f0bb4d5425aa3535270494d3bc5459ed27de3f03151e4c2"},
+ {file = "fonttools-4.51.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e19bd9e9964a09cd2433a4b100ca7f34e34731e0758e13ba9a1ed6e5468cc0f"},
+ {file = "fonttools-4.51.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b2b92381f37b39ba2fc98c3a45a9d6383bfc9916a87d66ccb6553f7bdd129097"},
+ {file = "fonttools-4.51.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:5f6bc991d1610f5c3bbe997b0233cbc234b8e82fa99fc0b2932dc1ca5e5afec0"},
+ {file = "fonttools-4.51.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:9696fe9f3f0c32e9a321d5268208a7cc9205a52f99b89479d1b035ed54c923f1"},
+ {file = "fonttools-4.51.0-cp311-cp311-win32.whl", hash = "sha256:3bee3f3bd9fa1d5ee616ccfd13b27ca605c2b4270e45715bd2883e9504735034"},
+ {file = "fonttools-4.51.0-cp311-cp311-win_amd64.whl", hash = "sha256:0f08c901d3866a8905363619e3741c33f0a83a680d92a9f0e575985c2634fcc1"},
+ {file = "fonttools-4.51.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:4060acc2bfa2d8e98117828a238889f13b6f69d59f4f2d5857eece5277b829ba"},
+ {file = "fonttools-4.51.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:1250e818b5f8a679ad79660855528120a8f0288f8f30ec88b83db51515411fcc"},
+ {file = "fonttools-4.51.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76f1777d8b3386479ffb4a282e74318e730014d86ce60f016908d9801af9ca2a"},
+ {file = "fonttools-4.51.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b5ad456813d93b9c4b7ee55302208db2b45324315129d85275c01f5cb7e61a2"},
+ {file = "fonttools-4.51.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:68b3fb7775a923be73e739f92f7e8a72725fd333eab24834041365d2278c3671"},
+ {file = "fonttools-4.51.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:8e2f1a4499e3b5ee82c19b5ee57f0294673125c65b0a1ff3764ea1f9db2f9ef5"},
+ {file = "fonttools-4.51.0-cp312-cp312-win32.whl", hash = "sha256:278e50f6b003c6aed19bae2242b364e575bcb16304b53f2b64f6551b9c000e15"},
+ {file = "fonttools-4.51.0-cp312-cp312-win_amd64.whl", hash = "sha256:b3c61423f22165541b9403ee39874dcae84cd57a9078b82e1dce8cb06b07fa2e"},
+ {file = "fonttools-4.51.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:1621ee57da887c17312acc4b0e7ac30d3a4fb0fec6174b2e3754a74c26bbed1e"},
+ {file = "fonttools-4.51.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e9d9298be7a05bb4801f558522adbe2feea1b0b103d5294ebf24a92dd49b78e5"},
+ {file = "fonttools-4.51.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ee1af4be1c5afe4c96ca23badd368d8dc75f611887fb0c0dac9f71ee5d6f110e"},
+ {file = "fonttools-4.51.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c18b49adc721a7d0b8dfe7c3130c89b8704baf599fb396396d07d4aa69b824a1"},
+ {file = "fonttools-4.51.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:de7c29bdbdd35811f14493ffd2534b88f0ce1b9065316433b22d63ca1cd21f14"},
+ {file = "fonttools-4.51.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:cadf4e12a608ef1d13e039864f484c8a968840afa0258b0b843a0556497ea9ed"},
+ {file = "fonttools-4.51.0-cp38-cp38-win32.whl", hash = "sha256:aefa011207ed36cd280babfaa8510b8176f1a77261833e895a9d96e57e44802f"},
+ {file = "fonttools-4.51.0-cp38-cp38-win_amd64.whl", hash = "sha256:865a58b6e60b0938874af0968cd0553bcd88e0b2cb6e588727117bd099eef836"},
+ {file = "fonttools-4.51.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:60a3409c9112aec02d5fb546f557bca6efa773dcb32ac147c6baf5f742e6258b"},
+ {file = "fonttools-4.51.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:f7e89853d8bea103c8e3514b9f9dc86b5b4120afb4583b57eb10dfa5afbe0936"},
+ {file = "fonttools-4.51.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:56fc244f2585d6c00b9bcc59e6593e646cf095a96fe68d62cd4da53dd1287b55"},
+ {file = "fonttools-4.51.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0d145976194a5242fdd22df18a1b451481a88071feadf251221af110ca8f00ce"},
+ {file = "fonttools-4.51.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:c5b8cab0c137ca229433570151b5c1fc6af212680b58b15abd797dcdd9dd5051"},
+ {file = "fonttools-4.51.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:54dcf21a2f2d06ded676e3c3f9f74b2bafded3a8ff12f0983160b13e9f2fb4a7"},
+ {file = "fonttools-4.51.0-cp39-cp39-win32.whl", hash = "sha256:0118ef998a0699a96c7b28457f15546815015a2710a1b23a7bf6c1be60c01636"},
+ {file = "fonttools-4.51.0-cp39-cp39-win_amd64.whl", hash = "sha256:599bdb75e220241cedc6faebfafedd7670335d2e29620d207dd0378a4e9ccc5a"},
+ {file = "fonttools-4.51.0-py3-none-any.whl", hash = "sha256:15c94eeef6b095831067f72c825eb0e2d48bb4cea0647c1b05c981ecba2bf39f"},
+ {file = "fonttools-4.51.0.tar.gz", hash = "sha256:dc0673361331566d7a663d7ce0f6fdcbfbdc1f59c6e3ed1165ad7202ca183c68"},
]
[package.extras]
@@ -2069,17 +2069,21 @@ dev = ["nbproject_test", "nox", "pandas", "pre-commit", "pytest (>=6.0)", "pytes
[[package]]
name = "lazy-loader"
-version = "0.3"
-description = "lazy_loader"
+version = "0.4"
+description = "Makes it easy to load subpackages and functions on demand."
optional = false
python-versions = ">=3.7"
files = [
- {file = "lazy_loader-0.3-py3-none-any.whl", hash = "sha256:1e9e76ee8631e264c62ce10006718e80b2cfc74340d17d1031e0f84af7478554"},
- {file = "lazy_loader-0.3.tar.gz", hash = "sha256:3b68898e34f5b2a29daaaac172c6555512d0f32074f147e2254e4a6d9d838f37"},
+ {file = "lazy_loader-0.4-py3-none-any.whl", hash = "sha256:342aa8e14d543a154047afb4ba8ef17f5563baad3fc610d7b15b213b0f119efc"},
+ {file = "lazy_loader-0.4.tar.gz", hash = "sha256:47c75182589b91a4e1a85a136c074285a5ad4d9f39c63e0d7fb76391c4574cd1"},
]
+[package.dependencies]
+packaging = "*"
+
[package.extras]
-lint = ["pre-commit (>=3.3)"]
+dev = ["changelist (==0.5)"]
+lint = ["pre-commit (==3.7.0)"]
test = ["pytest (>=7.4)", "pytest-cov (>=4.1)"]
[[package]]
@@ -2269,39 +2273,39 @@ files = [
[[package]]
name = "matplotlib"
-version = "3.8.3"
+version = "3.8.4"
description = "Python plotting package"
optional = false
python-versions = ">=3.9"
files = [
- {file = "matplotlib-3.8.3-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:cf60138ccc8004f117ab2a2bad513cc4d122e55864b4fe7adf4db20ca68a078f"},
- {file = "matplotlib-3.8.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5f557156f7116be3340cdeef7f128fa99b0d5d287d5f41a16e169819dcf22357"},
- {file = "matplotlib-3.8.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f386cf162b059809ecfac3bcc491a9ea17da69fa35c8ded8ad154cd4b933d5ec"},
- {file = "matplotlib-3.8.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b3c5f96f57b0369c288bf6f9b5274ba45787f7e0589a34d24bdbaf6d3344632f"},
- {file = "matplotlib-3.8.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:83e0f72e2c116ca7e571c57aa29b0fe697d4c6425c4e87c6e994159e0c008635"},
- {file = "matplotlib-3.8.3-cp310-cp310-win_amd64.whl", hash = "sha256:1c5c8290074ba31a41db1dc332dc2b62def469ff33766cbe325d32a3ee291aea"},
- {file = "matplotlib-3.8.3-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:5184e07c7e1d6d1481862ee361905b7059f7fe065fc837f7c3dc11eeb3f2f900"},
- {file = "matplotlib-3.8.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d7e7e0993d0758933b1a241a432b42c2db22dfa37d4108342ab4afb9557cbe3e"},
- {file = "matplotlib-3.8.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:04b36ad07eac9740fc76c2aa16edf94e50b297d6eb4c081e3add863de4bb19a7"},
- {file = "matplotlib-3.8.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7c42dae72a62f14982f1474f7e5c9959fc4bc70c9de11cc5244c6e766200ba65"},
- {file = "matplotlib-3.8.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:bf5932eee0d428192c40b7eac1399d608f5d995f975cdb9d1e6b48539a5ad8d0"},
- {file = "matplotlib-3.8.3-cp311-cp311-win_amd64.whl", hash = "sha256:40321634e3a05ed02abf7c7b47a50be50b53ef3eaa3a573847431a545585b407"},
- {file = "matplotlib-3.8.3-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:09074f8057917d17ab52c242fdf4916f30e99959c1908958b1fc6032e2d0f6d4"},
- {file = "matplotlib-3.8.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5745f6d0fb5acfabbb2790318db03809a253096e98c91b9a31969df28ee604aa"},
- {file = "matplotlib-3.8.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b97653d869a71721b639714b42d87cda4cfee0ee74b47c569e4874c7590c55c5"},
- {file = "matplotlib-3.8.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:242489efdb75b690c9c2e70bb5c6550727058c8a614e4c7716f363c27e10bba1"},
- {file = "matplotlib-3.8.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:83c0653c64b73926730bd9ea14aa0f50f202ba187c307a881673bad4985967b7"},
- {file = "matplotlib-3.8.3-cp312-cp312-win_amd64.whl", hash = "sha256:ef6c1025a570354297d6c15f7d0f296d95f88bd3850066b7f1e7b4f2f4c13a39"},
- {file = "matplotlib-3.8.3-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:c4af3f7317f8a1009bbb2d0bf23dfaba859eb7dd4ccbd604eba146dccaaaf0a4"},
- {file = "matplotlib-3.8.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4c6e00a65d017d26009bac6808f637b75ceade3e1ff91a138576f6b3065eeeba"},
- {file = "matplotlib-3.8.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e7b49ab49a3bea17802df6872f8d44f664ba8f9be0632a60c99b20b6db2165b7"},
- {file = "matplotlib-3.8.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6728dde0a3997396b053602dbd907a9bd64ec7d5cf99e728b404083698d3ca01"},
- {file = "matplotlib-3.8.3-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:813925d08fb86aba139f2d31864928d67511f64e5945ca909ad5bc09a96189bb"},
- {file = "matplotlib-3.8.3-cp39-cp39-win_amd64.whl", hash = "sha256:cd3a0c2be76f4e7be03d34a14d49ded6acf22ef61f88da600a18a5cd8b3c5f3c"},
- {file = "matplotlib-3.8.3-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:fa93695d5c08544f4a0dfd0965f378e7afc410d8672816aff1e81be1f45dbf2e"},
- {file = "matplotlib-3.8.3-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e9764df0e8778f06414b9d281a75235c1e85071f64bb5d71564b97c1306a2afc"},
- {file = "matplotlib-3.8.3-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:5e431a09e6fab4012b01fc155db0ce6dccacdbabe8198197f523a4ef4805eb26"},
- {file = "matplotlib-3.8.3.tar.gz", hash = "sha256:7b416239e9ae38be54b028abbf9048aff5054a9aba5416bef0bd17f9162ce161"},
+ {file = "matplotlib-3.8.4-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:abc9d838f93583650c35eca41cfcec65b2e7cb50fd486da6f0c49b5e1ed23014"},
+ {file = "matplotlib-3.8.4-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f65c9f002d281a6e904976007b2d46a1ee2bcea3a68a8c12dda24709ddc9106"},
+ {file = "matplotlib-3.8.4-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ce1edd9f5383b504dbc26eeea404ed0a00656c526638129028b758fd43fc5f10"},
+ {file = "matplotlib-3.8.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ecd79298550cba13a43c340581a3ec9c707bd895a6a061a78fa2524660482fc0"},
+ {file = "matplotlib-3.8.4-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:90df07db7b599fe7035d2f74ab7e438b656528c68ba6bb59b7dc46af39ee48ef"},
+ {file = "matplotlib-3.8.4-cp310-cp310-win_amd64.whl", hash = "sha256:ac24233e8f2939ac4fd2919eed1e9c0871eac8057666070e94cbf0b33dd9c338"},
+ {file = "matplotlib-3.8.4-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:72f9322712e4562e792b2961971891b9fbbb0e525011e09ea0d1f416c4645661"},
+ {file = "matplotlib-3.8.4-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:232ce322bfd020a434caaffbd9a95333f7c2491e59cfc014041d95e38ab90d1c"},
+ {file = "matplotlib-3.8.4-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6addbd5b488aedb7f9bc19f91cd87ea476206f45d7116fcfe3d31416702a82fa"},
+ {file = "matplotlib-3.8.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cc4ccdc64e3039fc303defd119658148f2349239871db72cd74e2eeaa9b80b71"},
+ {file = "matplotlib-3.8.4-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:b7a2a253d3b36d90c8993b4620183b55665a429da8357a4f621e78cd48b2b30b"},
+ {file = "matplotlib-3.8.4-cp311-cp311-win_amd64.whl", hash = "sha256:8080d5081a86e690d7688ffa542532e87f224c38a6ed71f8fbed34dd1d9fedae"},
+ {file = "matplotlib-3.8.4-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:6485ac1f2e84676cff22e693eaa4fbed50ef5dc37173ce1f023daef4687df616"},
+ {file = "matplotlib-3.8.4-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:c89ee9314ef48c72fe92ce55c4e95f2f39d70208f9f1d9db4e64079420d8d732"},
+ {file = "matplotlib-3.8.4-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50bac6e4d77e4262c4340d7a985c30912054745ec99756ce213bfbc3cb3808eb"},
+ {file = "matplotlib-3.8.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f51c4c869d4b60d769f7b4406eec39596648d9d70246428745a681c327a8ad30"},
+ {file = "matplotlib-3.8.4-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:b12ba985837e4899b762b81f5b2845bd1a28f4fdd1a126d9ace64e9c4eb2fb25"},
+ {file = "matplotlib-3.8.4-cp312-cp312-win_amd64.whl", hash = "sha256:7a6769f58ce51791b4cb8b4d7642489df347697cd3e23d88266aaaee93b41d9a"},
+ {file = "matplotlib-3.8.4-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:843cbde2f0946dadd8c5c11c6d91847abd18ec76859dc319362a0964493f0ba6"},
+ {file = "matplotlib-3.8.4-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1c13f041a7178f9780fb61cc3a2b10423d5e125480e4be51beaf62b172413b67"},
+ {file = "matplotlib-3.8.4-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:fb44f53af0a62dc80bba4443d9b27f2fde6acfdac281d95bc872dc148a6509cc"},
+ {file = "matplotlib-3.8.4-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:606e3b90897554c989b1e38a258c626d46c873523de432b1462f295db13de6f9"},
+ {file = "matplotlib-3.8.4-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9bb0189011785ea794ee827b68777db3ca3f93f3e339ea4d920315a0e5a78d54"},
+ {file = "matplotlib-3.8.4-cp39-cp39-win_amd64.whl", hash = "sha256:6209e5c9aaccc056e63b547a8152661324404dd92340a6e479b3a7f24b42a5d0"},
+ {file = "matplotlib-3.8.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c7064120a59ce6f64103c9cefba8ffe6fba87f2c61d67c401186423c9a20fd35"},
+ {file = "matplotlib-3.8.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a0e47eda4eb2614300fc7bb4657fced3e83d6334d03da2173b09e447418d499f"},
+ {file = "matplotlib-3.8.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:493e9f6aa5819156b58fce42b296ea31969f2aab71c5b680b4ea7a3cb5c07d94"},
+ {file = "matplotlib-3.8.4.tar.gz", hash = "sha256:8aac397d5e9ec158960e31c381c5ffc52ddd52bd9a47717e2a694038167dffea"},
]
[package.dependencies]
@@ -2310,7 +2314,7 @@ cycler = ">=0.10"
fonttools = ">=4.22.0"
importlib-resources = {version = ">=3.2.0", markers = "python_version < \"3.10\""}
kiwisolver = ">=1.3.1"
-numpy = ">=1.21,<2"
+numpy = ">=1.21"
packaging = ">=20.0"
pillow = ">=8"
pyparsing = ">=2.3.1"
@@ -2843,19 +2847,19 @@ webpdf = ["playwright"]
[[package]]
name = "nbformat"
-version = "5.10.3"
+version = "5.10.4"
description = "The Jupyter Notebook format"
optional = true
python-versions = ">=3.8"
files = [
- {file = "nbformat-5.10.3-py3-none-any.whl", hash = "sha256:d9476ca28676799af85385f409b49d95e199951477a159a576ef2a675151e5e8"},
- {file = "nbformat-5.10.3.tar.gz", hash = "sha256:60ed5e910ef7c6264b87d644f276b1b49e24011930deef54605188ddeb211685"},
+ {file = "nbformat-5.10.4-py3-none-any.whl", hash = "sha256:3b48d6c8fbca4b299bf3982ea7db1af21580e4fec269ad087b9e81588891200b"},
+ {file = "nbformat-5.10.4.tar.gz", hash = "sha256:322168b14f937a5d11362988ecac2a4952d3d8e3a2cbeb2319584631226d5b3a"},
]
[package.dependencies]
-fastjsonschema = "*"
+fastjsonschema = ">=2.15"
jsonschema = ">=2.6"
-jupyter-core = "*"
+jupyter-core = ">=4.12,<5.0.dev0 || >=5.1.dev0"
traitlets = ">=5.1"
[package.extras]
@@ -3149,14 +3153,13 @@ files = [
[[package]]
name = "nvidia-nvjitlink-cu12"
-version = "12.4.99"
+version = "12.4.127"
description = "Nvidia JIT LTO Library"
optional = true
python-versions = ">=3"
files = [
- {file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_aarch64.whl", hash = "sha256:75d6498c96d9adb9435f2bbdbddb479805ddfb97b5c1b32395c694185c20ca57"},
- {file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-manylinux2014_x86_64.whl", hash = "sha256:c6428836d20fe7e327191c175791d38570e10762edc588fb46749217cd444c74"},
- {file = "nvidia_nvjitlink_cu12-12.4.99-py3-none-win_amd64.whl", hash = "sha256:991905ffa2144cb603d8ca7962d75c35334ae82bf92820b6ba78157277da1ad2"},
+ {file = "nvidia_nvjitlink_cu12-12.4.127-py3-none-manylinux2014_x86_64.whl", hash = "sha256:06b3b9b25bf3f8af351d664978ca26a16d2c5127dbd53c0497e28d1fb9611d57"},
+ {file = "nvidia_nvjitlink_cu12-12.4.127-py3-none-win_amd64.whl", hash = "sha256:fd9020c501d27d135f983c6d3e244b197a7ccad769e34df53a42e276b0e25fa1"},
]
[[package]]
@@ -3369,18 +3372,18 @@ tests-full = ["cloudpickle", "gmpy", "ipython", "jsonschema", "nest-asyncio", "n
[[package]]
name = "parso"
-version = "0.8.3"
+version = "0.8.4"
description = "A Python Parser"
optional = true
python-versions = ">=3.6"
files = [
- {file = "parso-0.8.3-py2.py3-none-any.whl", hash = "sha256:c001d4636cd3aecdaf33cbb40aebb59b094be2a74c556778ef5576c175e19e75"},
- {file = "parso-0.8.3.tar.gz", hash = "sha256:8c07be290bb59f03588915921e29e8a50002acaf2cdc5fa0e0114f91709fafa0"},
+ {file = "parso-0.8.4-py2.py3-none-any.whl", hash = "sha256:a418670a20291dacd2dddc80c377c5c3791378ee1e8d12bffc35420643d43f18"},
+ {file = "parso-0.8.4.tar.gz", hash = "sha256:eb3a7b58240fb99099a345571deecc0f9540ea5f4dd2fe14c2a99d6b281ab92d"},
]
[package.extras]
-qa = ["flake8 (==3.8.3)", "mypy (==0.782)"]
-testing = ["docopt", "pytest (<6.0.0)"]
+qa = ["flake8 (==5.0.4)", "mypy (==0.971)", "types-setuptools (==67.2.0.1)"]
+testing = ["docopt", "pytest"]
[[package]]
name = "partd"
@@ -4684,45 +4687,45 @@ tests = ["black (>=23.3.0)", "matplotlib (>=3.3.4)", "mypy (>=1.3)", "numpydoc (
[[package]]
name = "scipy"
-version = "1.12.0"
+version = "1.13.0"
description = "Fundamental algorithms for scientific computing in Python"
optional = false
python-versions = ">=3.9"
files = [
- {file = "scipy-1.12.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:78e4402e140879387187f7f25d91cc592b3501a2e51dfb320f48dfb73565f10b"},
- {file = "scipy-1.12.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:f5f00ebaf8de24d14b8449981a2842d404152774c1a1d880c901bf454cb8e2a1"},
- {file = "scipy-1.12.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e53958531a7c695ff66c2e7bb7b79560ffdc562e2051644c5576c39ff8efb563"},
- {file = "scipy-1.12.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5e32847e08da8d895ce09d108a494d9eb78974cf6de23063f93306a3e419960c"},
- {file = "scipy-1.12.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:4c1020cad92772bf44b8e4cdabc1df5d87376cb219742549ef69fc9fd86282dd"},
- {file = "scipy-1.12.0-cp310-cp310-win_amd64.whl", hash = "sha256:75ea2a144096b5e39402e2ff53a36fecfd3b960d786b7efd3c180e29c39e53f2"},
- {file = "scipy-1.12.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:408c68423f9de16cb9e602528be4ce0d6312b05001f3de61fe9ec8b1263cad08"},
- {file = "scipy-1.12.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:5adfad5dbf0163397beb4aca679187d24aec085343755fcdbdeb32b3679f254c"},
- {file = "scipy-1.12.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c3003652496f6e7c387b1cf63f4bb720951cfa18907e998ea551e6de51a04467"},
- {file = "scipy-1.12.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8b8066bce124ee5531d12a74b617d9ac0ea59245246410e19bca549656d9a40a"},
- {file = "scipy-1.12.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:8bee4993817e204d761dba10dbab0774ba5a8612e57e81319ea04d84945375ba"},
- {file = "scipy-1.12.0-cp311-cp311-win_amd64.whl", hash = "sha256:a24024d45ce9a675c1fb8494e8e5244efea1c7a09c60beb1eeb80373d0fecc70"},
- {file = "scipy-1.12.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:e7e76cc48638228212c747ada851ef355c2bb5e7f939e10952bc504c11f4e372"},
- {file = "scipy-1.12.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:f7ce148dffcd64ade37b2df9315541f9adad6efcaa86866ee7dd5db0c8f041c3"},
- {file = "scipy-1.12.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c39f92041f490422924dfdb782527a4abddf4707616e07b021de33467f917bc"},
- {file = "scipy-1.12.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7ebda398f86e56178c2fa94cad15bf457a218a54a35c2a7b4490b9f9cb2676c"},
- {file = "scipy-1.12.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:95e5c750d55cf518c398a8240571b0e0782c2d5a703250872f36eaf737751338"},
- {file = "scipy-1.12.0-cp312-cp312-win_amd64.whl", hash = "sha256:e646d8571804a304e1da01040d21577685ce8e2db08ac58e543eaca063453e1c"},
- {file = "scipy-1.12.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:913d6e7956c3a671de3b05ccb66b11bc293f56bfdef040583a7221d9e22a2e35"},
- {file = "scipy-1.12.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:bba1b0c7256ad75401c73e4b3cf09d1f176e9bd4248f0d3112170fb2ec4db067"},
- {file = "scipy-1.12.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:730badef9b827b368f351eacae2e82da414e13cf8bd5051b4bdfd720271a5371"},
- {file = "scipy-1.12.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6546dc2c11a9df6926afcbdd8a3edec28566e4e785b915e849348c6dd9f3f490"},
- {file = "scipy-1.12.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:196ebad3a4882081f62a5bf4aeb7326aa34b110e533aab23e4374fcccb0890dc"},
- {file = "scipy-1.12.0-cp39-cp39-win_amd64.whl", hash = "sha256:b360f1b6b2f742781299514e99ff560d1fe9bd1bff2712894b52abe528d1fd1e"},
- {file = "scipy-1.12.0.tar.gz", hash = "sha256:4bf5abab8a36d20193c698b0f1fc282c1d083c94723902c447e5d2f1780936a3"},
-]
-
-[package.dependencies]
-numpy = ">=1.22.4,<1.29.0"
+ {file = "scipy-1.13.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ba419578ab343a4e0a77c0ef82f088238a93eef141b2b8017e46149776dfad4d"},
+ {file = "scipy-1.13.0-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:22789b56a999265431c417d462e5b7f2b487e831ca7bef5edeb56efe4c93f86e"},
+ {file = "scipy-1.13.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:05f1432ba070e90d42d7fd836462c50bf98bd08bed0aa616c359eed8a04e3922"},
+ {file = "scipy-1.13.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b8434f6f3fa49f631fae84afee424e2483289dfc30a47755b4b4e6b07b2633a4"},
+ {file = "scipy-1.13.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:dcbb9ea49b0167de4167c40eeee6e167caeef11effb0670b554d10b1e693a8b9"},
+ {file = "scipy-1.13.0-cp310-cp310-win_amd64.whl", hash = "sha256:1d2f7bb14c178f8b13ebae93f67e42b0a6b0fc50eba1cd8021c9b6e08e8fb1cd"},
+ {file = "scipy-1.13.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:0fbcf8abaf5aa2dc8d6400566c1a727aed338b5fe880cde64907596a89d576fa"},
+ {file = "scipy-1.13.0-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:5e4a756355522eb60fcd61f8372ac2549073c8788f6114449b37e9e8104f15a5"},
+ {file = "scipy-1.13.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b5acd8e1dbd8dbe38d0004b1497019b2dbbc3d70691e65d69615f8a7292865d7"},
+ {file = "scipy-1.13.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ff7dad5d24a8045d836671e082a490848e8639cabb3dbdacb29f943a678683d"},
+ {file = "scipy-1.13.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:4dca18c3ffee287ddd3bc8f1dabaf45f5305c5afc9f8ab9cbfab855e70b2df5c"},
+ {file = "scipy-1.13.0-cp311-cp311-win_amd64.whl", hash = "sha256:a2f471de4d01200718b2b8927f7d76b5d9bde18047ea0fa8bd15c5ba3f26a1d6"},
+ {file = "scipy-1.13.0-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:d0de696f589681c2802f9090fff730c218f7c51ff49bf252b6a97ec4a5d19e8b"},
+ {file = "scipy-1.13.0-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:b2a3ff461ec4756b7e8e42e1c681077349a038f0686132d623fa404c0bee2551"},
+ {file = "scipy-1.13.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6bf9fe63e7a4bf01d3645b13ff2aa6dea023d38993f42aaac81a18b1bda7a82a"},
+ {file = "scipy-1.13.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1e7626dfd91cdea5714f343ce1176b6c4745155d234f1033584154f60ef1ff42"},
+ {file = "scipy-1.13.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:109d391d720fcebf2fbe008621952b08e52907cf4c8c7efc7376822151820820"},
+ {file = "scipy-1.13.0-cp312-cp312-win_amd64.whl", hash = "sha256:8930ae3ea371d6b91c203b1032b9600d69c568e537b7988a3073dfe4d4774f21"},
+ {file = "scipy-1.13.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5407708195cb38d70fd2d6bb04b1b9dd5c92297d86e9f9daae1576bd9e06f602"},
+ {file = "scipy-1.13.0-cp39-cp39-macosx_12_0_arm64.whl", hash = "sha256:ac38c4c92951ac0f729c4c48c9e13eb3675d9986cc0c83943784d7390d540c78"},
+ {file = "scipy-1.13.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:09c74543c4fbeb67af6ce457f6a6a28e5d3739a87f62412e4a16e46f164f0ae5"},
+ {file = "scipy-1.13.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28e286bf9ac422d6beb559bc61312c348ca9b0f0dae0d7c5afde7f722d6ea13d"},
+ {file = "scipy-1.13.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:33fde20efc380bd23a78a4d26d59fc8704e9b5fd9b08841693eb46716ba13d86"},
+ {file = "scipy-1.13.0-cp39-cp39-win_amd64.whl", hash = "sha256:45c08bec71d3546d606989ba6e7daa6f0992918171e2a6f7fbedfa7361c2de1e"},
+ {file = "scipy-1.13.0.tar.gz", hash = "sha256:58569af537ea29d3f78e5abd18398459f195546bb3be23d16677fb26616cc11e"},
+]
+
+[package.dependencies]
+numpy = ">=1.22.4,<2.3"
[package.extras]
-dev = ["click", "cython-lint (>=0.12.2)", "doit (>=0.36.0)", "mypy", "pycodestyle", "pydevtool", "rich-click", "ruff", "types-psutil", "typing_extensions"]
-doc = ["jupytext", "matplotlib (>2)", "myst-nb", "numpydoc", "pooch", "pydata-sphinx-theme (==0.9.0)", "sphinx (!=4.1.0)", "sphinx-design (>=0.2.0)"]
-test = ["asv", "gmpy2", "hypothesis", "mpmath", "pooch", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
+dev = ["cython-lint (>=0.12.2)", "doit (>=0.36.0)", "mypy", "pycodestyle", "pydevtool", "rich-click", "ruff", "types-psutil", "typing_extensions"]
+doc = ["jupyterlite-pyodide-kernel", "jupyterlite-sphinx (>=0.12.0)", "jupytext", "matplotlib (>=3.5)", "myst-nb", "numpydoc", "pooch", "pydata-sphinx-theme (>=0.15.2)", "sphinx (>=5.0.0)", "sphinx-design (>=0.4.0)"]
+test = ["array-api-strict", "asv", "gmpy2", "hypothesis (>=6.30)", "mpmath", "pooch", "pytest", "pytest-cov", "pytest-timeout", "pytest-xdist", "scikit-umfpack", "threadpoolctl"]
[[package]]
name = "seaborn"
@@ -5480,63 +5483,30 @@ tutorials = ["matplotlib", "pandas", "tabulate", "torch"]
[[package]]
name = "typer"
-version = "0.12.0"
-description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
-optional = false
-python-versions = ">=3.7"
-files = [
- {file = "typer-0.12.0-py3-none-any.whl", hash = "sha256:0441a0bb8962fb4383b8537ada9f7eb2d0deda0caa2cfe7387cc221290f617e4"},
- {file = "typer-0.12.0.tar.gz", hash = "sha256:900fe786ce2d0ea44653d3c8ee4594a22a496a3104370ded770c992c5e3c542d"},
-]
-
-[package.dependencies]
-typer-cli = "0.12.0"
-typer-slim = {version = "0.12.0", extras = ["standard"]}
-
-[[package]]
-name = "typer-cli"
-version = "0.12.0"
-description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
-optional = false
-python-versions = ">=3.7"
-files = [
- {file = "typer_cli-0.12.0-py3-none-any.whl", hash = "sha256:7b7e2dd49f59974bb5a869747045d5444b17bffb851e006cd424f602d3578104"},
- {file = "typer_cli-0.12.0.tar.gz", hash = "sha256:603ed3d5a278827bd497e4dc73a39bb714b230371c8724090b0de2abdcdd9f6e"},
-]
-
-[package.dependencies]
-typer-slim = {version = "0.12.0", extras = ["standard"]}
-
-[[package]]
-name = "typer-slim"
-version = "0.12.0"
+version = "0.12.1"
description = "Typer, build great CLIs. Easy to code. Based on Python type hints."
optional = false
python-versions = ">=3.7"
files = [
- {file = "typer_slim-0.12.0-py3-none-any.whl", hash = "sha256:ddd7042b29a32140528caa415750bcae54113ba0c32270ca11a6f64069ddadf9"},
- {file = "typer_slim-0.12.0.tar.gz", hash = "sha256:3e8a3f17286b173d76dca0fd4e02651c9a2ce1467b3754876b1ac4bd72572beb"},
+ {file = "typer-0.12.1-py3-none-any.whl", hash = "sha256:43ebb23c8a358c3d623e31064359a65f50229d0bf73ae8dfd203f49d9126ae06"},
+ {file = "typer-0.12.1.tar.gz", hash = "sha256:72d218ef3c686aed9c6ff3ca25b238aee0474a1628b29c559b18b634cfdeca88"},
]
[package.dependencies]
click = ">=8.0.0"
-rich = {version = ">=10.11.0", optional = true, markers = "extra == \"standard\""}
-shellingham = {version = ">=1.3.0", optional = true, markers = "extra == \"standard\""}
+rich = ">=10.11.0"
+shellingham = ">=1.3.0"
typing-extensions = ">=3.7.4.3"
-[package.extras]
-all = ["rich (>=10.11.0)", "shellingham (>=1.3.0)"]
-standard = ["rich (>=10.11.0)", "shellingham (>=1.3.0)"]
-
[[package]]
name = "typing-extensions"
-version = "4.10.0"
+version = "4.11.0"
description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false
python-versions = ">=3.8"
files = [
- {file = "typing_extensions-4.10.0-py3-none-any.whl", hash = "sha256:69b1a937c3a517342112fb4c6df7e72fc39a38e7891a5730ed4985b5214b5475"},
- {file = "typing_extensions-4.10.0.tar.gz", hash = "sha256:b0abd7c89e8fb96f98db18d86106ff1d90ab692004eb746cf6eda2682f91b3cb"},
+ {file = "typing_extensions-4.11.0-py3-none-any.whl", hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"},
+ {file = "typing_extensions-4.11.0.tar.gz", hash = "sha256:83f085bd5ca59c80295fc2a82ab5dac679cbe02b9f33f7d83af68e241bea51b0"},
]
[[package]]
@@ -5552,12 +5522,13 @@ files = [
[[package]]
name = "umap-learn"
-version = "0.5.5"
+version = "0.5.6"
description = "Uniform Manifold Approximation and Projection"
optional = false
python-versions = "*"
files = [
- {file = "umap-learn-0.5.5.tar.gz", hash = "sha256:c54d607364413eade968b73ba07c8b3ea14412817f53cd07b6f720ac957293c4"},
+ {file = "umap-learn-0.5.6.tar.gz", hash = "sha256:5b3917a862c23ba0fc83bfcd67a7b719dec85b3d9c01fdc7d894cce455df4e03"},
+ {file = "umap_learn-0.5.6-py3-none-any.whl", hash = "sha256:881cc0c2ee845b790bf0455aa1664f9f68b838d9d0fe12a1291b85c5a559c913"},
]
[package.dependencies]
@@ -5569,7 +5540,7 @@ scipy = ">=1.3.1"
tqdm = "*"
[package.extras]
-parametric-umap = ["tensorflow (>=2.1)", "tensorflow-probability (>=0.10)"]
+parametric-umap = ["tensorflow (>=2.1)"]
plot = ["bokeh", "colorcet", "datashader", "holoviews", "matplotlib", "pandas", "scikit-image", "seaborn"]
tbb = ["tbb (>=2019.0)"]
@@ -5959,20 +5930,20 @@ pyyaml = ">=6.0,<7.0"
[[package]]
name = "zarr"
-version = "2.17.1"
+version = "2.17.2"
description = "An implementation of chunked, compressed, N-dimensional arrays for Python"
optional = false
python-versions = ">=3.9"
files = [
- {file = "zarr-2.17.1-py3-none-any.whl", hash = "sha256:e25df2741a6e92645f3890f30f3136d5b57a0f8f831094b024bbcab5f2797bc7"},
- {file = "zarr-2.17.1.tar.gz", hash = "sha256:564b3aa072122546fe69a0fa21736f466b20fad41754334b62619f088ce46261"},
+ {file = "zarr-2.17.2-py3-none-any.whl", hash = "sha256:70d7cc07c24280c380ef80644151d136b7503b0d83c9f214e8000ddc0f57f69b"},
+ {file = "zarr-2.17.2.tar.gz", hash = "sha256:2cbaa6cb4e342d45152d4a7a4b2013c337fcd3a8e7bc98253560180de60552ce"},
]
[package.dependencies]
asciitree = "*"
fasteners = {version = "*", markers = "sys_platform != \"emscripten\""}
numcodecs = ">=0.10.0"
-numpy = ">=1.21.1"
+numpy = ">=1.23"
[package.extras]
docs = ["numcodecs[msgpack]", "numpydoc", "pydata-sphinx-theme", "sphinx", "sphinx-automodapi", "sphinx-copybutton", "sphinx-design", "sphinx-issues"]
diff --git a/pyproject.toml b/pyproject.toml
index 51b77ca6..657797fd 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "sopa"
-version = "1.0.9"
+version = "1.0.10"
description = "Spatial-omics pipeline and analysis"
documentation = "https://gustaveroussy.github.io/sopa"
homepage = "https://gustaveroussy.github.io/sopa"
diff --git a/sopa/annotation/tangram/run.py b/sopa/annotation/tangram/run.py
index 85507af2..ae068c6d 100644
--- a/sopa/annotation/tangram/run.py
+++ b/sopa/annotation/tangram/run.py
@@ -208,6 +208,9 @@ def run(self):
log.info("Finished running Tangram")
+ if SopaKeys.UNS_KEY not in self.ad_sp.uns:
+ self.ad_sp.uns[SopaKeys.UNS_KEY] = {}
+
self.ad_sp.uns[SopaKeys.UNS_KEY][SopaKeys.UNS_CELL_TYPES] = [
self.level_obs_key(level) for level in range(self.levels)
]
diff --git a/sopa/cli/explorer.py b/sopa/cli/explorer.py
index bd6b3a0e..8bf23f2e 100644
--- a/sopa/cli/explorer.py
+++ b/sopa/cli/explorer.py
@@ -132,7 +132,7 @@ def add_aligned(
from sopa.io.explorer.images import align
sdata = spatialdata.read_zarr(sdata_path)
- image = io.imaging.ome_tif(image_path, as_image=True)
+ image = io.ome_tif(image_path, as_image=True)
align(
sdata, image, transformation_matrix_path, overwrite=overwrite, image_key=original_image_key
diff --git a/sopa/io/__init__.py b/sopa/io/__init__.py
index c4eaff1a..25b01775 100644
--- a/sopa/io/__init__.py
+++ b/sopa/io/__init__.py
@@ -1,8 +1,13 @@
-from .imaging import macsima, phenocycler, hyperion, ome_tif
from .explorer import write, align
from .standardize import write_standardized
-from .transcriptomics import merscope, xenium, cosmx
-from .histopathology import wsi, wsi_autoscale
+from .reader.cosmx import cosmx
+from .reader.merscope import merscope
+from .reader.xenium import xenium
+from .reader.macsima import macsima
+from .reader.phenocycler import phenocycler
+from .reader.hyperion import hyperion
+from .reader.utils import ome_tif
+from .reader.wsi import wsi, wsi_autoscale
from .report import write_report
from ..utils.data import blobs, uniform
diff --git a/sopa/io/imaging.py b/sopa/io/imaging.py
deleted file mode 100644
index 063b69ad..00000000
--- a/sopa/io/imaging.py
+++ /dev/null
@@ -1,305 +0,0 @@
-# Readers for multiplex-imaging technologies
-# In the future, we will completely rely on spatialdata-io (when all these functions exist)
-
-from __future__ import annotations
-
-import logging
-import re
-from pathlib import Path
-from typing import Callable
-
-import dask.array as da
-import numpy as np
-import pandas as pd
-import tifffile as tf
-from dask.delayed import delayed
-from dask_image.imread import imread
-from spatial_image import SpatialImage
-from spatialdata import SpatialData
-from spatialdata.models import Image2DModel
-from spatialdata.transformations import Identity
-
-log = logging.getLogger(__name__)
-
-
-def _deduplicate_names(df):
- is_duplicated = df[0].duplicated(keep=False)
- df.loc[is_duplicated, 0] += " (" + df.loc[is_duplicated, 1] + ")"
- return df[0].values
-
-
-def _parse_name_macsima(file):
- index = file.name[2:5] if file.name[0] == "C" else file.name[:3]
- match = re.search(r"_A-(.*?)_C-", file.name)
- if match:
- antibody = match.group(1)
- channel = re.search(r"_C-(.*?)\.tif", file.name).group(1)
- uid = f"{channel}-{index}"
- else:
- antibody = re.search(r"_A-(.*?)\.tif", file.name).group(1)
- uid = index
- return [antibody, uid]
-
-
-def _get_channel_names_macsima(files):
- df_antibodies = pd.DataFrame([_parse_name_macsima(file) for file in files])
- return _deduplicate_names(df_antibodies)
-
-
-def _default_image_models_kwargs(image_models_kwargs: dict | None = None):
- image_models_kwargs = {} if image_models_kwargs is None else image_models_kwargs
-
- if "chunks" not in image_models_kwargs:
- image_models_kwargs["chunks"] = (1, 1024, 1024)
-
- if "scale_factors" not in image_models_kwargs:
- image_models_kwargs["scale_factors"] = [2, 2, 2, 2]
-
- return image_models_kwargs
-
-
-def macsima(path: Path, **kwargs: int) -> SpatialData:
- """Read MACSIMA data as a `SpatialData` object
-
- Notes:
- For all dulicated name, their index will be added in brackets after, for instance you will often find `DAPI (000)` to indicate the DAPI channel of index `000`
-
- Args:
- path: Path to the directory containing the MACSIMA `.tif` images
- kwargs: Kwargs for `_general_tif_directory_reader`
-
- Returns:
- A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
- """
- return _general_tif_directory_reader(
- path, files_to_channels=_get_channel_names_macsima, **kwargs
- )
-
-
-def _get_files_stem(files: list[Path]):
- return [file.stem for file in files]
-
-
-def _general_tif_directory_reader(
- path: str,
- files_to_channels: Callable = _get_files_stem,
- suffix: str = ".tif",
- image_models_kwargs: dict | None = None,
- imread_kwargs: dict | None = None,
-):
- image_models_kwargs = _default_image_models_kwargs(image_models_kwargs)
- imread_kwargs = {} if imread_kwargs is None else imread_kwargs
-
- files = [file for file in Path(path).iterdir() if file.suffix == suffix]
-
- names = files_to_channels(files)
- image = da.concatenate(
- [imread(file, **imread_kwargs) for file in files],
- axis=0,
- )
- image = image.rechunk(chunks=image_models_kwargs["chunks"])
-
- log.info(f"Found channel names {names}")
-
- image_name = Path(path).absolute().stem
- image = Image2DModel.parse(
- image,
- dims=("c", "y", "x"),
- transformations={"pixels": Identity()},
- c_coords=names,
- **image_models_kwargs,
- )
-
- return SpatialData(images={image_name: image})
-
-
-def _get_channel_name_qptiff(description):
- import xml.etree.ElementTree as ET
-
- root = ET.fromstring(description)
-
- for xml_path in [".//Biomarker", ".//ExcitationFilter//Bands//Name"]:
- field = root.find(xml_path)
- if field is not None:
- return field.text
-
-    return re.search(r"<Name>(.*?)</Name>", description).group(1)
-
-
-def _get_channel_names_qptiff(page_series):
- df_names = pd.DataFrame(
- [[_get_channel_name_qptiff(page.description), str(i)] for i, page in enumerate(page_series)]
- )
- return _deduplicate_names(df_names)
-
-
-def _get_IJ_channel_names(path: str) -> list[str]:
- with tf.TiffFile(path) as tif:
- default_names = [str(i) for i in range(len(tif.pages))]
-
- if len(tif.pages) > 1:
- ij_metadata_tag = tif.pages[0].tags.get("IJMetadata", None)
-
- if ij_metadata_tag and "Labels" in ij_metadata_tag.value:
- return ij_metadata_tag.value["Labels"]
-
- log.warn("Could not find channel names in IJMetadata.")
- return default_names
-
- log.warn("The TIF file does not have multiple channels.")
- return default_names
-
-
-def _rename_channels(names: list[str], channels_renaming: dict | None = None):
- log.info(f"Found channel names {names}")
- if channels_renaming is not None and len(channels_renaming):
- log.info(f"Channels will be renamed by the dictionnary: {channels_renaming}")
- names = [channels_renaming.get(name, name) for name in names]
- log.info(f"New names are: {names}")
- return names
-
-
-def phenocycler(
- path: str | Path, channels_renaming: dict | None = None, image_models_kwargs: dict | None = None
-) -> SpatialData:
- """Read Phenocycler data as a `SpatialData` object
-
- Args:
- path: Path to a `.qptiff` file, or a `.tif` file (if exported from QuPath)
- channels_renaming: A dictionnary whose keys correspond to channels and values to their corresponding new name. Not all channels need to be renamed.
- image_models_kwargs: Kwargs provided to the `Image2DModel`
-
- Returns:
- A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
- """
- image_models_kwargs = _default_image_models_kwargs(image_models_kwargs)
-
- path = Path(path)
- image_name = path.absolute().stem
-
- if path.suffix == ".qptiff":
- with tf.TiffFile(path) as tif:
- series = tif.series[0]
- names = _get_channel_names_qptiff(series)
-
- delayed_image = delayed(lambda series: series.asarray())(tif)
- image = da.from_delayed(delayed_image, dtype=series.dtype, shape=series.shape)
- elif path.suffix == ".tif":
- image = imread(path)
- names = _get_IJ_channel_names(path)
- else:
- raise ValueError(f"Unsupported file extension {path.suffix}. Must be '.qptiff' or '.tif'.")
-
- names = _rename_channels(names, channels_renaming)
- image = image.rechunk(chunks=image_models_kwargs["chunks"])
-
- image = Image2DModel.parse(
- image,
- dims=("c", "y", "x"),
- transformations={"pixels": Identity()},
- c_coords=names,
- **image_models_kwargs,
- )
-
- return SpatialData(images={image_name: image})
-
-
-def _get_channel_names_hyperion(files: list[Path]):
- return [file.name[:-9].split("_")[1] for file in files]
-
-
-def hyperion(
- path: Path, image_models_kwargs: dict | None = None, imread_kwargs: dict | None = None
-) -> SpatialData:
- """Read Hyperion data as a `SpatialData` object
-
- Args:
- path: Path to the directory containing the Hyperion `.tiff` images
- image_models_kwargs: Kwargs provided to the `Image2DModel`
- imread_kwargs: Kwargs provided to `dask_image.imread.imread`
-
- Returns:
- A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
- """
- image_models_kwargs = _default_image_models_kwargs(image_models_kwargs)
- imread_kwargs = {} if imread_kwargs is None else imread_kwargs
-
- files = [file for file in Path(path).iterdir() if file.suffix == ".tiff"]
-
- names = _get_channel_names_hyperion(files)
- image = da.concatenate(
- [imread(file, **imread_kwargs) for file in files],
- axis=0,
- )
- image = (image / image.max(axis=(1, 2)).compute()[:, None, None] * 255).astype(np.uint8)
- image = image.rechunk(chunks=image_models_kwargs["chunks"])
-
- log.info(f"Found channel names {names}")
-
- image_name = Path(path).absolute().stem
- image = Image2DModel.parse(
- image,
- dims=("c", "y", "x"),
- transformations={"pixels": Identity()},
- c_coords=names,
- **image_models_kwargs,
- )
-
- return SpatialData(images={image_name: image})
-
-
-def _ome_channels_names(path: str):
- import xml.etree.ElementTree as ET
-
- tiff = tf.TiffFile(path)
- omexml_string = tiff.pages[0].description
-
- root = ET.fromstring(omexml_string)
- namespaces = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2016-06"}
- channels = root.findall("ome:Image[1]/ome:Pixels/ome:Channel", namespaces)
- return [c.attrib["Name"] if "Name" in c.attrib else c.attrib["ID"] for c in channels]
-
-
-def ome_tif(path: Path, as_image: bool = False) -> SpatialImage | SpatialData:
- """Read an `.ome.tif` image. This image should be a 2D image (with possibly multiple channels).
- Typically, this function can be used to open Xenium IF images.
-
- Args:
- path: Path to the `.ome.tif` image
- as_image: If `True`, will return a `SpatialImage` object
-
- Returns:
- A `SpatialImage` or a `SpatialData` object
- """
- image_models_kwargs = _default_image_models_kwargs()
- image_name = Path(path).absolute().name.split(".")[0]
- image: da.Array = imread(path)
-
- if image.ndim == 4:
- assert image.shape[0] == 1, "4D images not supported"
- image = da.moveaxis(image[0], 2, 0)
- log.info(f"Transformed 4D image into a 3D image of shape (c, y, x) = {image.shape}")
- elif image.ndim != 3:
- raise ValueError(f"Number of dimensions not supported: {image.ndim}")
-
- image = image.rechunk(chunks=image_models_kwargs["chunks"])
-
- channel_names = _ome_channels_names(path)
- if len(channel_names) != len(image):
- channel_names = [str(i) for i in range(len(image))]
- log.warn(f"Channel names couldn't be read. Using {channel_names} instead.")
-
- image = SpatialImage(image, dims=["c", "y", "x"], name=image_name, coords={"c": channel_names})
-
- if as_image:
- return image
-
- image = Image2DModel.parse(
- image,
- dims=("c", "y", "x"),
- c_coords=channel_names,
- transformations={"pixels": Identity()},
- **image_models_kwargs,
- )
-
- return SpatialData(images={image_name: image})
diff --git a/sopa/io/reader/__init__.py b/sopa/io/reader/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/sopa/io/reader/cosmx.py b/sopa/io/reader/cosmx.py
new file mode 100644
index 00000000..8259f470
--- /dev/null
+++ b/sopa/io/reader/cosmx.py
@@ -0,0 +1,275 @@
+from __future__ import annotations
+
+import logging
+import re
+from pathlib import Path
+from typing import Optional
+
+import dask.array as da
+import pandas as pd
+import tifffile
+from dask_image.imread import imread
+from spatialdata import SpatialData
+from spatialdata.models import Image2DModel, PointsModel
+from spatialdata_io._constants._constants import CosmxKeys
+
+from .utils import _default_image_kwargs
+
+log = logging.getLogger(__name__)
+
+
+def cosmx(
+ path: str | Path,
+ dataset_id: Optional[str] = None,
+ fov: int | str | None = None,
+ read_proteins: bool = False,
+ image_models_kwargs: dict | None = None,
+ imread_kwargs: dict | None = None,
+) -> SpatialData:
+ """
+    Read *CosMX Nanostring* data. The fields of view are stitched together, unless `fov` is provided.
+
+ This function reads the following files:
+    - `*_fov_positions_file.csv` or `*_fov_positions_file.csv.gz`: FOV locations
+    - `Morphology2D` directory: all the FOV morphology images
+    - `Morphology_ChannelID_Dictionary.txt`: Morphology channel names
+    - `*_tx_file.csv.gz` or `*_tx_file.csv`: Transcript locations and names
+    - If `read_proteins` is `True`, all the images under the nested `ProteinImages` directories will be read
+
+ Args:
+ path: Path to the root directory containing *Nanostring* files.
+        dataset_id: Optional name of the dataset (must be provided if it cannot be inferred).
+        fov: Name or number of a single field of view to be read. If a string is provided, the correct syntax is e.g. "F008". By default, all FOVs are read.
+        read_proteins: If `True`, read the protein images instead of the transcripts.
+ image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
+ imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.
+
+ Returns:
+ A `SpatialData` object representing the CosMX experiment
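+
+    Example:
+        A minimal sketch (the input path is hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.cosmx("/path/to/cosmx_directory")  # read and stitch all FOVs
+        >>> sdata_fov = io.cosmx("/path/to/cosmx_directory", fov="F008")  # a single FOV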
+ """
+ path = Path(path)
+ image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
+
+ dataset_id = _infer_dataset_id(path, dataset_id)
+ fov_locs = _read_cosmx_fov_locs(path, dataset_id)
+ fov_id, fov = _check_fov_id(fov)
+
+ protein_dir_dict = {}
+ if read_proteins:
+ protein_dir_dict = {
+ int(protein_dir.parent.name[3:]): protein_dir
+ for protein_dir in list(path.rglob("**/ProteinImages"))
+ }
+ assert len(protein_dir_dict), f"No directory called 'ProteinImages' was found under {path}"
+
+ ### Read image(s)
+ images_dir = _find_dir(path, "Morphology2D")
+ if fov is None:
+ image, protein_names = _read_stitched_image(
+ images_dir,
+ fov_locs,
+ protein_dir_dict,
+ **imread_kwargs,
+ )
+ image = image.rechunk(image_models_kwargs["chunks"])
+ image_name = "stitched_image"
+ else:
+ pattern = f"*{fov_id}.TIF"
+ fov_files = list(images_dir.rglob(pattern))
+
+ assert len(fov_files), f"No file matches the pattern {pattern} inside {images_dir}"
+ assert (
+ len(fov_files) == 1
+        ), f"Multiple files match the pattern {pattern}: {', '.join(map(str, fov_files))}"
+
+ image, protein_names = _read_fov_image(
+            fov_files[0], protein_dir_dict.get(fov), **imread_kwargs
+ )
+ image_name = f"{fov}_image"
+
+ c_coords = _cosmx_c_coords(path, image.shape[0], protein_names)
+
+ parsed_image = Image2DModel.parse(
+ image, dims=("c", "y", "x"), c_coords=c_coords, **image_models_kwargs
+ )
+
+ if read_proteins:
+ return SpatialData(images={image_name: parsed_image})
+
+ ### Read transcripts
+ transcripts_data = _read_cosmx_csv(path, dataset_id)
+
+ if fov is None:
+ transcripts_data["x"] = transcripts_data["x_global_px"] - fov_locs["xmin"].min()
+ transcripts_data["y"] = transcripts_data["y_global_px"] - fov_locs["ymin"].min()
+ coordinates = None
+ points_name = "points"
+ else:
+ transcripts_data = transcripts_data[transcripts_data["fov"] == fov]
+ coordinates = {"x": "x_local_px", "y": "y_local_px"}
+ points_name = f"{fov}_points"
+
+ transcripts = PointsModel.parse(
+ transcripts_data,
+ coordinates=coordinates,
+ feature_key=CosmxKeys.TARGET_OF_TRANSCRIPT,
+ )
+
+ return SpatialData(images={image_name: parsed_image}, points={points_name: transcripts})
+
+
+def _read_fov_image(
+ morphology_path: Path, protein_path: Path | None, **imread_kwargs
+) -> tuple[da.Array, list[str] | None]:
+ image = imread(morphology_path, **imread_kwargs)
+
+ protein_names = None
+ if protein_path is not None:
+ protein_image, protein_names = _read_protein_fov(protein_path)
+ image = da.concatenate([image, protein_image], axis=0)
+
+ return image, protein_names
+
+
+def _infer_dataset_id(path: Path, dataset_id: str | None) -> str:
+ if isinstance(dataset_id, str):
+ return dataset_id
+
+ for suffix in [".csv", ".csv.gz"]:
+ counts_files = list(path.rglob(f"*_fov_positions_file{suffix}"))
+
+ if len(counts_files) == 1:
+ found = re.match(rf"(.*)_fov_positions_file{suffix}", str(counts_files[0]))
+ if found:
+ return found.group(1)
+
+ raise ValueError(
+        "Could not infer `dataset_id` from the name of the FOV positions file. Please specify it manually."
+ )
+
+
+def _read_cosmx_fov_locs(path: Path, dataset_id: str) -> pd.DataFrame:
+ fov_file = path / f"{dataset_id}_fov_positions_file.csv"
+
+ if not fov_file.exists():
+ fov_file = path / f"{dataset_id}_fov_positions_file.csv.gz"
+
+ assert fov_file.exists(), f"Missing field of view file: {fov_file}"
+
+ fov_locs = pd.read_csv(fov_file, index_col=1)
+
+ pixel_size = 0.120280945 # size of a pixel in microns
+
+ fov_locs["xmin"] = fov_locs["X_mm"] * 1e3 / pixel_size
+ fov_locs["xmax"] = 0 # will be filled when reading the images
+
+ fov_locs["ymin"] = 0 # will be filled when reading the images
+ fov_locs["ymax"] = fov_locs["Y_mm"] * 1e3 / pixel_size
+
+ return fov_locs
+
+
+def _read_stitched_image(
+ images_dir: Path, fov_locs: pd.DataFrame, protein_dir_dict: dict, **imread_kwargs
+) -> tuple[da.Array, list[str] | None]:
+    log.warning("Image stitching is currently experimental")
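+    # each FOV image is flipped vertically, then placed on a global canvas using the FOV positions file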
+
+ fov_images = {}
+ protein_names = None
+ pattern = re.compile(r".*_F(\d+)")
+ for image_path in images_dir.iterdir():
+ if image_path.suffix == ".TIF":
+ fov = int(pattern.findall(image_path.name)[0])
+
+ image, protein_names = _read_fov_image(
+ image_path, protein_dir_dict.get(fov), **imread_kwargs
+ )
+
+ fov_images[fov] = da.flip(image, axis=1)
+
+ fov_locs.loc[fov, "xmax"] = fov_locs.loc[fov, "xmin"] + image.shape[2]
+ fov_locs.loc[fov, "ymin"] = fov_locs.loc[fov, "ymax"] - image.shape[1]
+
+ for dim in ["x", "y"]:
+ shift = fov_locs[f"{dim}min"].min()
+ fov_locs[f"{dim}0"] = (fov_locs[f"{dim}min"] - shift).round().astype(int)
+ fov_locs[f"{dim}1"] = (fov_locs[f"{dim}max"] - shift).round().astype(int)
+
+ stitched_image = da.zeros(
+ shape=(image.shape[0], fov_locs["y1"].max(), fov_locs["x1"].max()), dtype=image.dtype
+ )
+
+ for fov, im in fov_images.items():
+ xmin, xmax = fov_locs.loc[fov, "x0"], fov_locs.loc[fov, "x1"]
+ ymin, ymax = fov_locs.loc[fov, "y0"], fov_locs.loc[fov, "y1"]
+ stitched_image[:, ymin:ymax, xmin:xmax] = im
+
+ return stitched_image, protein_names
+
+
+def _check_fov_id(fov: str | int | None) -> tuple[str, int]:
+ if fov is None:
+ return None, None
+
+ if isinstance(fov, int):
+ return f"F{fov:0>3}", fov
+
+ assert (
+ fov[0] == "F" and len(fov) == 4 and all(c.isdigit() for c in fov[1:])
+    ), f"'fov' needs to start with an 'F' followed by three digits. Found '{fov}'."
+
+ return fov, int(fov[1:])
+
+
+def _read_cosmx_csv(path: Path, dataset_id: str) -> pd.DataFrame:
+ transcripts_file = path / f"{dataset_id}_tx_file.csv.gz"
+
+ if transcripts_file.exists():
+ return pd.read_csv(transcripts_file, compression="gzip")
+
+ transcripts_file = path / f"{dataset_id}_tx_file.csv"
+
+ assert transcripts_file.exists(), f"Transcript file {transcripts_file} not found."
+
+ return pd.read_csv(transcripts_file)
+
+
+def _cosmx_c_coords(path: Path, n_channels: int, protein_names: list[str] | None) -> list[str]:
+ channel_ids_path = path / "Morphology_ChannelID_Dictionary.txt"
+
+ if channel_ids_path.exists():
+ channel_names = list(pd.read_csv(channel_ids_path, delimiter="\t")["BiologicalTarget"])
+ else:
+ n_channels = n_channels - len(protein_names) if protein_names is not None else n_channels
+ channel_names = [str(i) for i in range(n_channels)]
+        log.warning(f"Channel file not found at {channel_ids_path}, using {channel_names=} instead.")
+
+ if protein_names is not None:
+ channel_names += protein_names
+
+ return channel_names
+
+
+def _find_dir(path: Path, name: str):
+ if (path / name).is_dir():
+ return path / name
+
+ paths = list(path.rglob(f"**/{name}"))
+ assert len(paths) == 1, f"Found {len(paths)} path(s) with name {name} inside {path}"
+
+ return paths[0]
+
+
+def _get_cosmx_protein_name(image_path: Path) -> str:
+ with tifffile.TiffFile(image_path) as tif:
+ description = tif.pages[0].description
+ substrings = re.findall(r'"DisplayName": "(.*?)",', description)
+ return substrings[0]
+
+
+def _read_protein_fov(protein_dir: Path) -> tuple[da.Array, list[str]]:
+ images_paths = list(protein_dir.rglob("*.TIF"))
+ protein_image = da.concatenate([imread(image_path) for image_path in images_paths], axis=0)
+ channel_names = [_get_cosmx_protein_name(image_path) for image_path in images_paths]
+
+ return protein_image, channel_names
diff --git a/sopa/io/reader/hyperion.py b/sopa/io/reader/hyperion.py
new file mode 100644
index 00000000..750d97ed
--- /dev/null
+++ b/sopa/io/reader/hyperion.py
@@ -0,0 +1,61 @@
+# Readers for multiplex-imaging technologies
+# In the future, we will completely rely on spatialdata-io (when all these functions exist)
+
+from __future__ import annotations
+
+import logging
+from pathlib import Path
+
+import dask.array as da
+import numpy as np
+from dask_image.imread import imread
+from spatialdata import SpatialData
+from spatialdata.models import Image2DModel
+from spatialdata.transformations import Identity
+
+from .utils import _default_image_kwargs
+
+log = logging.getLogger(__name__)
+
+
+def hyperion(
+ path: Path, image_models_kwargs: dict | None = None, imread_kwargs: dict | None = None
+) -> SpatialData:
+ """Read Hyperion data as a `SpatialData` object
+
+ Args:
+ path: Path to the directory containing the Hyperion `.tiff` images
+ image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
+ imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.
+
+ Returns:
+ A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
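+
+    Example:
+        A minimal sketch (the directory path is hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.hyperion("/path/to/hyperion_directory")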
+ """
+ image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
+
+ files = [file for file in Path(path).iterdir() if file.suffix == ".tiff"]
+
+ names = _get_channel_names_hyperion(files)
+ image = da.concatenate(
+ [imread(file, **imread_kwargs) for file in files],
+ axis=0,
+ )
+ image = (image / image.max(axis=(1, 2)).compute()[:, None, None] * 255).astype(np.uint8)
+ image = image.rechunk(chunks=image_models_kwargs["chunks"])
+
+ log.info(f"Found channel names {names}")
+
+ image_name = Path(path).absolute().stem
+ image = Image2DModel.parse(
+ image,
+ dims=("c", "y", "x"),
+ transformations={"pixels": Identity()},
+ c_coords=names,
+ **image_models_kwargs,
+ )
+
+ return SpatialData(images={image_name: image})
+
+
+def _get_channel_names_hyperion(files: list[Path]):
+ return [file.name[:-9].split("_")[1] for file in files]
diff --git a/sopa/io/reader/macsima.py b/sopa/io/reader/macsima.py
new file mode 100644
index 00000000..54feb1c3
--- /dev/null
+++ b/sopa/io/reader/macsima.py
@@ -0,0 +1,51 @@
+# Readers for multiplex-imaging technologies
+# In the future, we will completely rely on spatialdata-io (when all these functions exist)
+
+from __future__ import annotations
+
+import logging
+import re
+from pathlib import Path
+
+import pandas as pd
+from spatialdata import SpatialData
+
+from .utils import _deduplicate_names, _general_tif_directory_reader
+
+log = logging.getLogger(__name__)
+
+
+def macsima(path: Path, **kwargs: int) -> SpatialData:
+ """Read MACSIMA data as a `SpatialData` object
+
+ Notes:
+        For each duplicated channel name, its index is appended in parentheses; for instance, you will often find `DAPI (000)` for the DAPI channel of index `000`.
+
+ Args:
+ path: Path to the directory containing the MACSIMA `.tif` images
+ kwargs: Kwargs for `_general_tif_directory_reader`
+
+ Returns:
+ A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
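+
+    Example:
+        A minimal sketch (the directory path is hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.macsima("/path/to/macsima_directory")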
+ """
+ return _general_tif_directory_reader(
+ path, files_to_channels=_get_channel_names_macsima, **kwargs
+ )
+
+
+def _parse_name_macsima(file):
+ index = file.name[2:5] if file.name[0] == "C" else file.name[:3]
+ match = re.search(r"_A-(.*?)_C-", file.name)
+ if match:
+ antibody = match.group(1)
+ channel = re.search(r"_C-(.*?)\.tif", file.name).group(1)
+ uid = f"{channel}-{index}"
+ else:
+ antibody = re.search(r"_A-(.*?)\.tif", file.name).group(1)
+ uid = index
+ return [antibody, uid]
+
+
+def _get_channel_names_macsima(files):
+ df_antibodies = pd.DataFrame([_parse_name_macsima(file) for file in files])
+ return _deduplicate_names(df_antibodies)
diff --git a/sopa/io/transcriptomics.py b/sopa/io/reader/merscope.py
similarity index 50%
rename from sopa/io/transcriptomics.py
rename to sopa/io/reader/merscope.py
index 91cc060d..0747333a 100644
--- a/sopa/io/transcriptomics.py
+++ b/sopa/io/reader/merscope.py
@@ -1,41 +1,27 @@
-# Readers for spatial-transcriptomics technologies
# Updated from spatialdata-io: https://spatialdata.scverse.org/projects/io/en/latest/
# In the future, we will completely rely on spatialdata-io (when stable enough)
from __future__ import annotations
-import json
import logging
import re
import warnings
-from collections.abc import Mapping
from pathlib import Path
-from types import MappingProxyType
-from typing import Any
+import dask.array as da
import dask.dataframe as dd
import numpy as np
-import spatialdata_io
import xarray
-from dask import array as da
-from dask.dataframe import read_parquet
from dask_image.imread import imread
from spatialdata import SpatialData
from spatialdata._logging import logger
from spatialdata.models import Image2DModel, PointsModel
-from spatialdata.transformations import Affine, Identity, Scale
-from spatialdata_io._constants._constants import MerscopeKeys, XeniumKeys
+from spatialdata.transformations import Affine, Identity
+from spatialdata_io._constants._constants import MerscopeKeys
-log = logging.getLogger(__name__)
-
-
-def _get_channel_names(images_dir: Path) -> list[str]:
-    exp = r"mosaic_(?P<stain>[\w|-]+[0-9]?)_z(?P<z>[0-9]+).tif"
- matches = [re.search(exp, file.name) for file in images_dir.iterdir()]
-
- stainings = {match.group("stain") for match in matches if match}
+from .utils import _default_image_kwargs
- return list(stainings)
+log = logging.getLogger(__name__)
SUPPORTED_BACKENDS = ["dask_image", "rioxarray"]
@@ -43,19 +29,28 @@ def _get_channel_names(images_dir: Path) -> list[str]:
def merscope(
path: str | Path,
+ backend: str = "dask_image",
z_layers: int | list[int] | None = 3,
region_name: str | None = None,
slide_name: str | None = None,
- backend: str = "dask_image",
- imread_kwargs: Mapping[str, Any] = MappingProxyType({}),
- image_models_kwargs: Mapping[str, Any] = MappingProxyType({}),
+ image_models_kwargs: dict | None = None,
+ imread_kwargs: dict | None = None,
) -> SpatialData:
- """Read MERSCOPE data as a `SpatialData` object. For more information, refer to [spatialdata-io](https://spatialdata.scverse.org/projects/io/en/latest/generated/spatialdata_io.merscope.html).
+ """Read MERSCOPE data as a `SpatialData` object.
+
+ This function reads the following files:
+    - `detected_transcripts.csv`: transcript locations and names
+ - all the images under the `images` directory
+ - `images/micron_to_mosaic_pixel_transform.csv`: affine transformation
Args:
path: Path to the MERSCOPE directory containing all the experiment files
backend: Either `dask_image` or `rioxarray` (the latter should use less RAM, but it is still experimental)
- **kwargs: See link above.
+ z_layers: Indices of the z-layers to consider. Either one `int` index, or a list of `int` indices. If `None`, then no image is loaded. By default, only the middle layer is considered (that is, layer 3).
+ region_name: Name of the region of interest, e.g., `'region_0'`. If `None` then the name of the `path` directory is used.
+ slide_name: Name of the slide/run. If `None` then the name of the parent directory of `path` is used (whose name starts with a date).
+ image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
+ imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.
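+
+    Example:
+        A minimal sketch (the region path is hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.merscope("/path/to/region_0")  # default dask_image backend, middle z-layer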
Returns:
A `SpatialData` object representing the MERSCOPE experiment
@@ -64,21 +59,9 @@ def merscope(
backend in SUPPORTED_BACKENDS
), f"Backend '{backend} not supported. Should be one of: {', '.join(SUPPORTED_BACKENDS)}"
- if backend == "rioxarray":
- log.info("Using experimental rioxarray backend.")
-
- if "chunks" not in image_models_kwargs:
- if isinstance(image_models_kwargs, MappingProxyType):
- image_models_kwargs = {}
- assert isinstance(image_models_kwargs, dict)
- image_models_kwargs["chunks"] = (1, 1024, 1024)
- if "scale_factors" not in image_models_kwargs:
- if isinstance(image_models_kwargs, MappingProxyType):
- image_models_kwargs = {}
- assert isinstance(image_models_kwargs, dict)
- image_models_kwargs["scale_factors"] = [2, 2, 2, 2]
-
path = Path(path).absolute()
+ image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
+
images_dir = path / MerscopeKeys.IMAGES_DIR
microns_to_pixels = Affine(
@@ -124,6 +107,15 @@ def merscope(
return SpatialData(points=points, images=images)
+def _get_channel_names(images_dir: Path) -> list[str]:
+    exp = r"mosaic_(?P<stain>[\w|-]+[0-9]?)_z(?P<z>[0-9]+).tif"
+ matches = [re.search(exp, file.name) for file in images_dir.iterdir()]
+
+ stainings = {match.group("stain") for match in matches if match}
+
+ return list(stainings)
+
+
def _rioxarray_load_merscope(
images_dir: Path,
stainings: list[str],
@@ -132,6 +124,8 @@ def _rioxarray_load_merscope(
transformations: dict,
**kwargs,
):
+ log.info("Using experimental rioxarray backend.")
+
import rioxarray
from rasterio.errors import NotGeoreferencedWarning
@@ -186,103 +180,8 @@ def _get_points(transcript_path: Path):
transcripts = PointsModel.parse(
transcript_df,
coordinates={"x": MerscopeKeys.GLOBAL_X, "y": MerscopeKeys.GLOBAL_Y},
+ feature_key="gene",
transformations={"microns": Identity()},
)
transcripts["gene"] = transcripts["gene"].astype("category")
return transcripts
-
-
-def xenium(
- path: str | Path,
- imread_kwargs=MappingProxyType({}),
- image_models_kwargs=MappingProxyType({}),
-) -> SpatialData:
- """Read Xenium data as a `SpatialData` object. For more information, refer to [spatialdata-io](https://spatialdata.scverse.org/projects/io/en/latest/generated/spatialdata_io.xenium.html).
-
- Args:
- path: Path to the Xenium directory containing all the experiment files
- imread_kwargs: See link above.
- image_models_kwargs:See link above.
-
- Returns:
- A `SpatialData` object representing the Xenium experiment
- """
- if "chunks" not in image_models_kwargs:
- if isinstance(image_models_kwargs, MappingProxyType):
- image_models_kwargs = {}
- assert isinstance(image_models_kwargs, dict)
- image_models_kwargs["chunks"] = (1, 1024, 1024)
- if "scale_factors" not in image_models_kwargs:
- if isinstance(image_models_kwargs, MappingProxyType):
- image_models_kwargs = {}
- assert isinstance(image_models_kwargs, dict)
- image_models_kwargs["scale_factors"] = [2, 2, 2, 2]
-
- path = Path(path)
- with open(path / XeniumKeys.XENIUM_SPECS) as f:
- specs = json.load(f)
-
- points = {"transcripts": _get_points_xenium(path, specs)}
-
- images = {
- "morphology_mip": _get_images_xenium(
- path,
- XeniumKeys.MORPHOLOGY_MIP_FILE,
- imread_kwargs,
- image_models_kwargs,
- )
- }
-
- return SpatialData(images=images, points=points)
-
-
-def _get_points_xenium(path: Path, specs: dict[str, Any]):
- table = read_parquet(path / XeniumKeys.TRANSCRIPTS_FILE)
- table["feature_name"] = table["feature_name"].apply(
- lambda x: x.decode("utf-8") if isinstance(x, bytes) else str(x),
- meta=("feature_name", "object"),
- )
-
- transform = Scale([1.0 / specs["pixel_size"], 1.0 / specs["pixel_size"]], axes=("x", "y"))
- points = PointsModel.parse(
- table,
- coordinates={
- "x": XeniumKeys.TRANSCRIPTS_X,
- "y": XeniumKeys.TRANSCRIPTS_Y,
- "z": XeniumKeys.TRANSCRIPTS_Z,
- },
- feature_key=XeniumKeys.FEATURE_NAME,
- instance_key=XeniumKeys.CELL_ID,
- transformations={"global": transform},
- )
- return points
-
-
-def _get_images_xenium(
- path: Path,
- file: str,
- imread_kwargs: Mapping[str, Any] = MappingProxyType({}),
- image_models_kwargs: Mapping[str, Any] = MappingProxyType({}),
-):
- image = imread(path / file, **imread_kwargs)
- return Image2DModel.parse(
- image,
- transformations={"global": Identity()},
- dims=("c", "y", "x"),
- c_coords=list(map(str, range(len(image)))),
- **image_models_kwargs,
- )
-
-
-def cosmx(path: str, **kwargs: int) -> SpatialData:
- """Alias to the [spatialdata-io reader](https://spatialdata.scverse.org/projects/io/en/latest/generated/spatialdata_io.cosmx.html).
-
- Args:
- path: Path to the CosMX data directory
- **kwargs: See link above.
-
- Returns:
- A `SpatialData` object representing the CosMX experiment
- """
- # TODO: add stitching + set chunksize to 1024
- return spatialdata_io.cosmx(path, **kwargs)
diff --git a/sopa/io/reader/phenocycler.py b/sopa/io/reader/phenocycler.py
new file mode 100644
index 00000000..e414d681
--- /dev/null
+++ b/sopa/io/reader/phenocycler.py
@@ -0,0 +1,112 @@
+# Readers for multiplex-imaging technologies
+# In the future, we will completely rely on spatialdata-io (when all these functions exist)
+
+from __future__ import annotations
+
+import logging
+import re
+from pathlib import Path
+
+import dask.array as da
+import pandas as pd
+import tifffile as tf
+from dask.delayed import delayed
+from dask_image.imread import imread
+from spatialdata import SpatialData
+from spatialdata.models import Image2DModel
+from spatialdata.transformations import Identity
+
+from .utils import _deduplicate_names, _default_image_kwargs
+
+log = logging.getLogger(__name__)
+
+
+def phenocycler(
+ path: str | Path, channels_renaming: dict | None = None, image_models_kwargs: dict | None = None
+) -> SpatialData:
+ """Read Phenocycler data as a `SpatialData` object
+
+ Args:
+ path: Path to a `.qptiff` file, or a `.tif` file (if exported from QuPath)
+        channels_renaming: A dictionary whose keys correspond to channels and values to their corresponding new names. Not all channels need to be renamed.
+ image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
+
+ Returns:
+ A `SpatialData` object with a 2D-image of shape `(C, Y, X)`
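+
+    Example:
+        A minimal sketch (the path and channel names are hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.phenocycler("/path/to/scan.qptiff", channels_renaming={"DAPI-02": "DAPI"})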
+ """
+ image_models_kwargs, _ = _default_image_kwargs(image_models_kwargs)
+
+ path = Path(path)
+ image_name = path.absolute().stem
+
+ if path.suffix == ".qptiff":
+ with tf.TiffFile(path) as tif:
+ series = tif.series[0]
+ names = _get_channel_names_qptiff(series)
+
+ delayed_image = delayed(lambda series: series.asarray())(tif)
+ image = da.from_delayed(delayed_image, dtype=series.dtype, shape=series.shape)
+ elif path.suffix == ".tif":
+ image = imread(path)
+ names = _get_IJ_channel_names(path)
+ else:
+ raise ValueError(f"Unsupported file extension {path.suffix}. Must be '.qptiff' or '.tif'.")
+
+ names = _rename_channels(names, channels_renaming)
+ image = image.rechunk(chunks=image_models_kwargs["chunks"])
+
+ image = Image2DModel.parse(
+ image,
+ dims=("c", "y", "x"),
+ transformations={"pixels": Identity()},
+ c_coords=names,
+ **image_models_kwargs,
+ )
+
+ return SpatialData(images={image_name: image})
+
+
+def _get_channel_name_qptiff(description):
+ import xml.etree.ElementTree as ET
+
+ root = ET.fromstring(description)
+
+ for xml_path in [".//Biomarker", ".//ExcitationFilter//Bands//Name"]:
+ field = root.find(xml_path)
+ if field is not None:
+ return field.text
+
+    return re.search(r"<Name>(.*?)</Name>", description).group(1)
+
+
+def _get_channel_names_qptiff(page_series):
+ df_names = pd.DataFrame(
+ [[_get_channel_name_qptiff(page.description), str(i)] for i, page in enumerate(page_series)]
+ )
+ return _deduplicate_names(df_names)
+
+
+def _get_IJ_channel_names(path: str) -> list[str]:
+ with tf.TiffFile(path) as tif:
+ default_names = [str(i) for i in range(len(tif.pages))]
+
+ if len(tif.pages) > 1:
+ ij_metadata_tag = tif.pages[0].tags.get("IJMetadata", None)
+
+ if ij_metadata_tag and "Labels" in ij_metadata_tag.value:
+ return ij_metadata_tag.value["Labels"]
+
+            log.warning("Could not find channel names in IJMetadata.")
+ return default_names
+
+        log.warning("The TIF file does not have multiple channels.")
+ return default_names
+
+
+def _rename_channels(names: list[str], channels_renaming: dict | None = None):
+ log.info(f"Found channel names {names}")
+ if channels_renaming is not None and len(channels_renaming):
+        log.info(f"Channels will be renamed using the dictionary: {channels_renaming}")
+ names = [channels_renaming.get(name, name) for name in names]
+ log.info(f"New names are: {names}")
+ return names
diff --git a/sopa/io/reader/utils.py b/sopa/io/reader/utils.py
new file mode 100644
index 00000000..82f3342a
--- /dev/null
+++ b/sopa/io/reader/utils.py
@@ -0,0 +1,129 @@
+from __future__ import annotations
+
+import logging
+from pathlib import Path
+from typing import Callable
+
+import dask.array as da
+import tifffile as tf
+from dask_image.imread import imread
+from spatial_image import SpatialImage
+from spatialdata import SpatialData
+from spatialdata.models import Image2DModel
+from spatialdata.transformations import Identity
+
+log = logging.getLogger(__name__)
+
+
+def _default_image_kwargs(
+ image_models_kwargs: dict | None = None, imread_kwargs: dict | None = None
+) -> tuple[dict, dict]:
+ image_models_kwargs = {} if image_models_kwargs is None else image_models_kwargs
+ imread_kwargs = {} if imread_kwargs is None else imread_kwargs
+
+ if "chunks" not in image_models_kwargs:
+ image_models_kwargs["chunks"] = (1, 1024, 1024)
+
+ if "scale_factors" not in image_models_kwargs:
+ image_models_kwargs["scale_factors"] = [2, 2, 2, 2]
+
+ return image_models_kwargs, imread_kwargs
+
+
+def _deduplicate_names(df):
+ is_duplicated = df[0].duplicated(keep=False)
+ df.loc[is_duplicated, 0] += " (" + df.loc[is_duplicated, 1] + ")"
+ return df[0].values
+
+
+def _get_files_stem(files: list[Path]):
+ return [file.stem for file in files]
+
+
+def _general_tif_directory_reader(
+ path: str,
+ files_to_channels: Callable = _get_files_stem,
+ suffix: str = ".tif",
+ image_models_kwargs: dict | None = None,
+ imread_kwargs: dict | None = None,
+):
+ image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
+
+ files = [file for file in Path(path).iterdir() if file.suffix == suffix]
+
+ names = files_to_channels(files)
+ image = da.concatenate(
+ [imread(file, **imread_kwargs) for file in files],
+ axis=0,
+ )
+ image = image.rechunk(chunks=image_models_kwargs["chunks"])
+
+ log.info(f"Found channel names {names}")
+
+ image_name = Path(path).absolute().stem
+ image = Image2DModel.parse(
+ image,
+ dims=("c", "y", "x"),
+ transformations={"pixels": Identity()},
+ c_coords=names,
+ **image_models_kwargs,
+ )
+
+ return SpatialData(images={image_name: image})
+
+
+def _ome_channels_names(path: str):
+ import xml.etree.ElementTree as ET
+
+ tiff = tf.TiffFile(path)
+ omexml_string = tiff.pages[0].description
+
+ root = ET.fromstring(omexml_string)
+ namespaces = {"ome": "http://www.openmicroscopy.org/Schemas/OME/2016-06"}
+ channels = root.findall("ome:Image[1]/ome:Pixels/ome:Channel", namespaces)
+ return [c.attrib["Name"] if "Name" in c.attrib else c.attrib["ID"] for c in channels]
+
+
+def ome_tif(path: Path, as_image: bool = False) -> SpatialImage | SpatialData:
+ """Read an `.ome.tif` image. This image should be a 2D image (with possibly multiple channels).
+ Typically, this function can be used to open Xenium IF images.
+
+ Args:
+ path: Path to the `.ome.tif` image
+ as_image: If `True`, will return a `SpatialImage` object
+
+ Returns:
+ A `SpatialImage` or a `SpatialData` object
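+
+    Example:
+        A minimal sketch (the image path is hypothetical):
+
+        >>> from sopa import io
+        >>> image = io.ome_tif("/path/to/image.ome.tif", as_image=True)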
+ """
+ image_models_kwargs, _ = _default_image_kwargs()
+ image_name = Path(path).absolute().name.split(".")[0]
+ image: da.Array = imread(path)
+
+ if image.ndim == 4:
+        assert image.shape[0] == 1, "4D images are only supported when the first dimension has size 1"
+ image = da.moveaxis(image[0], 2, 0)
+ log.info(f"Transformed 4D image into a 3D image of shape (c, y, x) = {image.shape}")
+ elif image.ndim != 3:
+ raise ValueError(f"Number of dimensions not supported: {image.ndim}")
+
+ image = image.rechunk(chunks=image_models_kwargs["chunks"])
+
+ channel_names = _ome_channels_names(path)
+ if len(channel_names) != len(image):
+ channel_names = [str(i) for i in range(len(image))]
+        log.warning(f"Channel names couldn't be read. Using {channel_names} instead.")
+
+ image = SpatialImage(image, dims=["c", "y", "x"], name=image_name, coords={"c": channel_names})
+
+ if as_image:
+ return image
+
+ image = Image2DModel.parse(
+ image,
+ dims=("c", "y", "x"),
+ c_coords=channel_names,
+ transformations={"pixels": Identity()},
+ **image_models_kwargs,
+ )
+
+ return SpatialData(images={image_name: image})
diff --git a/sopa/io/histopathology.py b/sopa/io/reader/wsi.py
similarity index 100%
rename from sopa/io/histopathology.py
rename to sopa/io/reader/wsi.py
diff --git a/sopa/io/reader/xenium.py b/sopa/io/reader/xenium.py
new file mode 100644
index 00000000..0e5951d7
--- /dev/null
+++ b/sopa/io/reader/xenium.py
@@ -0,0 +1,97 @@
+# Updated from spatialdata-io: https://spatialdata.scverse.org/projects/io/en/latest/
+# In the future, we will completely rely on spatialdata-io (when stable enough)
+
+from __future__ import annotations
+
+import json
+import logging
+from pathlib import Path
+from typing import Any
+
+from dask.dataframe import read_parquet
+from dask_image.imread import imread
+from spatialdata import SpatialData
+from spatialdata.models import Image2DModel, PointsModel
+from spatialdata.transformations import Identity, Scale
+from spatialdata_io._constants._constants import XeniumKeys
+
+from .utils import _default_image_kwargs
+
+log = logging.getLogger(__name__)
+
+
+def xenium(
+ path: str | Path,
+ image_models_kwargs: dict | None = None,
+ imread_kwargs: dict | None = None,
+) -> SpatialData:
+ """Read Xenium data as a `SpatialData` object. For more information, refer to [spatialdata-io](https://spatialdata.scverse.org/projects/io/en/latest/generated/spatialdata_io.xenium.html).
+
+ This function reads the following files:
+    - `transcripts.parquet`: transcript locations and names
+ - `morphology_mip.ome.tif`: morphology image
+
+ Args:
+ path: Path to the Xenium directory containing all the experiment files
+ image_models_kwargs: Keyword arguments passed to `spatialdata.models.Image2DModel`.
+ imread_kwargs: Keyword arguments passed to `dask_image.imread.imread`.
+
+ Returns:
+ A `SpatialData` object representing the Xenium experiment
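+
+    Example:
+        A minimal sketch (the output path is hypothetical):
+
+        >>> from sopa import io
+        >>> sdata = io.xenium("/path/to/xenium_output")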
+ """
+ path = Path(path)
+ image_models_kwargs, imread_kwargs = _default_image_kwargs(image_models_kwargs, imread_kwargs)
+
+ with open(path / XeniumKeys.XENIUM_SPECS) as f:
+ specs = json.load(f)
+
+ points = {"transcripts": _get_points_xenium(path, specs)}
+
+ images = {
+ "morphology_mip": _get_images_xenium(
+ path,
+ XeniumKeys.MORPHOLOGY_MIP_FILE,
+ imread_kwargs,
+ image_models_kwargs,
+ )
+ }
+
+ return SpatialData(images=images, points=points)
+
+
+def _get_points_xenium(path: Path, specs: dict[str, Any]):
+ table = read_parquet(path / XeniumKeys.TRANSCRIPTS_FILE)
+ table["feature_name"] = table["feature_name"].apply(
+ lambda x: x.decode("utf-8") if isinstance(x, bytes) else str(x),
+ meta=("feature_name", "object"),
+ )
+
+ transform = Scale([1.0 / specs["pixel_size"], 1.0 / specs["pixel_size"]], axes=("x", "y"))
+ points = PointsModel.parse(
+ table,
+ coordinates={
+ "x": XeniumKeys.TRANSCRIPTS_X,
+ "y": XeniumKeys.TRANSCRIPTS_Y,
+ "z": XeniumKeys.TRANSCRIPTS_Z,
+ },
+ feature_key=XeniumKeys.FEATURE_NAME,
+ instance_key=XeniumKeys.CELL_ID,
+ transformations={"global": transform},
+ )
+ return points
+
+
+def _get_images_xenium(
+ path: Path,
+ file: str,
+ imread_kwargs: dict,
+ image_models_kwargs: dict,
+):
+ image = imread(path / file, **imread_kwargs)
+ return Image2DModel.parse(
+ image,
+ transformations={"global": Identity()},
+ dims=("c", "y", "x"),
+ c_coords=list(map(str, range(len(image)))),
+ **image_models_kwargs,
+ )
diff --git a/sopa/io/report/generate.py b/sopa/io/report/generate.py
index 71f7bc86..5d78c431 100644
--- a/sopa/io/report/generate.py
+++ b/sopa/io/report/generate.py
@@ -54,6 +54,14 @@ def _kdeplot_vmax_quantile(values: np.ndarray, quantile: float = 0.95):
class SectionBuilder:
+ SECTION_NAMES = [
+ "general_section",
+ "cell_section",
+ "channel_section",
+ "transcripts_section",
+ "representation_section",
+ ]
+
def __init__(self, sdata: SpatialData):
self.sdata = sdata
self.adata = self.sdata.tables.get(SopaKeys.TABLE)
@@ -64,8 +72,6 @@ def _table_has(self, key, default=False):
return self.adata.uns[SopaKeys.UNS_KEY].get(key, default)
def general_section(self):
- log.info("Writing general section")
-
return Section(
"General",
[
@@ -82,8 +88,6 @@ def general_section(self):
)
def cell_section(self):
- log.info("Writing cell section")
-
shapes_key, _ = get_boundaries(self.sdata, return_key=True)
coord_system = get_intrinsic_cs(self.sdata, shapes_key)
@@ -111,8 +115,6 @@ def cell_section(self):
)
def channel_section(self):
- log.info("Writing channel section")
-
image = get_spatial_image(self.sdata)
subsections = [
@@ -154,8 +156,6 @@ def transcripts_section(self):
if not self._table_has(SopaKeys.UNS_HAS_TRANSCRIPTS):
return None
- log.info("Writing transcript section")
-
mean_transcript_count = self.adata.X.mean(0).A1
low_average = mean_transcript_count < LOW_AVERAGE_COUNT
@@ -183,8 +183,6 @@ def transcripts_section(self):
return Section("Transcripts", [SubSection("Quality controls", QC_subsubsections)])
def representation_section(self, max_obs: int = 400_000):
- log.info("Writing representation section")
-
if self._table_has(SopaKeys.UNS_HAS_TRANSCRIPTS):
sc.pp.normalize_total(self.adata)
sc.pp.log1p(self.adata)
@@ -209,11 +207,14 @@ def representation_section(self, max_obs: int = 400_000):
)
def compute_sections(self) -> list[Section]:
- sections = [
- self.general_section(),
- self.cell_section(),
- self.channel_section(),
- self.transcripts_section(),
- self.representation_section(),
- ]
+ sections = []
+
+ for name in self.SECTION_NAMES:
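+            # a section that raises an error is skipped (and logged) instead of aborting the whole report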
+ try:
+ log.info(f"Writing {name}")
+ section = getattr(self, name)()
+ sections.append(section)
+ except Exception as e:
+            log.warning(f"Section {name} failed with error: {e}")
+
return [section for section in sections if section is not None]
diff --git a/sopa/segmentation/aggregate.py b/sopa/segmentation/aggregate.py
index de4141b0..4f0d5412 100644
--- a/sopa/segmentation/aggregate.py
+++ b/sopa/segmentation/aggregate.py
@@ -109,6 +109,8 @@ def filter_cells(self, where_filter: np.ndarray):
self.geo_df = self.geo_df[~where_filter]
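+        # keep the shapes stored in the SpatialData object in sync with the filtered cells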
+ self.sdata.shapes[self.shapes_key] = self.geo_df
+
if self.table is not None:
self.table = self.table[~where_filter]
diff --git a/sopa/segmentation/patching.py b/sopa/segmentation/patching.py
index 0beff162..193f02ca 100644
--- a/sopa/segmentation/patching.py
+++ b/sopa/segmentation/patching.py
@@ -245,6 +245,8 @@ def patchify_transcripts(
class BaysorPatches:
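+    # patches with fewer transcripts than this are skipped (Baysor is not run on them)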
+ MIN_TRANSCRIPTS_PER_PATCH = 4000
+
def __init__(self, patches_2d: Patches2D, df: dd.DataFrame):
self.patches_2d = patches_2d
self.df = df
@@ -302,10 +304,12 @@ def _clean_directory(self):
def valid_indices(self):
for index in range(len(self.patches_2d)):
patch_path = self._patch_path(index)
- if self._check_min_lines(patch_path, 1000):
+ if self._check_min_lines(patch_path, self.MIN_TRANSCRIPTS_PER_PATCH):
yield index
else:
- log.info(f"Patch {index} has < 1000 transcripts. Baysor will not be run on it.")
+ log.info(
+ f"Patch {index} has < {self.MIN_TRANSCRIPTS_PER_PATCH} transcripts. Baysor will not be run on it."
+ )
def _query_points_partition(self, gdf: gpd.GeoDataFrame, df: pd.DataFrame) -> pd.DataFrame:
points_gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df["x"], df["y"]))
diff --git a/sopa/utils/data.py b/sopa/utils/data.py
index 13435950..efbdf467 100644
--- a/sopa/utils/data.py
+++ b/sopa/utils/data.py
@@ -21,7 +21,7 @@ def uniform(
*_,
length: int = 2_048,
cell_density: float = 1e-4,
- n_points_per_cell: int = 50,
+ n_points_per_cell: int = 100,
c_coords: list[str] = ["DAPI", "CK", "CD3", "CD20"],
genes: int | list[str] = ["EPCAM", "CD3E", "CD20", "CXCL4", "CXCL10"],
sigma_factor: float = 0.05,
diff --git a/workflow/Snakefile b/workflow/Snakefile
index 9e95997c..e82d9616 100644
--- a/workflow/Snakefile
+++ b/workflow/Snakefile
@@ -93,7 +93,7 @@ rule patch_segmentation_baysor:
params:
args_baysor_prior_seg = args.baysor_prior_seg,
resources:
- mem_mb=64_000,
+ mem_mb=128_000,
shell:
"""
if command -v module &> /dev/null; then
diff --git a/workflow/config/cosmx/baysor.yaml b/workflow/config/cosmx/baysor.yaml
new file mode 100644
index 00000000..c8447404
--- /dev/null
+++ b/workflow/config/cosmx/baysor.yaml
@@ -0,0 +1,57 @@
+# For parameters details, see this commented example: https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml
+read:
+ technology: cosmx
+
+patchify:
+ patch_width_pixel: 6000
+ patch_overlap_pixel: 150
+ patch_width_microns: 8000
+ patch_overlap_microns: 150
+
+segmentation:
+ baysor:
+ min_area: 2000
+
+ config:
+ data:
+ force_2d: true # if false, uses 3D mode
+ min_molecules_per_cell: 10
+ x: "x"
+ y: "y"
+ z: "z"
+ gene: "target"
+ min_molecules_per_gene: 0
+ min_molecules_per_segment: 3
+ confidence_nn_id: 6
+
+ segmentation:
+ scale: 60 # typical cell radius
+ scale_std: "25%" # cell radius standard deviation
+ prior_segmentation_confidence: 0
+ estimate_scale_from_centers: false
+ n_clusters: 4
+ iters: 500
+ n_cells_init: 0
+ nuclei_genes: ""
+ cyto_genes: ""
+ new_component_weight: 0.2
+ new_component_fraction: 0.3
+
+aggregate:
+ average_intensities: true
+  min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered
+
+# Uncomment this if you want to use tangram
+
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
+
+explorer:
+ gene_column: "target"
+ ram_threshold_gb: 4
+
+executables:
+  baysor: ~/.julia/bin/baysor # if you run baysor, put the path to the baysor executable here
diff --git a/workflow/config/cosmx/cellpose.yaml b/workflow/config/cosmx/cellpose.yaml
new file mode 100644
index 00000000..c5dfe4ca
--- /dev/null
+++ b/workflow/config/cosmx/cellpose.yaml
@@ -0,0 +1,37 @@
+# For parameters details, see this commented example: https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml
+read:
+ technology: cosmx
+
+patchify:
+ patch_width_pixel: 6000
+ patch_overlap_pixel: 150
+ patch_width_microns: 8000
+ patch_overlap_microns: 150
+
+segmentation:
+ cellpose:
+ diameter: 60
+ channels: ["DNA"]
+ flow_threshold: 2
+ cellprob_threshold: -6
+ min_area: 2000
+
+aggregate:
+ average_intensities: true
+ gene_column: "target"
+  min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered
+
+# Uncomment this if you want to use tangram
+
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
+
+explorer:
+ gene_column: "target"
+ ram_threshold_gb: 4
+
+executables:
+  baysor: ~/.julia/bin/baysor # if you run baysor, put the path to the baysor executable here
diff --git a/workflow/config/xenium/repro_pancreas.yaml b/workflow/config/cosmx/cellpose_baysor.yaml
similarity index 55%
rename from workflow/config/xenium/repro_pancreas.yaml
rename to workflow/config/cosmx/cellpose_baysor.yaml
index 260a0e92..dcbf9567 100644
--- a/workflow/config/xenium/repro_pancreas.yaml
+++ b/workflow/config/cosmx/cellpose_baysor.yaml
@@ -1,37 +1,40 @@
# For parameters details, see this commented example: https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml
read:
- technology: xenium
+ technology: cosmx
patchify:
patch_width_pixel: 6000
patch_overlap_pixel: 150
- patch_width_microns: 1000
- patch_overlap_microns: 20
+ patch_width_microns: 8000
+ patch_overlap_microns: 150
segmentation:
cellpose:
- diameter: 30
- channels: [0]
+ diameter: 60
+ channels: ["DNA"]
flow_threshold: 2
cellprob_threshold: -6
+ min_area: 2000
baysor:
+ min_area: 2000
+
config:
data:
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
- gene: "feature_name"
+ gene: "target"
min_molecules_per_gene: 0
min_molecules_per_segment: 3
confidence_nn_id: 6
segmentation:
- scale: 6.25 # typical cell radius
+ scale: 60 # typical cell radius
scale_std: "25%" # cell radius standard deviation
- prior_segmentation_confidence: 0.75 # confidence of the cellpose confidence (float in [0, 1])
+ prior_segmentation_confidence: 0
estimate_scale_from_centers: false
n_clusters: 4
iters: 500
@@ -41,18 +44,21 @@ segmentation:
new_component_weight: 0.2
new_component_fraction: 0.3
-annotation:
- method: tangram
- args:
- sc_reference_path: /mnt/beegfs/merfish/data/reference/2023_Reference_disco_pancreas_healthy.h5ad
- cell_type_key: ct
-
aggregate:
+ average_intensities: true
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
+
+# Uncomment this if you want to use tangram
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
explorer:
- gene_column: "feature_name"
+ gene_column: "target"
ram_threshold_gb: 4
executables:
- baysor: /mnt/beegfs/merfish/bin/baysor/bin/baysor
+ baysor: ~/.julia/bin/baysor # if you run Baysor, set this to the path of the baysor executable
diff --git a/workflow/config/example_commented.yaml b/workflow/config/example_commented.yaml
index 716fa062..1de783e1 100644
--- a/workflow/config/example_commented.yaml
+++ b/workflow/config/example_commented.yaml
@@ -42,7 +42,7 @@ segmentation:
data:
exclude_genes: "Blank*" # genes excluded from the Baysor segmentation
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 30 # min number of transcripts per cell
+ min_molecules_per_cell: 10
gene: "gene" # name of the column of the transcript dataframe indicating the genes names
min_molecules_per_gene: 0
min_molecules_per_segment: 8
diff --git a/workflow/config/merscope/README.md b/workflow/config/merscope/README.md
new file mode 100644
index 00000000..91976e06
--- /dev/null
+++ b/workflow/config/merscope/README.md
@@ -0,0 +1,9 @@
+# Notes
+- The `baysor_vizgen.yaml` config runs Baysor on top of the prior Vizgen segmentation (which is also Cellpose-based). ***Recommended*** (see the example launch below).
+- The `baysor_cellpose.yaml` config is similar, but runs Cellpose within Sopa and provides the result as a prior to Baysor.
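+
+For example, the pipeline can be launched from the `workflow` directory as follows (the `data_path` value is illustrative):
+
+```sh
+snakemake --config data_path=/path/to/merscope_region --configfile=config/merscope/baysor_vizgen.yaml --cores 1
+```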
diff --git a/workflow/config/merscope/repro_liver.yaml b/workflow/config/merscope/baysor_cellpose.yaml
similarity index 74%
rename from workflow/config/merscope/repro_liver.yaml
rename to workflow/config/merscope/baysor_cellpose.yaml
index cc25ec46..aea7c542 100644
--- a/workflow/config/merscope/repro_liver.yaml
+++ b/workflow/config/merscope/baysor_cellpose.yaml
@@ -14,14 +14,16 @@ segmentation:
channels: ["DAPI"]
flow_threshold: 2
cellprob_threshold: -6
+ min_area: 2000
baysor:
- cell_key: cell_id
+ min_area: 20
+
config:
data:
exclude_genes: "Blank*" # genes excluded from the Baysor segmentation
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -44,12 +46,16 @@ segmentation:
new_component_fraction: 0.3
aggregate:
+ average_intensities: true
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
+
+# Uncomment this if you want to use tangram
-annotation:
- method: tangram
- args:
- sc_reference_path: /mnt/beegfs/merfish/data/reference/2023_Reference_disco_liver.h5ad
- cell_type_key: ct
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
explorer:
gene_column: "gene"
@@ -57,4 +63,4 @@ explorer:
pixel_size: 0.108
executables:
- baysor: /mnt/beegfs/merfish/bin/baysor/bin/baysor
+ baysor: ~/.julia/bin/baysor # if you run Baysor, set this to the path of the baysor executable
diff --git a/workflow/config/merscope/base.yaml b/workflow/config/merscope/baysor_vizgen.yaml
similarity index 91%
rename from workflow/config/merscope/base.yaml
rename to workflow/config/merscope/baysor_vizgen.yaml
index 55fa84d4..a5ecccbd 100644
--- a/workflow/config/merscope/base.yaml
+++ b/workflow/config/merscope/baysor_vizgen.yaml
@@ -18,7 +18,7 @@ segmentation:
data:
exclude_genes: "Blank*" # genes excluded from the Baysor segmentation
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -42,6 +42,7 @@ segmentation:
aggregate:
average_intensities: true
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
# Comment this out if you want to use tangram -->
diff --git a/workflow/config/merscope/cellpose.yaml b/workflow/config/merscope/cellpose.yaml
new file mode 100644
index 00000000..81c130a8
--- /dev/null
+++ b/workflow/config/merscope/cellpose.yaml
@@ -0,0 +1,38 @@
+# For parameter details, see this commented example: https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml
+read:
+ technology: merscope
+
+patchify:
+ patch_width_pixel: 6000
+ patch_overlap_pixel: 150
+ patch_width_microns: 1000
+ patch_overlap_microns: 20
+
+segmentation:
+ cellpose:
+ diameter: 60
+ channels: ["DAPI"]
+ flow_threshold: 2
+ cellprob_threshold: -6
+ min_area: 2000
+
+aggregate:
+ average_intensities: true
+ gene_column: "gene"
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
+
+# Uncomment this if you want to use tangram
+
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
+
+explorer:
+ gene_column: "gene"
+ ram_threshold_gb: 4
+ pixel_size: 0.108
+
+executables:
+ baysor: ~/.julia/bin/baysor # if you run Baysor, set this to the path of the baysor executable
diff --git a/workflow/config/phenocycler/README.md b/workflow/config/phenocycler/README.md
index 829e43ff..f6eadf78 100644
--- a/workflow/config/phenocycler/README.md
+++ b/workflow/config/phenocycler/README.md
@@ -1,3 +1,5 @@
+# Notes
+
For PhenoCycler data, there are multiple config files based on the resolution setting of the PhenoCycler.
Choose the right one according to your settings:
- 10X (1 micron is 1 pixel)
diff --git a/workflow/config/toy/uniform_baysor.yaml b/workflow/config/toy/uniform_baysor.yaml
index 021b0167..b628470d 100644
--- a/workflow/config/toy/uniform_baysor.yaml
+++ b/workflow/config/toy/uniform_baysor.yaml
@@ -15,7 +15,7 @@ segmentation:
config:
data:
force_2d: true
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -39,6 +39,7 @@ segmentation:
aggregate:
average_intensities: true
+ min_transcripts: 5 # [optional] cells whose transcript count is below this threshold are filtered out
annotation:
method: fluorescence
diff --git a/workflow/config/toy/uniform_baysor_overlaps.yaml b/workflow/config/toy/uniform_baysor_overlaps.yaml
index 38f0e8a3..3b08951f 100644
--- a/workflow/config/toy/uniform_baysor_overlaps.yaml
+++ b/workflow/config/toy/uniform_baysor_overlaps.yaml
@@ -15,7 +15,7 @@ segmentation:
config:
data:
force_2d: true
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -39,6 +39,7 @@ segmentation:
aggregate:
average_intensities: true
+ min_transcripts: 5 # [optional] cells whose transcript count is below this threshold are filtered out
annotation:
method: fluorescence
diff --git a/workflow/config/xenium/base.yaml b/workflow/config/xenium/baysor.yaml
similarity index 90%
rename from workflow/config/xenium/base.yaml
rename to workflow/config/xenium/baysor.yaml
index 3bdbf5b0..21ed8a9d 100644
--- a/workflow/config/xenium/base.yaml
+++ b/workflow/config/xenium/baysor.yaml
@@ -13,7 +13,7 @@ segmentation:
config:
data:
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -37,6 +37,7 @@ segmentation:
aggregate:
average_intensities: true
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
# Comment this out if you want to use tangram -->
diff --git a/workflow/config/xenium/cellpose.yaml b/workflow/config/xenium/cellpose.yaml
new file mode 100644
index 00000000..8e3de7f7
--- /dev/null
+++ b/workflow/config/xenium/cellpose.yaml
@@ -0,0 +1,37 @@
+# For parameter details, see this commented example: https://github.com/gustaveroussy/sopa/blob/master/workflow/config/example_commented.yaml
+read:
+ technology: xenium
+
+patchify:
+ patch_width_pixel: 6000
+ patch_overlap_pixel: 150
+ patch_width_microns: 1000
+ patch_overlap_microns: 20
+
+segmentation:
+ cellpose:
+ diameter: 30
+ channels: [0]
+ flow_threshold: 2
+ cellprob_threshold: -6
+ min_area: 400
+
+aggregate:
+ average_intensities: true
+ gene_column: "feature_name"
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
+
+# Uncomment this if you want to use tangram
+
+# annotation:
+# method: tangram
+# args:
+# sc_reference_path: "..."
+# cell_type_key: ct
+
+explorer:
+ gene_column: "feature_name"
+ ram_threshold_gb: 4
+
+executables:
+ baysor: ~/.julia/bin/baysor # if you run Baysor, set this to the path of the baysor executable
diff --git a/workflow/config/xenium/cellpose_baysor.yaml b/workflow/config/xenium/cellpose_baysor.yaml
index d54f38b4..614f3fcf 100644
--- a/workflow/config/xenium/cellpose_baysor.yaml
+++ b/workflow/config/xenium/cellpose_baysor.yaml
@@ -22,7 +22,7 @@ segmentation:
config:
data:
force_2d: true # if false, uses 3D mode
- min_molecules_per_cell: 10 # min number of transcripts per cell
+ min_molecules_per_cell: 10
x: "x"
y: "y"
z: "z"
@@ -46,6 +46,7 @@ segmentation:
aggregate:
average_intensities: true
+ min_transcripts: 10 # [optional] cells whose transcript count is below this threshold are filtered out
# Comment this out if you want to use tangram -->