From 8dabdbcf20de1e9c09bd6ada7f89be5b0939d7eb Mon Sep 17 00:00:00 2001
From: CBroz1
Date: Sat, 18 Nov 2023 10:33:55 -0600
Subject: [PATCH] add mdformat-mkdocs

---
 .pre-commit-config.yaml              |   1 +
 CHANGELOG.md                         |  38 ++++----
 docs/README.md                       |   2 +-
 docs/src/contribute.md               | 125 ++++++++++++++-------
 docs/src/misc/database_management.md |   4 +-
 docs/src/misc/insert_data.md         |  80 ++++++++---------
 docs/src/misc/merge_tables.md        |  10 +--
 notebooks/README.md                  |   6 +-
 8 files changed, 134 insertions(+), 132 deletions(-)

diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 3b2e82f53..40e72f2f9 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -13,6 +13,7 @@ repos:
       types: [markdown]
       args: [--wrap, "80", --number]
       additional_dependencies:
+        - mdformat-mkdocs
         - mdformat-toc
         - mdformat-beautysh
         - mdformat-config
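The new plugin drives most of the reformatting below: `mdformat-mkdocs` aligns
nested list content to the 4-space indentation that mkdocs requires and
normalizes ordered-list numbering. A rough sketch of what the hook runs on each
markdown file, via mdformat's Python API — assuming `mdformat` and
`mdformat-mkdocs` are installed, and assuming `"mkdocs"` is the extension name
the plugin registers:

```python
# Sketch only: approximately what the pre-commit hook above does per file.
# Assumes mdformat and mdformat-mkdocs are installed; "mkdocs" is assumed
# to be the extension name the plugin registers with mdformat.
import mdformat

sample = """- Common:
  - Added support for multiple cameras per epoch. #557
"""

formatted = mdformat.text(
    sample,
    options={"wrap": 80, "number": True},  # mirrors --wrap 80 --number
    extensions={"mkdocs"},
)
print(formatted)  # the nested bullet comes back indented by 4 spaces
```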
diff --git a/CHANGELOG.md b/CHANGELOG.md
index dba384f10..5073aa58c 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -26,17 +26,17 @@
 ### Pipelines

 - Common:
-  - Added support multiple cameras per epoch. #557
-  - Removed `common_backup` schema. #631
-  - Added support for multiple position objects per NWB in `common_behav` via
-    PositionSource.SpatialSeries and RawPosition.PosObject #628, #616. _Note:_
-    Existing functions have been made compatible, but column labels for
-    `RawPosition.fetch1_dataframe` may change.
+    - Added support for multiple cameras per epoch. #557
+    - Removed `common_backup` schema. #631
+    - Added support for multiple position objects per NWB in `common_behav` via
+      PositionSource.SpatialSeries and RawPosition.PosObject #628, #616. _Note:_
+      Existing functions have been made compatible, but column labels for
+      `RawPosition.fetch1_dataframe` may change.
 - Spike sorting:
-  - Added pipeline populator. #637, #646, #647
-  - Fixed curation functionality for `nn_isolation`. #597, #598
+    - Added pipeline populator. #637, #646, #647
+    - Fixed curation functionality for `nn_isolation`. #597, #598
 - Position: Added position interval/epoch mapping via PositionIntervalMap. #620,
-  #621, #627
+    #621, #627
 - LFP: Refactored pipeline. #594, #588, #605, #606, #607, #608, #615, #629

 ## [0.4.1] (June 30, 2023)

 ## [0.4.0] (May 22, 2023)

 - Updated call to `spikeinterface.preprocessing.whiten` to use dtype np.float16.
-  #446,
+    #446,
 - Updated default spike sorting metric parameters. #447
 - Updated whitening to be compatible with recent changes in spikeinterface when
-  using mountainsort. #449
+    using mountainsort. #449
 - Moved LFP pipeline to `src/spyglass/lfp/v1` and addressed related usability
-  issues. #468, #478, #482, #484, #504
+    issues. #468, #478, #482, #484, #504
 - Removed whiten parameter for clusterless thresholder. #454
 - Added plot to plot all DIO events in a session. #457
 - Added file sharing functionality through kachery_cloud. #458, #460
 - Added scripts to add guests and collaborators as users. #463
 - Cleaned up installation instructions in repo README. #467
 - Added checks in decoding visualization to ensure time dimensions are the
-  correct length.
+    correct length.
 - Fixed artifact removed valid times. #472
 - Added codespell workflow for spell checking and fixed typos. #471
 - Updated LFP code to save LFP as `pynwb.ecephys.LFP` type. #475
 - Added artifact detection to LFP pipeline. #473
 - Replaced calls to `spikeinterface.sorters.get_default_params` with
-  `spikeinterface.sorters.get_default_sorter_params`. #486
+    `spikeinterface.sorters.get_default_sorter_params`. #486
 - Updated position pipeline and added functionality to handle pose estimation
-  through DeepLabCut. #367, #505
+    through DeepLabCut. #367, #505
 - Updated `environment_position.yml`. #502
 - Renamed `FirFilter` class to `FirFilterParameters`. #512

 ## [0.3.4] (March 30, 2023)

 - Fixed error in spike sorting pipeline referencing the "probe_type" column
-  which is no longer accessible from the `Electrode` table. #437
+    which is no longer accessible from the `Electrode` table. #437
 - Fixed error when inserting an NWB file that does not have a probe
-  manufacturer. #433, #436
+    manufacturer. #433, #436
 - Fixed error when adding a new `DataAcquisitionDevice` and a new `ProbeType`.
-  #436
+    #436
 - Fixed inconsistency between capitalized/uncapitalized versions of "Intan" for
-  DataAcquisitionAmplifier and DataAcquisitionDevice.adc_circuit. #430, #438
+    DataAcquisitionAmplifier and DataAcquisitionDevice.adc_circuit. #430, #438

 ## [0.3.3] (March 29, 2023)

diff --git a/docs/README.md b/docs/README.md
index 80510daed..d2541eede 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -6,7 +6,7 @@ section of this file. New pages should be either:

 1. A markdown file in the `docs/` directory.
-2. A Jupyter notebook in the `notebooks/` directory.
+1. A Jupyter notebook in the `notebooks/` directory.

 The remainder of `mkdocs.yml` specifies the site's
 [configuration](https://www.mkdocs.org/user-guide/configuration/)

diff --git a/docs/src/contribute.md b/docs/src/contribute.md
index 2d018464b..4fbb4b0d4 100644
--- a/docs/src/contribute.md
+++ b/docs/src/contribute.md
@@ -17,20 +17,20 @@ for features that will involve multiple contributors.

 - Tables are grouped into schemas by topic (e.g., `common_metrics`)
 - Schemas
-  - Are defined in a `py` pile.
-  - Correspond to MySQL 'databases'.
-  - Are organized into modules (e.g., `common`) by folders.
+    - Are defined in a `.py` file.
+    - Correspond to MySQL 'databases'.
+    - Are organized into modules (e.g., `common`) by folders.
 - The _common_ module
-  - In principle, contains schema that are shared across all projects.
-  - In practice, contains shared tables (e.g., Session) and the first draft of
-    schemas that have since been split into their own modality-specific\
-    modules
-    (e.g., `lfp`)
-  - Should not be added to without discussion.
+    - In principle, contains schema that are shared across all projects.
+    - In practice, contains shared tables (e.g., Session) and the first draft
+      of schemas that have since been split into their own modality-specific
+      modules (e.g., `lfp`)
+    - Should not be added to without discussion.
 - A pipeline
-  - Refers to a set of tables used for processing data of a particular modality
-    (e.g., LFP, spike sorting, position tracking).
-  - May span multiple schema.
+    - Refers to a set of tables used for processing data of a particular
+      modality (e.g., LFP, spike sorting, position tracking).
+    - May span multiple schema.
 - For analysis that will be only useful to you, create your own schema.
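For orientation, the schema/module layout described above boils down to a
declaration like the following — a minimal sketch with hypothetical names, not
spyglass source:

```python
# Sketch: one .py file declares one schema, which DataJoint maps to a MySQL
# database; files are grouped into module folders such as `common`.
# All names below are hypothetical.
import datajoint as dj

schema = dj.schema("common_metrics")


@schema
class ExampleTable(dj.Manual):
    definition = """
    example_id: int           # primary key
    ---
    description: varchar(255)
    """
```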
 ## Types of tables

 Tables shared across multiple pipelines for shared data types.

 - Naming convention: None
 - Data tier: `dj.Manual`
 - Examples: `IntervalList` (time interval for any analysis), `AnalysisNwbfile`
-  (analysis NWB files)
+    (analysis NWB files)

 _Note_: Because these are stand-alone tables not part of the dependency
 structure, developers should include enough information to link entries back to

 should be included in the `make` method of `Session`.

 - Non-primary key: `object_id`, the unique hash of an object in the NWB file.
 - Examples: `Raw`, `Institution`, etc.
 - Required methods:
-  - `make`: must read information from an NWB file and insert it to the table.
-  - `fetch_nwb`: retrieve the data specified by the object ID.
+    - `make`: must read information from an NWB file and insert it into the
+      table.
+    - `fetch_nwb`: retrieve the data specified by the object ID.

 ### Parameters

 session.

 - Naming convention: end with `Selection`
 - Data tier: `dj.Manual`
 - Primary key(s): Foreign key references to
-  - one or more NWB or data tables
-  - optionally, one or more parameter tables
+    - one or more NWB or data tables
+    - optionally, one or more parameter tables
 - Non-primary key: None
 - Examples: `MetricSelection`, `LFPSelection`

 method that carries out the computation specified in the Selection table when

 - Data tier: `dj.Computed`
 - Primary key: Foreign key reference to a Selection table.
 - Non-primary key: `analysis_file_name` inherited from `AnalysisNwbfile` table
-  (i.e., name of the analysis NWB file that will hold the output of the
-  computation).
+    (i.e., name of the analysis NWB file that will hold the output of the
+    computation).
 - Required methods:
-  - `make`: carries out the computation and insert a new entry; must also create
-    an analysis NWB file and insert it to the `AnalysisNwbfile` table. Note that
-    this method is never called directly; it is called via `populate`. Multiple
-    entries can be run in parallel when called with `reserve_jobs=True`.
-  - `delete`: extension of the `delete` method that checks user privilege before
-    deleting entries as a way to prevent accidental deletion of computations
-    that take a long time (see below).
+    - `make`: carries out the computation and inserts a new entry; must also
+      create an analysis NWB file and insert it into the `AnalysisNwbfile`
+      table. Note that this method is never called directly; it is called via
+      `populate`. Multiple entries can be run in parallel when called with
+      `reserve_jobs=True`.
+    - `delete`: extension of the `delete` method that checks user privilege
+      before deleting entries as a way to prevent accidental deletion of
+      computations that take a long time (see below).
 - Example: `QualityMetrics`, `LFPV1`
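The Selection and Data/Computation conventions above combine into a pattern
like the following sketch (hypothetical names and simplified logic; a real
`make` also writes an analysis NWB file and registers it in `AnalysisNwbfile`):

```python
import datajoint as dj

schema = dj.schema("example_pipeline")  # hypothetical schema name


@schema
class MetricParameters(dj.Lookup):
    definition = """
    metric_params_name: varchar(64)
    ---
    metric_params: blob
    """


@schema
class Waveforms(dj.Manual):  # stand-in for an upstream data table
    definition = """
    waveform_id: int
    """


@schema
class MetricSelection(dj.Manual):  # pairs data with parameters
    definition = """
    -> Waveforms
    -> MetricParameters
    """


@schema
class QualityMetrics(dj.Computed):
    definition = """
    -> MetricSelection
    ---
    analysis_file_name: varchar(255)  # analysis NWB file holding the output
    """

    def make(self, key):
        # Compute metrics for `key` and insert the entry. Never called
        # directly; `populate` calls it once per missing selection.
        self.insert1(dict(key, analysis_file_name="placeholder.nwb"))


# reserve_jobs=True lets several processes split the populate workload.
QualityMetrics.populate(reserve_jobs=True)
```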
 ### Merge

 ### NWB files

 all analyses.

 - Naming: `{animal name}YYYYMMDD.nwb`
 - Storage:
-  - On disk, directory identified by `settings.py` as `raw_dir` (e.g.,
-    `/stelmo/nwb/raw`)
-  - In database, in the `Nwbfile` table
+    - On disk, directory identified by `settings.py` as `raw_dir` (e.g.,
+      `/stelmo/nwb/raw`)
+    - In database, in the `Nwbfile` table
 - Copies:
-  - made with an underscore `{animal name}YYYYMMDD_.nwb`
-  - stored in the same `raw_dir`
-  - contain pointers to objects in original file
-  - permit adding new parts to the NWB file without risk of corrupting the
-    original data
+    - made with an underscore `{animal name}YYYYMMDD_.nwb`
+    - stored in the same `raw_dir`
+    - contain pointers to objects in the original file
+    - permit adding new parts to the NWB file without risk of corrupting the
+      original data

 ### Analysis files

 Hold the results of intermediate steps in the analysis.

 - Naming: `{animal name}YYYYMMDD_{10-character random string}.nwb`
 - Storage:
-  - On disk, directory identified by `settings.py` as `analysis_dir` (e.g.,
-    `/stelmo/nwb/analysis`). Items are further sorted into folders matching
-    original NWB file name
-  - In database, in the `AnalysisNwbfile` table.
+    - On disk, directory identified by `settings.py` as `analysis_dir` (e.g.,
+      `/stelmo/nwb/analysis`). Items are further sorted into folders matching
+      the original NWB file name
+    - In database, in the `AnalysisNwbfile` table.
 - Examples: filtered recordings, spike times of putative units after sorting, or
-  waveform snippets.
+    waveform snippets.

 _Note_: Because NWB files and analysis files exist both on disk and listed in
 tables, these can become out of sync. You can 'equalize' the database table

 HDF5 format. This duplication should be resolved in the future.

 The following objects should be uniquely named.

 - _Recordings_: Underscore-separated concatenations of uniquely defining
-  features,
-  `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName`.
+    features,
+    `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName`.
 - _SpikeSorting_: Adds `SpikeSorter_SorterParamName` to the name of the
-  recording.
+    recording.
 - _Waveforms_: Adds `_WaveformParamName` to the name of the sorting.
 - _Quality metrics_: Adds `_MetricParamName` to the name of the waveform.
 - _Analysis NWB files_:
-  `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName.nwb`
+    `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName.nwb`
 - Each recording and sorting is given truncated UUID strings as part of
-  concatenations.
+    concatenations.

 Following broader Python conventions, a method that will not be explicitly
 called by the user should start with `_`.

 faulty connection.

 - Intervals can be nested for a set of disjoint intervals.
 - Some recordings have explicit
-  [PTP timestamps](https://en.wikipedia.org/wiki/Precision_Time_Protocol)
-  associated with each sample. Some older recordings are missing PTP times, and
-  times must be inferred from the TTL pulses from the camera.
+    [PTP timestamps](https://en.wikipedia.org/wiki/Precision_Time_Protocol)
+    associated with each sample. Some older recordings are missing PTP times,
+    and times must be inferred from the TTL pulses from the camera.

 ## Misc

 - During development, we suggest using a Docker container. See
-  [example](./notebooks/00_Setup.ipynb).
+    [example](./notebooks/00_Setup.ipynb).
 - DataJoint is unable to set delete permissions on a per-table basis. If a user
-  is able to delete entries in a given table, she can delete entries in any
-  table in the schema. The `SpikeSorting` table extends the built-in `delete`
-  method to check if the username matches a list of allowed users when `delete`
-  is called. Issues #226 and #586 track the progress of generalizing this
-  feature.
+    is able to delete entries in a given table, she can delete entries in any
+    table in the schema. The `SpikeSorting` table extends the built-in
+    `delete` method to check if the username matches a list of allowed users
+    when `delete` is called (see the sketch below). Issues #226 and #586 track
+    the progress of generalizing this feature.
 - `numpy` style docstrings will be interpreted by API docs. To check for
-  compliance, monitor the std out when building docs (see `docs/README.md`)
+    compliance, monitor the stdout when building docs (see `docs/README.md`)
 - `fetch_nwb` is currently repeated across many tables. For progress on a fix,
-  follow issue #530
+    follow issue #530
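The permission-checking `delete` mentioned in the list above might be sketched
as follows — simplified and hypothetical (the allow-list and class are
illustrations; see `SpikeSorting` and issues #226/#586 for the real mechanism):

```python
import datajoint as dj

ALLOWED_DELETERS = ["alice", "bob"]  # hypothetical allow-list


class GuardedDelete:
    """Mixin adding a user check before DataJoint's cascading delete."""

    def delete(self, *args, **kwargs):
        # dj.conn().get_user() returns the MySQL account, e.g. "alice@%"
        user = dj.conn().get_user().split("@")[0]
        if user not in ALLOWED_DELETERS:
            raise PermissionError(f"{user} may not delete from this table")
        return super().delete(*args, **kwargs)


# Hypothetical usage: class SpikeSorting(GuardedDelete, dj.Computed): ...
```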
 ## Making a release

 Spyglass follows [Semantic Versioning](https://semver.org/) with versioning of
 the form `X.Y.Z` (e.g., `0.4.2`).

 1. In `CITATION.cff`, update the `version` key.
-2. Make a pull request with changes.
-3. After the pull request is merged, pull this merge commit and tag it with
-   `git tag {version}`
-4. Publish the new release tag. Run `git push origin {version}`. This will
-   rebuild docs and push updates to PyPI.
-5. Make a new
-   [release on GitHub](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository).
+1. Make a pull request with changes.
+1. After the pull request is merged, pull this merge commit and tag it with
+    `git tag {version}`.
+1. Publish the new release tag. Run `git push origin {version}`. This will
+    rebuild docs and push updates to PyPI.
+1. Make a new
+    [release on GitHub](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository).

diff --git a/docs/src/misc/database_management.md b/docs/src/misc/database_management.md
index 0d4471510..d4117bbc9 100644
--- a/docs/src/misc/database_management.md
+++ b/docs/src/misc/database_management.md
@@ -34,7 +34,7 @@ schema/database prefix.

 - `SELECT` privileges allow users to read, write, and delete data.
 - `ALL` privileges allow users to create, alter, or drop tables and schemas in
-  addition to operations above.
+    addition to operations above.

 In practice, DataJoint only permits alterations of secondary keys on existing
 tables, and more destructive operations would require using DataJoint to

@@ -76,7 +76,7 @@ migrate the contents to another server. Some conventions to note:

 - `.host`: files used in the host's context
 - `.container`: files used inside the database Docker container
 - `.env`: files used to set environment variables used by the scripts for
-  database name, backup name, and backup credentials
+    database name, backup name, and backup credentials

 ### mysql.env.host

diff --git a/docs/src/misc/insert_data.md b/docs/src/misc/insert_data.md
index dd76c22ce..19daf5a1d 100644
--- a/docs/src/misc/insert_data.md
+++ b/docs/src/misc/insert_data.md
@@ -20,46 +20,46 @@ particular probe is stored in the `ProbeType` and `Probe` tables of
 `spyglass.common`. The user can either:

 1. create these entries programmatically using DataJoint `insert` commands, for
-   example:
+    example:

-   ```python
-   sgc.ProbeType.insert1(
-       {
-           "probe_type": "128c-4s6mm6cm-15um-26um-sl",
-           "probe_description": "A Livermore flexible probe with 128 channels, 4 shanks, 6 mm shank length, 6 cm ribbon length. 15 um contact diameter, 26 um center-to-center distance (pitch), single-line configuration.",
-           "manufacturer": "Lawrence Livermore National Lab",
-           "num_shanks": 4,
-       },
-       skip_duplicates=True,
-   )
-   ```
+    ```python
+    sgc.ProbeType.insert1(
+        {
+            "probe_type": "128c-4s6mm6cm-15um-26um-sl",
+            "probe_description": "A Livermore flexible probe with 128 channels, 4 shanks, 6 mm shank length, 6 cm ribbon length. 15 um contact diameter, 26 um center-to-center distance (pitch), single-line configuration.",
+            "manufacturer": "Lawrence Livermore National Lab",
+            "num_shanks": 4,
+        },
+        skip_duplicates=True,
+    )
+    ```

-2. define these entries in a special YAML file called `entries.yaml` that is
-   processed when `spyglass` is imported. One can think of `entries.yaml` as a
-   place to define information that the database should come pre-equipped prior
-   to ingesting any NWB files. The `entries.yaml` file should be placed in the
-   `spyglass` base directory. An example can be found in
-   `examples/config_yaml/entries.yaml`. It has the following structure:
+1. define these entries in a special YAML file called `entries.yaml` that is
+    processed when `spyglass` is imported. One can think of `entries.yaml` as
+    a place to define information with which the database should come
+    pre-equipped prior to ingesting any NWB files. The `entries.yaml` file
+    should be placed in the `spyglass` base directory. An example can be found
+    in `examples/config_yaml/entries.yaml`. It has the following structure:

-   ```yaml
-   TableName:
-   - TableEntry1Field1: Value
-     TableEntry1Field2: Value
-   - TableEntry2Field1: Value
-     TableEntry2Field2: Value
-   ```
+    ```yaml
+    TableName:
+    - TableEntry1Field1: Value
+      TableEntry1Field2: Value
+    - TableEntry2Field1: Value
+      TableEntry2Field2: Value
+    ```

-   For example,
+    For example,

-   ```yaml
-   ProbeType:
-   - probe_type: 128c-4s6mm6cm-15um-26um-sl
-     probe_description: A Livermore flexible probe with 128 channels, 4 shanks, 6 mm shank
-     length, 6 cm ribbon length. 15 um contact diameter, 26 um center-to-center distance
-     (pitch), single-line configuration.
-     manufacturer: Lawrence Livermore National Lab
-     num_shanks: 4
-   ```
+    ```yaml
+    ProbeType:
+    - probe_type: 128c-4s6mm6cm-15um-26um-sl
+      probe_description: A Livermore flexible probe with 128 channels, 4 shanks, 6 mm shank
+        length, 6 cm ribbon length. 15 um contact diameter, 26 um center-to-center distance
+        (pitch), single-line configuration.
+      manufacturer: Lawrence Livermore National Lab
+      num_shanks: 4
+    ```

 Using a YAML file over programmatically creating these entries in a notebook or
 script has the advantages that the YAML file maintains a record of what entries
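To make the YAML route concrete, a hand-rolled version of the ingestion step
might look like the following sketch (illustrative only; as noted above,
spyglass processes `entries.yaml` automatically on import):

```python
import yaml

import spyglass.common as sgc

# Parse the file, then insert each block of rows into its named table.
with open("entries.yaml") as stream:
    entries = yaml.safe_load(stream)

for table_name, rows in entries.items():
    table = getattr(sgc, table_name)  # e.g., sgc.ProbeType
    table.insert(rows, skip_duplicates=True)
```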
diff --git a/docs/src/misc/merge_tables.md b/docs/src/misc/merge_tables.md
index 72725985b..fd764535e 100644
--- a/docs/src/misc/merge_tables.md
+++ b/docs/src/misc/merge_tables.md
@@ -24,12 +24,12 @@
 A Merge Table is fundamentally a master table with one part for each divergent
 pipeline. By convention...

 1. The master table has one primary key, `merge_id`, a
-   [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier), and one
-   secondary attribute, `source`, which gives the part table name. Both are
-   managed with the custom `insert` function of this class.
+    [UUID](https://en.wikipedia.org/wiki/Universally_unique_identifier), and
+    one secondary attribute, `source`, which gives the part table name. Both
+    are managed with the custom `insert` function of this class.

-2. Each part table has inherits the final table in its respective pipeline, and
-   shares the same name as this table.
+1. Each part table inherits the final table in its respective pipeline, and
+    shares the same name as this table.

 ```python
 from spyglass.utils.dj_merge_tables import _Merge

diff --git a/notebooks/README.md b/notebooks/README.md
index 7bf363aed..af0ad5dd5 100644
--- a/notebooks/README.md
+++ b/notebooks/README.md
@@ -18,7 +18,7 @@ For folks running ephys analysis, one could use either one or both of the
 following...

 1. Spike Sorting, and optionally the Curation notebooks
-2. LFP, and optionally Theta notebooks
+1. LFP, and optionally Theta notebooks

 ## 2. Position

@@ -33,9 +33,9 @@ processing.

 - Ripple Detection: Uses LFP and Position information
 - Extract Marks: Comparing actual and mental position using unclustered spikes
-  and spike waveform features.
+    and spike waveform features.
 - Decoding: Uses either spike-sorted or clusterless ephys analysis to look at
-  mental position.
+    mental position.