Commit

add mdformat-mkdocs
CBroz1 committed Nov 18, 2023
1 parent ddeaf92 commit 8dabdbc
Showing 8 changed files with 134 additions and 132 deletions.
1 change: 1 addition & 0 deletions .pre-commit-config.yaml
@@ -13,6 +13,7 @@ repos:

      types: [markdown]
      args: [--wrap, "80", --number]
      additional_dependencies:
        - mdformat-mkdocs
        - mdformat-toc
        - mdformat-beautysh
        - mdformat-config
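For context, this hunk sits inside an `mdformat` hook entry in `.pre-commit-config.yaml`. A minimal complete hook block might look like the following sketch; the repo URL and `rev` pin are illustrative assumptions, not taken from this diff:

```yaml
repos:
  - repo: https://github.com/hukkin/mdformat  # assumed upstream repo
    rev: 0.7.17  # illustrative pin; use the project's actual rev
    hooks:
      - id: mdformat
        types: [markdown]
        args: [--wrap, "80", --number]
        additional_dependencies:
          - mdformat-mkdocs
          - mdformat-toc
          - mdformat-beautysh
          - mdformat-config
```

With `mdformat-mkdocs` added, the formatter switches markdown lists to the 4-space indentation that mkdocs expects, which is what produces the many indentation-only changes in the rest of this commit.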
38 changes: 19 additions & 19 deletions CHANGELOG.md
@@ -26,17 +26,17 @@

### Pipelines

- Common:
    - Added support for multiple cameras per epoch. #557
    - Removed `common_backup` schema. #631
    - Added support for multiple position objects per NWB in `common_behav` via
      PositionSource.SpatialSeries and RawPosition.PosObject #628, #616. _Note:_
      Existing functions have been made compatible, but column labels for
      `RawPosition.fetch1_dataframe` may change.
- Spike sorting:
    - Added pipeline populator. #637, #646, #647
    - Fixed curation functionality for `nn_isolation`. #597, #598
- Position: Added position interval/epoch mapping via PositionIntervalMap. #620,
    #621, #627
- LFP: Refactored pipeline. #594, #588, #605, #606, #607, #608, #615, #629

## [0.4.1] (June 30, 2023)
@@ -47,41 +47,41 @@
## [0.4.0] (May 22, 2023)

- Updated call to `spikeinterface.preprocessing.whiten` to use dtype np.float16.
    #446
- Updated default spike sorting metric parameters. #447
- Updated whitening to be compatible with recent changes in spikeinterface when
    using mountainsort. #449
- Moved LFP pipeline to `src/spyglass/lfp/v1` and addressed related usability
    issues. #468, #478, #482, #484, #504
- Removed whiten parameter for clusterless thresholder. #454
- Added plot to plot all DIO events in a session. #457
- Added file sharing functionality through kachery_cloud. #458, #460
- Pinned numpy version to `numpy<1.24`
- Added scripts to add guests and collaborators as users. #463
- Cleaned up installation instructions in repo README. #467
- Added checks in decoding visualization to ensure time dimensions are the
    correct length.
- Fixed artifact-removed valid times. #472
- Added codespell workflow for spell checking and fixed typos. #471
- Updated LFP code to save LFP as `pynwb.ecephys.LFP` type. #475
- Added artifact detection to LFP pipeline. #473
- Replaced calls to `spikeinterface.sorters.get_default_params` with
    `spikeinterface.sorters.get_default_sorter_params`. #486
- Updated position pipeline and added functionality to handle pose estimation
    through DeepLabCut. #367, #505
- Updated `environment_position.yml`. #502
- Renamed `FirFilter` class to `FirFilterParameters`. #512

## [0.3.4] (March 30, 2023)

- Fixed error in spike sorting pipeline referencing the "probe_type" column,
    which is no longer accessible from the `Electrode` table. #437
- Fixed error when inserting an NWB file that does not have a probe
    manufacturer. #433, #436
- Fixed error when adding a new `DataAcquisitionDevice` and a new `ProbeType`.
    #436
- Fixed inconsistency between capitalized/uncapitalized versions of "Intan" for
    DataAcquisitionAmplifier and DataAcquisitionDevice.adc_circuit. #430, #438

## [0.3.3] (March 29, 2023)

2 changes: 1 addition & 1 deletion docs/README.md
@@ -6,7 +6,7 @@
section of this file. New pages should be either:

1. A markdown file in the `docs/` directory.
1. A Jupyter notebook in the `notebooks/` directory.

The remainder of `mkdocs.yml` specifies the site's
[configuration](https://www.mkdocs.org/user-guide/configuration/)
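As a sketch of the kind of `nav` entry this implies, a minimal fragment might look like the following; the page paths and titles are illustrative, not taken from the repository's actual `mkdocs.yml`:

```yaml
nav:
  - Home: index.md
  - Contribute: contribute.md
  - Notebooks:
      - Setup: notebooks/00_Setup.ipynb
```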
125 changes: 63 additions & 62 deletions docs/src/contribute.md
@@ -17,20 +17,20 @@ for features that will involve multiple contributors.

- Tables are grouped into schemas by topic (e.g., `common_metrics`)
- Schemas
    - Are defined in a `.py` file.
    - Correspond to MySQL 'databases'.
    - Are organized into modules (e.g., `common`) by folders.
- The _common_ module
    - In principle, contains schema that are shared across all projects.
    - In practice, contains shared tables (e.g., Session) and the first draft of
      schemas that have since been split into their own modality-specific
      modules (e.g., `lfp`)
    - Should not be added to without discussion.
- A pipeline
    - Refers to a set of tables used for processing data of a particular
      modality (e.g., LFP, spike sorting, position tracking).
    - May span multiple schema.
- For analyses that will only be useful to you, create your own schema.

## Types of tables
@@ -54,7 +54,7 @@ Tables shared across multiple pipelines for shared data types.
- Naming convention: None
- Data tier: `dj.Manual`
- Examples: `IntervalList` (time interval for any analysis), `AnalysisNwbfile`
    (analysis NWB files)

_Note_: Because these are stand-alone tables not part of the dependency
structure, developers should include enough information to link entries back to
@@ -72,8 +72,8 @@ should be included in the `make` method of `Session`.
- Non-primary key: `object_id`, the unique hash of an object in the NWB file.
- Examples: `Raw`, `Institution`, etc.
- Required methods:
    - `make`: must read information from an NWB file and insert it into the
      table.
    - `fetch_nwb`: retrieve the data specified by the object ID.
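The `make`/`fetch_nwb` contract above can be sketched in pure Python. This is not real DataJoint code: the schema decorator, the `dj.Imported` base class, and actual NWB I/O are omitted, and all names here are hypothetical stand-ins.

```python
class NwbObjectTable:
    """Sketch of an NWB-file table: `make` reads from a (mocked) NWB file
    and stores the object's hash; `fetch_nwb` retrieves data by object ID."""

    def __init__(self):
        self._rows = {}  # nwb_file_name -> object_id

    def make(self, nwb_file_name, nwb_file):
        # In Spyglass, `make` reads the NWB file itself; here a dict stands in.
        self._rows[nwb_file_name] = nwb_file["object_id"]

    def fetch_nwb(self, nwb_file_name, nwb_objects):
        # Retrieve the data specified by the stored object ID.
        return nwb_objects[self._rows[nwb_file_name]]


# Usage with mock data standing in for a real NWB file:
mock_file = {"object_id": "abc-123"}
objects = {"abc-123": [0.1, 0.2, 0.3]}  # object_id -> data
table = NwbObjectTable()
table.make("frank20231118.nwb", mock_file)
print(table.fetch_nwb("frank20231118.nwb", objects))
```

The key design point is that the table row stores only the object's ID, not the data itself; the data stays in the NWB file on disk.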

### Parameters

Expand All @@ -99,8 +99,8 @@ session.
- Naming convention: end with `Selection`
- Data tier: `dj.Manual`
- Primary key(s): Foreign key references to
    - one or more NWB or data tables
    - optionally, one or more parameter tables
- Non-primary key: None
- Examples: `MetricSelection`, `LFPSelection`

@@ -119,16 +119,17 @@ method that carries out the computation specified in the Selection table when
- Data tier: `dj.Computed`
- Primary key: Foreign key reference to a Selection table.
- Non-primary key: `analysis_file_name` inherited from `AnalysisNwbfile` table
    (i.e., name of the analysis NWB file that will hold the output of the
    computation).
- Required methods:
    - `make`: carries out the computation and inserts a new entry; must also
      create an analysis NWB file and insert it into the `AnalysisNwbfile`
      table. Note that this method is never called directly; it is called via
      `populate`. Multiple entries can be run in parallel when called with
      `reserve_jobs=True`.
    - `delete`: extension of the `delete` method that checks user privilege
      before deleting entries as a way to prevent accidental deletion of
      computations that take a long time (see below).
- Examples: `QualityMetrics`, `LFPV1`

### Merge
@@ -154,28 +154,28 @@

- Naming: `{animal name}YYYYMMDD.nwb`
- Storage:
    - On disk, directory identified by `settings.py` as `raw_dir` (e.g.,
      `/stelmo/nwb/raw`)
    - In database, in the `Nwbfile` table
- Copies:
    - made with an underscore: `{animal name}YYYYMMDD_.nwb`
    - stored in the same `raw_dir`
    - contain pointers to objects in the original file
    - permit adding new parts to the NWB file without risk of corrupting the
      original data

### Analysis files

Hold the results of intermediate steps in the analysis.

- Naming: `{animal name}YYYYMMDD_{10-character random string}.nwb`
- Storage:
    - On disk, directory identified by `settings.py` as `analysis_dir` (e.g.,
      `/stelmo/nwb/analysis`). Items are further sorted into folders matching
      the original NWB file name.
    - In database, in the `AnalysisNwbfile` table.
- Examples: filtered recordings, spike times of putative units after sorting, or
    waveform snippets.
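The analysis-file naming scheme above can be sketched with a small helper. This is a hypothetical illustration, not Spyglass's actual implementation:

```python
import random
import string


def analysis_file_name(animal: str, date: str) -> str:
    """Build `{animal name}YYYYMMDD_{10-character random string}.nwb`."""
    suffix = "".join(random.choices(string.ascii_uppercase + string.digits, k=10))
    return f"{animal}{date}_{suffix}.nwb"


name = analysis_file_name("frank", "20231118")
print(name)  # e.g. frank20231118_7K2QX9B4.nwb-style, with a 10-char suffix
```

The random suffix keeps repeated runs of the same analysis from colliding on disk, since each run writes a fresh file under the original NWB file's folder.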

_Note_: Because NWB files and analysis files exist both on disk and as entries
in database tables, the two can fall out of sync. You can 'equalize' the database table
@@ -194,16 +194,16 @@ HDF5 format. This duplication should be resolved in the future.
The following objects should be uniquely named.

- _Recordings_: Underscore-separated concatenations of uniquely defining
    features:
    `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName`.
- _SpikeSorting_: Adds `SpikeSorter_SorterParamName` to the name of the
    recording.
- _Waveforms_: Adds `_WaveformParamName` to the name of the sorting.
- _Quality metrics_: Adds `_MetricParamName` to the name of the waveform.
- _Analysis NWB files_:
    `NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName.nwb`
- Each recording and sorting is given a truncated UUID string as part of these
    concatenations.
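The naming convention above can be sketched with hypothetical helpers (these are illustrations of the pattern, not functions from the Spyglass codebase; the example values are made up):

```python
def recording_name(nwb_file, interval, electrode_group, preproc_params):
    """NWBFileName_IntervalName_ElectrodeGroupName_PreprocessingParamsName"""
    return "_".join([nwb_file, interval, electrode_group, preproc_params])


def sorting_name(recording, sorter, sorter_params):
    """Adds SpikeSorter_SorterParamName to the name of the recording."""
    return "_".join([recording, sorter, sorter_params])


rec = recording_name("frank20231118", "02_r1", "tetrode1", "default")
print(rec)  # frank20231118_02_r1_tetrode1_default
print(sorting_name(rec, "mountainsort4", "franklab"))
```

Each downstream stage simply appends its own uniquely defining parameters to the upstream name, so a name alone identifies the full processing chain.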

Following broader Python conventions, a method that will not be explicitly
called by the user should start with `_`
@@ -217,35 +217,35 @@

- Intervals can be nested for a set of disjoint intervals.
- Some recordings have explicit
    [PTP timestamps](https://en.wikipedia.org/wiki/Precision_Time_Protocol)
    associated with each sample. Some older recordings are missing PTP times,
    and times must be inferred from the TTL pulses from the camera.
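A minimal sketch of working with a set of disjoint intervals, of the kind stored in `IntervalList` (illustrative only; Spyglass's real interval handling is richer than this):

```python
def in_intervals(t, intervals):
    """Return True if time t falls inside any [start, stop) interval."""
    return any(start <= t < stop for start, stop in intervals)


# A set of disjoint intervals, e.g. valid times around an epoch:
valid_times = [(0.0, 10.0), (15.0, 20.0)]
print(in_intervals(12.0, valid_times))  # False: falls in the gap
print(in_intervals(16.5, valid_times))  # True: inside the second interval
```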

## Misc

- During development, we suggest using a Docker container. See
    [example](./notebooks/00_Setup.ipynb).
- DataJoint is unable to set delete permissions on a per-table basis. If a user
    is able to delete entries in a given table, she can delete entries in any
    table in the schema. The `SpikeSorting` table extends the built-in `delete`
    method to check if the username matches a list of allowed users when
    `delete` is called. Issues #226 and #586 track the progress of generalizing
    this feature.
- `numpy` style docstrings will be interpreted by the API docs. To check for
    compliance, monitor stdout when building the docs (see `docs/README.md`).
- `fetch_nwb` is currently repeated across many tables. For progress on a fix,
    follow issue #530.
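The guarded `delete` pattern can be sketched in pure Python. The real check lives on the `SpikeSorting` table in DataJoint; the class, allowlist, and user names below are hypothetical:

```python
class GuardedTable:
    """Sketch: refuse `delete` unless the caller is on an allowlist."""

    ALLOWED_USERS = {"alice", "bob"}  # hypothetical allowlist

    def __init__(self, current_user):
        self.current_user = current_user
        self.entries = ["sorting_1", "sorting_2"]

    def delete(self):
        # Check privilege before destroying expensive computations.
        if self.current_user not in self.ALLOWED_USERS:
            raise PermissionError(
                f"{self.current_user} may not delete from this table"
            )
        self.entries.clear()


table = GuardedTable(current_user="alice")
table.delete()  # allowed: entries are cleared
print(table.entries)  # []
```

An unlisted user raises `PermissionError` instead of silently deleting, which is the safety property the section describes.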

## Making a release

Spyglass follows [Semantic Versioning](https://semver.org/) with versioning of
the form `X.Y.Z` (e.g., `0.4.2`).

1. In `CITATION.cff`, update the `version` key.
1. Make a pull request with changes.
1. After the pull request is merged, pull this merge commit and tag it with
    `git tag {version}`.
1. Publish the new release tag. Run `git push origin {version}`. This will
    rebuild docs and push updates to PyPI.
1. Make a new
    [release on GitHub](https://docs.github.com/en/repositories/releasing-projects-on-github/managing-releases-in-a-repository).
4 changes: 2 additions & 2 deletions docs/src/misc/database_management.md
@@ -34,7 +34,7 @@ schema/database prefix.

- `SELECT` privileges allow users to read, write, and delete data.
- `ALL` privileges allow users to create, alter, or drop tables and schemas in
    addition to the operations above.
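As a rough sketch, these two tiers might be granted in MySQL as follows; the user names and the `myschema_%` prefix are illustrative, and the exact privilege list Spyglass uses is not shown in this diff:

```sql
-- Read/write tier (grouped above under 'SELECT' privileges):
GRANT SELECT, INSERT, UPDATE, DELETE ON `myschema\_%`.* TO 'collab_user'@'%';

-- Full tier: may also create, alter, or drop tables and schemas:
GRANT ALL ON `myschema\_%`.* TO 'admin_user'@'%';
```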

In practice, DataJoint only permits alterations of secondary keys on existing
tables, and more destructive operations would require using DataJoint to
@@ -76,7 +76,7 @@ migrate the contents to another server. Some conventions to note:
- `.host`: files used in the host's context
- `.container`: files used inside the database Docker container
- `.env`: files used to set environment variables used by the scripts for
    database name, backup name, and backup credentials

### mysql.env.host
