Docs fixes: tidy decoding docstrings, change hatch version get, add inits

CBroz1 committed Jan 18, 2024
1 parent 6705ee0 commit db5273f
Showing 11 changed files with 164 additions and 82 deletions.
3 changes: 3 additions & 0 deletions docs/README.md

@@ -55,3 +55,6 @@ The following items can be commented out in `mkdocs.yml` to reduce build time:
 - `mkdocs-jupyter`: Generates tutorial pages from notebooks.
 
 To end the process in your console, use `ctrl+c`.
+
+If your new submodule is causing a build error (e.g., "Could not collect ..."),
+you may need to add `__init__.py` files to the submodule directories.
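The `__init__.py` fix above can be scripted. A minimal sketch, with the caveat that the `PKG_ROOT` default path and the use of `find` are assumptions for illustration, not part of this commit:

```shell
# Hypothetical helper: ensure every package directory under a source tree
# has an __init__.py so the docs build can collect its submodules.
# PKG_ROOT is an assumed path, not taken from the commit.
PKG_ROOT="${PKG_ROOT:-./src/spyglass}"
find "$PKG_ROOT" -type d ! -name "__pycache__" \
    -exec sh -c '[ -f "$1/__init__.py" ] || touch "$1/__init__.py"' _ {} \;
```

The `[ -f ... ] || touch ...` guard makes the script idempotent: rerunning it never clobbers an existing (possibly non-empty) `__init__.py`.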
7 changes: 4 additions & 3 deletions docs/build-docs.sh

@@ -10,13 +10,14 @@ cp ./LICENSE ./docs/src/LICENSE.md
 mkdir -p ./docs/src/notebooks
 cp ./notebooks/*ipynb ./docs/src/notebooks/
 cp ./notebooks/*md ./docs/src/notebooks/
-cp ./docs/src/notebooks/README.md ./docs/src/notebooks/index.md
+mv ./docs/src/notebooks/README.md ./docs/src/notebooks/index.md
 cp -r ./notebook-images ./docs/src/notebooks/
 cp -r ./notebook-images ./docs/src/
 
 # Get major version
-FULL_VERSION=$(hatch version) # Most recent tag, may include periods
-export MAJOR_VERSION="${FULL_VERSION:0:3}" # First 3 chars of tag
+version_line=$(grep "__version__ =" ./src/spyglass/_version.py)
+version_string=$(echo "$version_line" | awk -F"[\"']" '{print $2}')
+export MAJOR_VERSION="${version_string:0:3}"
 echo "$MAJOR_VERSION"
 
 # Get ahead of errors
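The replacement version logic can be exercised in isolation. A minimal sketch, using a made-up `__version__` line rather than the real `_version.py`:

```shell
# Simulate the line grep would find in _version.py (the value is made up).
version_line='__version__ = "0.4.3"'
# awk splits on either quote character; field 2 is the bare version string.
version_string=$(echo "$version_line" | awk -F"[\"']" '{print $2}')
echo "$version_string"         # 0.4.3
# The first three characters give the major.minor prefix.
echo "${version_string:0:3}"   # 0.4
```

Note that the three-character slice assumes single-digit major and minor numbers; a tag like `0.10.0` would truncate to `0.1`.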
26 changes: 13 additions & 13 deletions docs/mkdocs.yml

@@ -16,9 +16,6 @@ theme:
   favicon: images/Spyglass.svg
   features:
     - toc.follow
-    # - navigation.expand # CBroz1: removed bc long tutorial list hides rest
-    # - toc.integrate
-    # - navigation.sections
     - navigation.top
     - navigation.instant # saves loading time - 1 browser page
     - navigation.tracking # even with above, changes URL by section
@@ -55,27 +52,30 @@ nav:
       - Database Management: misc/database_management.md
   - Tutorials:
       - Overview: notebooks/index.md
-      - General:
+      - Intro:
           - Setup: notebooks/00_Setup.ipynb
           - Insert Data: notebooks/01_Insert_Data.ipynb
           - Data Sync: notebooks/02_Data_Sync.ipynb
           - Merge Tables: notebooks/03_Merge_Tables.ipynb
-      - Ephys:
-          - Spike Sorting: notebooks/10_Spike_Sorting.ipynb
+          - Config Populate: notebooks/04_PopulateConfigFile.ipynb
+      - Spikes:
+          - Spike Sorting V0: notebooks/10_Spike_SortingV0.ipynb
+          - Spike Sorting V1: notebooks/10_Spike_SortingV1.ipynb
           - Curation: notebooks/11_Curation.ipynb
-          - LFP: notebooks/12_LFP.ipynb
-          - Theta: notebooks/14_Theta.ipynb
       - Position:
           - Position Trodes: notebooks/20_Position_Trodes.ipynb
          - DLC From Scratch: notebooks/21_Position_DLC_1.ipynb
          - DLC From Model: notebooks/22_Position_DLC_2.ipynb
          - DLC Prediction: notebooks/23_Position_DLC_3.ipynb
          - Linearization: notebooks/24_Linearization.ipynb
-      - Combined:
-          - Ripple Detection: notebooks/30_Ripple_Detection.ipynb
-          - Extract Mark Indicators: notebooks/31_Extract_Mark_Indicators.ipynb
-          - Decoding with GPUs: notebooks/32_Decoding_with_GPUs.ipynb
-          - Decoding Clusterless: notebooks/33_Decoding_Clusterless.ipynb
+      - LFP:
+          - LFP: notebooks/30_LFP.ipynb
+          - Theta: notebooks/31_Theta.ipynb
+          - Ripple Detection: notebooks/32_Ripple_Detection.ipynb
+      - Decoding:
+          - Extract Clusterless: notebooks/41_Extracting_Clusterless_Waveform_Features.ipynb
+          - Decoding Clusterless: notebooks/42_Decoding_Clusterless.ipynb
+          - Decoding Sorted Spikes: notebooks/43_Decoding_SortedSpikes.ipynb
   - API Reference: api/ # defer to gen-files + literate-nav
   - How to Contribute: contribute.md
   - Change Log: CHANGELOG.md
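After a nav reshuffle like the one above, a quick consistency check helps catch renamed or missing notebooks. This sketch is an assumption, not part of the commit; the paths mirror those used in `build-docs.sh`:

```shell
# Hypothetical check: list notebooks referenced in mkdocs.yml that are
# missing from docs/src after the copy step. File locations are assumptions.
grep -o 'notebooks/[A-Za-z0-9_]*\.ipynb' ./docs/mkdocs.yml | sort -u |
while read -r nb; do
    [ -f "./docs/src/$nb" ] || echo "missing: $nb"
done
```

Empty output means every `.ipynb` path in the nav resolves to a copied file.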
5 changes: 0 additions & 5 deletions docs/src/api/make_pages.py

@@ -28,11 +28,6 @@
     else:
         break
 
-if add_limit is not None:
-    from IPython import embed
-
-    embed()
-
 
 with mkdocs_gen_files.open("api/navigation.md", "w") as nav_file:
     nav_file.write("* [Overview](../api/index.md)\n")
52 changes: 42 additions & 10 deletions notebooks/README.md

@@ -8,32 +8,64 @@ described in the categories below.
 
 ## 0. Intro
 
-Everyone should complete the [Setup](./00_Setup.ipynb) and [Insert Data](./01_Insert_Data.ipynb) notebooks.
+Everyone should complete the [Setup](./00_Setup.ipynb) and
+[Insert Data](./01_Insert_Data.ipynb) notebooks.
 
-[Data Sync](./02_Data_Sync.ipynb) is an optional additional tool for collaborators that want to share analysis files.
+[Data Sync](./02_Data_Sync.ipynb) is an optional additional tool for
+collaborators that want to share analysis files.
 
-The [Merge Tables notebook](./03_Merge_Tables.ipynb) explains details on a new table tier unique to Spyglass that allows the user to use different versions of pipelines on the same data. This is important for understanding the later notebooks.
+The [Merge Tables notebook](./03_Merge_Tables.ipynb) explains details on a new
+table tier unique to Spyglass that allows the user to use different versions of
+pipelines on the same data. This is important for understanding the later
+notebooks.
 
 ## 1. Spike Sorting Pipeline
 
-This series of notebooks covers the process of spike sorting, from automated spike sorting to optional manual curation of the output of the automated sorting.
+This series of notebooks covers the process of spike sorting, from automated
+spike sorting to optional manual curation of the output of the automated
+sorting.
 
 ## 2. Position Pipeline
 
-This series of notebooks covers tracking the position(s) of the animal. The user can employ two different methods:
+This series of notebooks covers tracking the position(s) of the animal. The
+user can employ two different methods:
 
-1. the simple [Trodes](20_Position_Trodes.ipynb) methods of tracking LEDs on the animal's headstage
-2. [DLC (DeepLabCut)](./21_Position_DLC_1.ipynb) which uses a neural network to track the animal's body parts
+1. the simple [Trodes](20_Position_Trodes.ipynb) methods of tracking LEDs on
+   the animal's headstage
+2. [DLC (DeepLabCut)](./21_Position_DLC_1.ipynb) which uses a neural network to
+   track the animal's body parts
 
-Either case can be followed by the [Linearization notebook](./24_Linearization.ipynb) if the user wants to linearize the position data for later use.
+Either case can be followed by the
+[Linearization notebook](./24_Linearization.ipynb) if the user wants to
+linearize the position data for later use.
 
 ## 3. LFP Pipeline
 
-This series of notebooks covers the process of LFP analysis. The [LFP](./30_LFP.ipynb) covers the extraction of the LFP in specific bands from the raw data. The [Theta](./31_Theta.ipynb) notebook shows specifically how to extract the theta band power and phase from the LFP data. Finally the [Ripple Detection](./32_Ripple_Detection.ipynb) notebook shows how to detect ripples in the LFP data.
+This series of notebooks covers the process of LFP analysis. The
+[LFP](./30_LFP.ipynb) covers the extraction of the LFP in specific bands from
+the raw data. The [Theta](./31_Theta.ipynb) notebook shows specifically how to
+extract the theta band power and phase from the LFP data. Finally the
+[Ripple Detection](./32_Ripple_Detection.ipynb) notebook shows how to detect
+ripples in the LFP data.
 
 ## 4. Decoding Pipeline
 
-This series of notebooks covers the process of decoding the position of the animal from spiking data. It relies on the position data from the Position pipeline and the output of spike sorting from the Spike Sorting pipeline. Decoding can be from sorted or from unsorted data using spike waveform features (so-called clusterless decoding). The first notebook([Extracting Clusterless Waveform Features](./41_Extracting_Clusterless_Waveform_Features.ipynb)) in this series shows how to retrieve the spike waveform features used for clusterless decoding. The second notebook ([Clusterless Decoding](./42_Decoding_Clusterless.ipynb)) shows a detailed example of how to decode the position of the animal from the spike waveform features. The third notebook ([Decoding](./43_Decoding.ipynb)) shows how to decode the position of the animal from the sorted spikes.
+This series of notebooks covers the process of decoding the position of the
+animal from spiking data. It relies on the position data from the Position
+pipeline and the output of spike sorting from the Spike Sorting pipeline.
+Decoding can be from sorted or from unsorted data using spike waveform features
+(so-called clusterless decoding).
+
+The first notebook
+([Extracting Clusterless Waveform Features](./41_Extracting_Clusterless_Waveform_Features.ipynb))
+in this series shows how to retrieve the spike waveform features used for
+clusterless decoding.
+
+The second notebook
+([Clusterless Decoding](./42_Decoding_Clusterless.ipynb)) shows a detailed
+example of how to decode the position of the animal from the spike waveform
+features. The third notebook ([Decoding](./43_Decoding.ipynb)) shows how to
+decode the position of the animal from the sorted spikes.
 
 ## Developer note
2 changes: 1 addition & 1 deletion src/spyglass/common/common_lab.py

@@ -108,7 +108,7 @@ def get_djuser_name(cls, dj_user) -> str:
     Parameters
     ----------
-    user: str
+    dj_user: str
         The datajoint user name.
 
     Returns