Commit

CBroz1 committed Jan 24, 2024
1 parent 1e5661e commit 21346e5
Showing 2 changed files with 41 additions and 23 deletions.
44 changes: 24 additions & 20 deletions notebooks/03_Merge_Tables.ipynb
@@ -31,8 +31,14 @@
"- For additional info on DataJoint syntax, including table definitions and\n",
" inserts, see\n",
" [these additional tutorials](https://github.com/datajoint/datajoint-tutorials)\n",
"- For information on why we use merge tables, and how to make one, see our \n",
" [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)\n"
"- For information on why we use merge tables, and how to make one, see our\n",
" [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)\n",
"\n",
"In short, merge tables represent the end processing point of a given way of\n",
"processing the data in our pipelines. Merge Tables allow us to build a new\n",
"processing pipeline, or a new version of an existing pipeline, without having to\n",
"drop or migrate the old tables. They allow data to be processed in different\n",
"ways, but with a unified end result that downstream pipelines can all access.\n"
]
},
{
@@ -46,7 +52,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"Let's start by importing the `spyglass` package, along with a few others.\n"
]
},
@@ -102,7 +107,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Check to make sure the data inserted in the previous notebook is still there."
"Check to make sure the data inserted in the previous notebook is still there.\n"
]
},
{
@@ -238,7 +243,7 @@
"_Note_: Some existing parents of Merge Tables perform the Merge Table insert as\n",
"part of the populate methods. This practice will be revised in the future.\n",
"\n",
"<!-- TODO: Add entry to another parent to cover mutual exclusivity issues. -->"
"<!-- TODO: Add entry to another parent to cover mutual exclusivity issues. -->\n"
]
},
{
@@ -309,10 +314,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Merge Tables have multiple custom methods that begin with `merge`.\n",
"\n",
"Merge Tables have multiple custom methods that begin with `merge`. \n",
"\n",
"`help` can show us the docstring of each"
"`help` can show us the docstring of each\n"
]
},
{
@@ -365,7 +369,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Showing data"
"## Showing data\n"
]
},
{
@@ -598,7 +602,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Selecting data"
"## Selecting data\n"
]
},
{
@@ -852,7 +856,7 @@
"metadata": {},
"source": [
"`fetch` will collect all relevant entries and return them as a list in\n",
" the format specified by keyword arguments and one's DataJoint config.\n"
"the format specified by keyword arguments and one's DataJoint config.\n"
]
},
{
@@ -880,8 +884,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`merge_fetch` requires a restriction as the first argument. For no restriction, \n",
"use `True`."
"`merge_fetch` requires a restriction as the first argument. For no restriction,\n",
"use `True`.\n"
]
},
{
@@ -936,7 +940,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Deletion from Merge Tables"
"## Deletion from Merge Tables\n"
]
},
{
@@ -956,7 +960,7 @@
"\n",
"The two latter cases can be destructive, so we include an extra layer of\n",
"protection with `dry_run`. When true (by default), these functions return\n",
"a list of tables with the entries that would otherwise be deleted."
"a list of tables with the entries that would otherwise be deleted.\n"
]
},
{
@@ -978,8 +982,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"To delete all merge table entries associated with an NWB file, use \n",
"`delete_downstream_merge` with the `Nwbfile` table. \n"
"To delete all merge table entries associated with an NWB file, use\n",
"`delete_downstream_merge` with the `Nwbfile` table.\n"
]
},
{
@@ -1000,15 +1004,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Up Next"
"## Up Next\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with \n",
"ephys data with spike sorting."
"In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with\n",
"ephys data with spike sorting.\n"
]
}
],
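The merge-table idea this diff documents ("a unified end result that downstream pipelines can all access") can be sketched in plain Python. This is a toy stand-in with hypothetical names (`MergeTableSketch`, its `merge_view` and `merge_fetch` methods mimic the notebook's calls); a real Spyglass Merge Table is a DataJoint master keyed by a `merge_id` UUID, with one part table per pipeline version.

```python
import uuid


class MergeTableSketch:
    """Toy stand-in for a Merge Table: a master of merge_ids, each
    pointing at exactly one row in exactly one part table."""

    def __init__(self, parts):
        # parts: {part_name: list of row dicts}, one part per pipeline
        self.parts = parts
        # master: merge_id -> (part_name, row index in that part)
        self.master = {
            uuid.uuid4(): (name, i)
            for name, rows in parts.items()
            for i in range(len(rows))
        }

    def merge_view(self):
        # Union of all part tables, each row tagged with its source part
        return [
            {"merge_id": mid, "source": name, **self.parts[name][i]}
            for mid, (name, i) in self.master.items()
        ]

    def merge_fetch(self, restriction, field):
        # restriction=True means "no restriction", as in the notebook
        return [
            row[field]
            for row in self.merge_view()
            if restriction is True
            or all(row.get(k) == v for k, v in restriction.items())
        ]


# Two "pipelines" producing a unified end result downstream code can query
lfp_v1 = [{"nwb_file_name": "minirec20230622_.nwb", "filter_name": "LFP 0-400 Hz"}]
imported_lfp = [{"nwb_file_name": "other_.nwb", "filter_name": "imported"}]
out = MergeTableSketch({"LFPV1": lfp_v1, "ImportedLFP": imported_lfp})

print(len(out.merge_view()))                        # 2 rows in the union
print(sorted(out.merge_fetch(True, "filter_name")))
```

The point of the pattern is visible in the last two lines: consumers query one table and never need to know which pipeline (part) produced a given row.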
20 changes: 17 additions & 3 deletions notebooks/py_scripts/03_Merge_Tables.py
@@ -32,11 +32,16 @@
# - For information on why we use merge tables, and how to make one, see our
# [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)
#
# In short, merge tables represent the end processing point of a given way of
# processing the data in our pipelines. Merge Tables allow us to build a new
# processing pipeline, or a new version of an existing pipeline, without having to
# drop or migrate the old tables. They allow data to be processed in different
# ways, but with a unified end result that downstream pipelines can all access.
#

# ## Imports
#

#
# Let's start by importing the `spyglass` package, along with a few others.
#

@@ -70,6 +75,7 @@
#

# Check to make sure the data inserted in the previous notebook is still there.
#

nwb_file_name = "minirec20230622.nwb"
nwb_copy_file_name = get_nwb_copy_filename(nwb_file_name)
@@ -82,6 +88,7 @@
# part of the populate methods. This practice will be revised in the future.
#
# <!-- TODO: Add entry to another parent to cover mutual exclusivity issues. -->
#

sgc.FirFilterParameters().create_standard_filters()
lfp.lfp_electrode.LFPElectrodeGroup.create_lfp_electrode_group(
@@ -103,17 +110,18 @@
# ## Helper functions
#

#
# Merge Tables have multiple custom methods that begin with `merge`.
#
# `help` can show us the docstring of each
#

merge_methods = [d for d in dir(Merge) if d.startswith("merge")]
print(merge_methods)

help(getattr(Merge, merge_methods[-1]))

# ## Showing data
#

# `merge_view` shows a union of the master and all part tables.
#
@@ -143,6 +151,7 @@
result2 == result1

# ## Selecting data
#

# There are also functions for retrieving part/parent table(s) and fetching data.
#
@@ -156,14 +165,15 @@
result5

# `fetch` will collect all relevant entries and return them as a list in
# the format specified by keyword arguments and one's DataJoint config.
# the format specified by keyword arguments and one's DataJoint config.
#

result6 = result5.fetch("lfp_sampling_rate") # Sample rate for all mini* files
result6

# `merge_fetch` requires a restriction as the first argument. For no restriction,
# use `True`.
#

result7 = LFPOutput.merge_fetch(True, "filter_name", "nwb_file_name")
result7
@@ -172,6 +182,7 @@
result8

# ## Deletion from Merge Tables
#

# When deleting from Merge Tables, we can either...
#
@@ -187,6 +198,7 @@
# The two latter cases can be destructive, so we include an extra layer of
# protection with `dry_run`. When true (by default), these functions return
# a list of tables with the entries that would otherwise be deleted.
#

LFPOutput.merge_delete(nwb_file_dict) # Delete from merge table
LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)
@@ -208,6 +220,8 @@
)

# ## Up Next
#

# In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with
# ephys data with spike sorting.
#

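The `dry_run` guard the script uses before destructive merge deletions follows a common pattern: when enabled (the default), report what would be deleted and touch nothing. A minimal sketch with a hypothetical `delete_entries` helper (not the Spyglass API, which operates on DataJoint tables rather than lists):

```python
def delete_entries(table, restriction, dry_run=True):
    """Delete rows matching `restriction`; with dry_run=True, only
    report what would be deleted (the destructive path never runs)."""
    matches = [
        row for row in table
        if all(row.get(k) == v for k, v in restriction.items())
    ]
    if dry_run:
        return matches  # inspect these before committing to deletion
    for row in matches:
        table.remove(row)  # destructive path
    return matches


table = [
    {"nwb_file_name": "minirec20230622_.nwb"},
    {"nwb_file_name": "other_.nwb"},
]
would_delete = delete_entries(table, {"nwb_file_name": "other_.nwb"})
print(len(table), len(would_delete))  # 2 1 (nothing deleted yet)
delete_entries(table, {"nwb_file_name": "other_.nwb"}, dry_run=False)
print(len(table))  # 1
```

Returning the matched rows in both branches lets callers preview a dry run and then re-invoke with `dry_run=False` once the preview looks right, which is the workflow the notebook recommends for `merge_delete_parent`.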