diff --git a/notebooks/03_Merge_Tables.ipynb b/notebooks/03_Merge_Tables.ipynb
index 0cbd4e1b7..04cc6ba13 100644
--- a/notebooks/03_Merge_Tables.ipynb
+++ b/notebooks/03_Merge_Tables.ipynb
@@ -31,8 +31,14 @@
 "- For additional info on DataJoint syntax, including table definitions and\n",
 " inserts, see\n",
 " [these additional tutorials](https://github.com/datajoint/datajoint-tutorials)\n",
- "- For information on why we use merge tables, and how to make one, see our \n",
- " [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)\n"
+ "- For information on why we use merge tables, and how to make one, see our\n",
+ " [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)\n",
+ "\n",
+ "In short, merge tables represent the end processing point of a given way of\n",
+ "processing the data in our pipelines. Merge Tables allow us to build new\n",
+ "processing pipelines, or a new version of an existing pipeline, without having to\n",
+ "drop or migrate the old tables. They allow data to be processed in different\n",
+ "ways, but with a unified end result that downstream pipelines can all access.\n"
 ]
 },
 {
@@ -46,7 +52,6 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "\n",
 "Let's start by importing the `spyglass` package, along with a few others.\n"
 ]
 },
@@ -102,7 +107,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "Check to make sure the data inserted in the previour notebook is still there."
+ "Check to make sure the data inserted in the previous notebook is still there.\n"
 ]
 },
 {
@@ -238,7 +243,7 @@
 "_Note_: Some existing parents of Merge Tables perform the Merge Table insert as\n",
 "part of the populate methods. This practice will be revised in the future.\n",
 "\n",
- ""
+ "\n"
 ]
 },
 {
@@ -309,10 +314,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+ "Merge Tables have multiple custom methods that begin with `merge`.\n",
 "\n",
- "Merge Tables have multiple custom methods that begin with `merge`. \n",
- "\n",
- "`help` can show us the docstring of each"
+ "`help` can show us the docstring of each\n"
 ]
 },
 {
@@ -365,7 +369,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Showing data"
+ "## Showing data\n"
 ]
 },
 {
@@ -598,7 +602,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Selecting data"
+ "## Selecting data\n"
 ]
 },
 {
@@ -852,7 +856,7 @@
 "metadata": {},
 "source": [
 "`fetch` will collect all relevant entries and return them as a list in\n",
- " the format specified by keyword arguments and one's DataJoint config.\n"
+ "the format specified by keyword arguments and one's DataJoint config.\n"
 ]
 },
 {
@@ -880,8 +884,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "`merge_fetch` requires a restriction as the first argument. For no restriction, \n",
- "use `True`."
+ "`merge_fetch` requires a restriction as the first argument. For no restriction,\n",
+ "use `True`.\n"
 ]
 },
 {
@@ -936,7 +940,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Deletion from Merge Tables"
+ "## Deletion from Merge Tables\n"
 ]
 },
 {
@@ -956,7 +960,7 @@
 "\n",
 "The two latter cases can be destructive, so we include an extra layer of\n",
 "protection with `dry_run`. When true (by default), these functions return\n",
- "a list of tables with the entries that would otherwise be deleted."
+ "a list of tables with the entries that would otherwise be deleted.\n"
 ]
 },
 {
@@ -978,8 +982,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "To delete all merge table entries associated with an NWB file, use \n",
- "`delete_downstream_merge` with the `Nwbfile` table. \n"
+ "To delete all merge table entries associated with an NWB file, use\n",
+ "`delete_downstream_merge` with the `Nwbfile` table.\n"
 ]
 },
 {
@@ -1000,15 +1004,15 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "## Up Next"
+ "## Up Next\n"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
- "In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with \n",
- "ephys data with spike sorting."
+ "In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with\n",
+ "ephys data with spike sorting.\n"
 ]
 }
 ],
diff --git a/notebooks/py_scripts/03_Merge_Tables.py b/notebooks/py_scripts/03_Merge_Tables.py
index 69cb29600..c4c0abb48 100644
--- a/notebooks/py_scripts/03_Merge_Tables.py
+++ b/notebooks/py_scripts/03_Merge_Tables.py
@@ -32,11 +32,16 @@
 # - For information on why we use merge tables, and how to make one, see our
 # [documentation](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)
 #
+# In short, merge tables represent the end processing point of a given way of
+# processing the data in our pipelines. Merge Tables allow us to build new
+# processing pipelines, or a new version of an existing pipeline, without having to
+# drop or migrate the old tables. They allow data to be processed in different
+# ways, but with a unified end result that downstream pipelines can all access.
+#
 # ## Imports
 #
-#
 # Let's start by importing the `spyglass` package, along with a few others.
 #
@@ -70,6 +75,7 @@
 #
 # Check to make sure the data inserted in the previour notebook is still there.
+#
 nwb_file_name = "minirec20230622.nwb"
 nwb_copy_file_name = get_nwb_copy_filename(nwb_file_name)
@@ -82,6 +88,7 @@
 # part of the populate methods. This practice will be revised in the future.
 #
 #
+#
 sgc.FirFilterParameters().create_standard_filters()
 lfp.lfp_electrode.LFPElectrodeGroup.create_lfp_electrode_group(
@@ -103,10 +110,10 @@
 # ## Helper functions
 #
-#
 # Merge Tables have multiple custom methods that begin with `merge`.
# # `help` can show us the docstring of each +# merge_methods = [d for d in dir(Merge) if d.startswith("merge")] print(merge_methods) @@ -114,6 +121,7 @@ help(getattr(Merge, merge_methods[-1])) # ## Showing data +# # `merge_view` shows a union of the master and all part tables. # @@ -143,6 +151,7 @@ result2 == result1 # ## Selecting data +# # There are also functions for retrieving part/parent table(s) and fetching data. # @@ -156,7 +165,7 @@ result5 # `fetch` will collect all relevant entries and return them as a list in -# the format specified by keyword arguments and one's DataJoint config. +# the format specified by keyword arguments and one's DataJoint config. # result6 = result5.fetch("lfp_sampling_rate") # Sample rate for all mini* files @@ -164,6 +173,7 @@ # `merge_fetch` requires a restriction as the first argument. For no restriction, # use `True`. +# result7 = LFPOutput.merge_fetch(True, "filter_name", "nwb_file_name") result7 @@ -172,6 +182,7 @@ result8 # ## Deletion from Merge Tables +# # When deleting from Merge Tables, we can either... # @@ -187,6 +198,7 @@ # The two latter cases can be destructive, so we include an extra layer of # protection with `dry_run`. When true (by default), these functions return # a list of tables with the entries that would otherwise be deleted. +# LFPOutput.merge_delete(nwb_file_dict) # Delete from merge table LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True) @@ -208,6 +220,8 @@ ) # ## Up Next +# # In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with # ephys data with spike sorting. +#
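The paragraph this diff adds (merge tables as a unified end point over multiple processing pipelines) can be sketched in plain Python. This is an illustrative toy under stated assumptions, not Spyglass's actual implementation: the `MergeTable` class, its `register_part` and `insert` methods, and the toy part-table dicts are hypothetical, and only mimic the behavior of the `merge_fetch`/`merge_view` methods the notebook demonstrates.

```python
import uuid

# Toy stand-ins for two versions of an LFP pipeline (the "part" tables),
# each mapping a primary key to its processed result.
lfp_v0 = {("minirec20230622.nwb", "lfp 0"): {"filter_name": "LFP 0-400 Hz"}}
lfp_v1 = {("minirec20230622.nwb", "lfp 1"): {"filter_name": "LFP 0-500 Hz"}}


class MergeTable:
    """Sketch of a merge table: one master row per entry in any part
    table, giving downstream code a single unified interface."""

    def __init__(self):
        self.master = {}  # merge_id -> (source name, part-table key)
        self.parts = {}   # source name -> part table

    def register_part(self, name, table):
        self.parts[name] = table

    def insert(self, source, key):
        merge_id = str(uuid.uuid4())  # surrogate key hides the source
        self.master[merge_id] = (source, key)
        return merge_id

    def merge_fetch(self, field):
        # Union over all parts: downstream code need not know which
        # pipeline version produced each row.
        return [self.parts[src][key][field] for src, key in self.master.values()]


output = MergeTable()
output.register_part("v0", lfp_v0)
output.register_part("v1", lfp_v1)
for name, table in output.parts.items():
    for key in table:
        output.insert(name, key)

print(sorted(output.merge_fetch("filter_name")))
# → ['LFP 0-400 Hz', 'LFP 0-500 Hz']
```

The surrogate `merge_id` is the design choice that lets old and new pipelines coexist: downstream tables depend only on the master, so adding a new part (a new pipeline version) never requires dropping or migrating existing tables.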