diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0ddb62a59..27779bfd3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -18,11 +18,10 @@
- Update docs to reflect new notebooks. #776
- Add overview of Spyglass to docs. #779
-
### Pipelines
- Spike sorting: Add SpikeSorting V1 pipeline. #651
-- LFP:
+- LFP:
- Minor fixes to LFPBandV1 populator and `make`. #706, #795
- LFPV1: Fix error for multiple lfp settings on same data #775
- Linearization:
diff --git a/docs/src/misc/merge_tables.md b/docs/src/misc/merge_tables.md
index c11e82670..1cd4b000b 100644
--- a/docs/src/misc/merge_tables.md
+++ b/docs/src/misc/merge_tables.md
@@ -16,17 +16,17 @@ deleting a part entry before the master. To circumvent this, you can add
[`delete` function](https://datajoint.com/docs/core/datajoint-python/0.14/api/datajoint/__init__/#datajoint.table.Table.delete)
call, but this will leave an orphaned primary key in the master. Instead, use
`(YourTable & restriction).delete_downstream_merge()` to delete master/part
-pairs. If errors persist, identify and import the offending part table and
-rerun `delete_downstream_merge` with `reload_cache=True`. This process will
-be faster for subsequent calls if you reassign the your table after importing.
+pairs. If errors persist, identify and import the offending part table and rerun
+`delete_downstream_merge` with `reload_cache=True`. This process will be faster
+for subsequent calls if you reassign your table after importing.
```python
from spyglass.common import Nwbfile
+
nwbfile = Nwbfile()
(nwbfile & "nwb_file_name LIKE 'Name%'").delete_downstream_merge()
```
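+
+If the needed part table was imported only after the table cache was built,
+rerunning with a fresh cache may resolve the error. A minimal sketch, assuming
+`LFPOutput` is the merge table whose part is blocking the delete:
+
+```python
+from spyglass.lfp.lfp_merge import LFPOutput  # import so the cache can find it
+
+nwbfile = Nwbfile()
+(nwbfile & "nwb_file_name LIKE 'Name%'").delete_downstream_merge(reload_cache=True)
+```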
-
## What
A Merge Table is fundamentally a master table with one part for each divergent
diff --git a/notebooks/01_Insert_Data.ipynb b/notebooks/01_Insert_Data.ipynb
index f0d89cdfa..de31ea7c8 100644
--- a/notebooks/01_Insert_Data.ipynb
+++ b/notebooks/01_Insert_Data.ipynb
@@ -45,8 +45,8 @@
"name": "stderr",
"output_type": "stream",
"text": [
- "[2023-10-05 11:48:12,292][INFO]: Connecting root@localhost:3306\n",
- "[2023-10-05 11:48:12,302][INFO]: Connected root@localhost:3306\n"
+ "[2024-01-29 16:24:30,933][INFO]: Connecting root@localhost:3309\n",
+ "[2024-01-29 16:24:30,942][INFO]: Connected root@localhost:3309\n"
]
}
],
@@ -719,9 +719,9 @@
"\n",
"- `minirec20230622.nwb`, .3 GB: minimal recording,\n",
" [Link](https://ucsf.box.com/s/k3sgql6z475oia848q1rgms4zdh4rkjn)\n",
- "- `mediumnwb20230802.nwb`, 32 GB: full-featured dataset, \n",
- " [Link](https://ucsf.box.com/s/2qbhxghzpttfam4b7q7j8eg0qkut0opa) \n",
- "- `montague20200802.nwb`, 8 GB: full experimental recording, \n",
+ "- `mediumnwb20230802.nwb`, 32 GB: full-featured dataset,\n",
+ " [Link](https://ucsf.box.com/s/2qbhxghzpttfam4b7q7j8eg0qkut0opa)\n",
+ "- `montague20200802.nwb`, 8 GB: full experimental recording,\n",
" [Link](https://ucsf.box.com/s/26je2eytjpqepyznwpm92020ztjuaomb)\n",
"- For those in the UCSF network, these and many others on `/stelmo/nwb/raw`\n",
"\n",
@@ -747,7 +747,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Spyglass will create a copy with this name."
+ "Spyglass will create a copy with this name.\n"
]
},
{
@@ -1072,7 +1072,6 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "\n",
"`spyglass.data_import.insert_sessions` helps take the many fields of data\n",
"present in an NWB file and insert them into various tables across Spyglass. If\n",
"the NWB file is properly composed, this includes...\n",
@@ -1082,8 +1081,8 @@
"- neural activity (extracellular recording of multiple brain areas)\n",
"- etc.\n",
"\n",
- "_Note:_ this may take time as Spyglass creates the copy. You may see a prompt \n",
- "about inserting device information."
+ "_Note:_ this may take time as Spyglass creates the copy. You may see a prompt\n",
+ "about inserting device information.\n"
]
},
{
@@ -2053,21 +2052,20 @@
"metadata": {},
"source": [
"`IntervalList` has an additional secondary key `pipeline` which can describe the origin of the data.\n",
- "Because it is a _secondary_ key, it is not required to uniquely identify an entry. \n",
+ "Because it is a _secondary_ key, it is not required to uniquely identify an entry.\n",
"Current values for this key from spyglass pipelines are:\n",
"\n",
- "| pipeline | Source|\n",
- "| --- | --- |\n",
- "| position | sg.common.PositionSource |\n",
- "| lfp_v0 | sg.common.LFP |\n",
- "| lfp_v1 | sg.lfp.v1.LFPV1 |\n",
-    "| lfp_band | sg.common.LFPBand,<br> sg.lfp.analysis.v1.LFPBandV1 |\n",
- "| lfp_artifact | sg.lfp.v1.LFPArtifactDetection |\n",
- "| spikesorting_artifact_v0 | sg.spikesorting.ArtifactDetection |\n",
- "| spikesorting_artifact_v1 | sg.spikesorting.v1.ArtifactDetection |\n",
- "| spikesorting_recording_v0 | sg.spikesorting.SpikeSortingRecording |\n",
- "| spikesorting_recording_v1 | sg.spikesorting.v1.SpikeSortingRecording |\n",
- "\n"
+ "| pipeline | Source |\n",
+ "| ------------------------- | --------------------------------------------------- |\n",
+ "| position | sg.common.PositionSource |\n",
+ "| lfp_v0 | sg.common.LFP |\n",
+ "| lfp_v1 | sg.lfp.v1.LFPV1 |\n",
+    "| lfp_band                  | sg.common.LFPBand,<br> sg.lfp.analysis.v1.LFPBandV1 |\n",
+ "| lfp_artifact | sg.lfp.v1.LFPArtifactDetection |\n",
+ "| spikesorting_artifact_v0 | sg.spikesorting.ArtifactDetection |\n",
+ "| spikesorting_artifact_v1 | sg.spikesorting.v1.ArtifactDetection |\n",
+ "| spikesorting_recording_v0 | sg.spikesorting.SpikeSortingRecording |\n",
+    "| spikesorting_recording_v1 | sg.spikesorting.v1.SpikeSortingRecording             |\n",
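+    "\n",
+    "For example, you can restrict `IntervalList` by this key. A sketch, assuming\n",
+    "the LFP V1 pipeline has been run on this file:\n",
+    "\n",
+    "```python\n",
+    "sgc.IntervalList & {\"pipeline\": \"lfp_v1\"}\n",
+    "```\n"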
]
},
{
@@ -2086,9 +2084,9 @@
"with _cascading deletes_. For example, if we delete our `Session` entry, all\n",
"associated downstream entries are also deleted (e.g. `Raw`, `IntervalList`).\n",
"\n",
- "_Note_: The deletion process can be complicated by \n",
+ "_Note_: The deletion process can be complicated by\n",
"[Merge Tables](https://lorenfranklab.github.io/spyglass/0.4/misc/merge_tables/)\n",
- "when the entry is referenced by a part table. To demo deletion in these cases, \n",
+ "when the entry is referenced by a part table. To demo deletion in these cases,\n",
"run the hidden code below.\n",
"\n",
"\n",
@@ -2113,20 +2111,23 @@
"lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)\n",
"lfp.v1.LFPV1().populate(lfp_key)\n",
"```\n",
+ "\n",
" \n",
"\n",
"Deleting Merge Entries
\n",
"\n",
"```python\n",
- "from spyglass.utils.dj_merge_tables import delete_downstream_merge\n",
+ "nwbfile = sgc.Nwbfile()\n",
"\n",
- "delete_downstream_merge(\n",
- " sgc.Nwbfile(),\n",
- " restriction={\"nwb_file_name\": nwb_copy_file_name},\n",
+ "(nwbfile & {\"nwb_file_name\": nwb_copy_file_name}).delete_downstream_merge(\n",
" dry_run=False, # True will show Merge Table entries that would be deleted\n",
- ") \n",
+ ")\n",
"```\n",
- " "
+ "\n",
+    "Please see the [Merge Tables notebook](./03_Merge_Tables.ipynb) for a more\n",
+    "detailed explanation.\n",
+ "\n",
+    "</details>\n"
]
},
{
@@ -2659,7 +2660,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Up Next"
+ "## Up Next\n"
]
},
{
diff --git a/notebooks/03_Merge_Tables.ipynb b/notebooks/03_Merge_Tables.ipynb
index 04cc6ba13..2d76867d8 100644
--- a/notebooks/03_Merge_Tables.ipynb
+++ b/notebooks/03_Merge_Tables.ipynb
@@ -66,8 +66,8 @@
"name": "stderr",
"output_type": "stream",
"text": [
- "[2023-10-12 11:15:17,864][INFO]: Connecting root@localhost:3306\n",
- "[2023-10-12 11:15:17,873][INFO]: Connected root@localhost:3306\n"
+ "[2024-01-29 16:15:00,903][INFO]: Connecting root@localhost:3309\n",
+ "[2024-01-29 16:15:00,912][INFO]: Connected root@localhost:3309\n"
]
}
],
@@ -328,7 +328,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "['merge_delete', 'merge_delete_parent', 'merge_fetch', 'merge_get_parent', 'merge_get_part', 'merge_html', 'merge_populate', 'merge_restrict', 'merge_view']\n"
+ "['merge_delete', 'merge_delete_parent', 'merge_fetch', 'merge_get_parent', 'merge_get_parent_class', 'merge_get_part', 'merge_html', 'merge_populate', 'merge_restrict', 'merge_restrict_class', 'merge_view']\n"
]
}
],
@@ -386,7 +386,7 @@
},
{
"cell_type": "code",
- "execution_count": 7,
+ "execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -415,7 +415,7 @@
},
{
"cell_type": "code",
- "execution_count": 8,
+ "execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -497,7 +497,7 @@
" (Total: 1)"
]
},
- "execution_count": 8,
+ "execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -510,7 +510,7 @@
},
{
"cell_type": "code",
- "execution_count": 9,
+ "execution_count": 8,
"metadata": {},
"outputs": [
{
@@ -521,11 +521,11 @@
" 'target_interval_list_name': '01_s1',\n",
" 'filter_name': 'LFP 0-400 Hz',\n",
" 'filter_sampling_rate': 30000,\n",
- " 'analysis_file_name': 'minirec20230622_JOV02AWW09.nwb',\n",
+ " 'analysis_file_name': 'minirec20230622_R5DWQ6S53S.nwb',\n",
" 'interval_list_name': 'lfp_test_01_s1_valid times',\n",
- " 'lfp_object_id': '340b9a0b-626b-40ca-8b48-e033be72570a',\n",
+ " 'lfp_object_id': 'ffb893d1-a31e-41d3-aec7-8dc8936c8898',\n",
" 'lfp_sampling_rate': 1000.0,\n",
- " 'lfp': filtered data pynwb.ecephys.ElectricalSeries at 0x139910624563552\n",
+ " 'lfp': filtered data pynwb.ecephys.ElectricalSeries at 0x129602752674544\n",
" Fields:\n",
" comments: no comments\n",
" conversion: 1.0\n",
@@ -540,7 +540,7 @@
" unit: volts}]"
]
},
- "execution_count": 9,
+ "execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
@@ -552,7 +552,7 @@
},
{
"cell_type": "code",
- "execution_count": 10,
+ "execution_count": 9,
"metadata": {},
"outputs": [
{
@@ -567,7 +567,7 @@
" 'filter_sampling_rate': 30000}"
]
},
- "execution_count": 10,
+ "execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
@@ -579,7 +579,7 @@
},
{
"cell_type": "code",
- "execution_count": 12,
+ "execution_count": 10,
"metadata": {},
"outputs": [
{
@@ -588,7 +588,7 @@
"True"
]
},
- "execution_count": 12,
+ "execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -616,7 +616,7 @@
},
{
"cell_type": "code",
- "execution_count": 14,
+ "execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -718,7 +718,7 @@
" (Total: 1)"
]
},
- "execution_count": 14,
+ "execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
@@ -730,7 +730,7 @@
},
{
"cell_type": "code",
- "execution_count": 15,
+ "execution_count": 12,
"metadata": {},
"outputs": [
{
@@ -824,9 +824,9 @@
"
01_s1 | \n",
"LFP 0-400 Hz | \n",
"30000 | \n",
- "minirec20230622_JOV02AWW09.nwb | \n",
+ "minirec20230622_R5DWQ6S53S.nwb | \n",
"lfp_test_01_s1_valid times | \n",
- "340b9a0b-626b-40ca-8b48-e033be72570a | \n",
+ "ffb893d1-a31e-41d3-aec7-8dc8936c8898 | \n",
"1000.0 | \n",
" \n",
" \n",
@@ -837,11 +837,11 @@
"FreeTable(`lfp_v1`.`__l_f_p_v1`)\n",
"*nwb_file_name *lfp_electrode *target_interv *filter_name *filter_sampli analysis_file_ interval_list_ lfp_object_id lfp_sampling_r\n",
"+------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+\n",
- "minirec2023062 test 01_s1 LFP 0-400 Hz 30000 minirec2023062 lfp_test_01_s1 340b9a0b-626b- 1000.0 \n",
+ "minirec2023062 test 01_s1 LFP 0-400 Hz 30000 minirec2023062 lfp_test_01_s1 ffb893d1-a31e- 1000.0 \n",
" (Total: 1)"
]
},
- "execution_count": 15,
+ "execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -861,7 +861,7 @@
},
{
"cell_type": "code",
- "execution_count": 16,
+ "execution_count": 13,
"metadata": {},
"outputs": [
{
@@ -870,7 +870,7 @@
"array([1000.])"
]
},
- "execution_count": 16,
+ "execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -890,7 +890,7 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 14,
"metadata": {},
"outputs": [
{
@@ -900,7 +900,7 @@
" array(['minirec20230622_.nwb'], dtype=object)]"
]
},
- "execution_count": 19,
+ "execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -912,7 +912,7 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -926,7 +926,7 @@
" 'filter_sampling_rate': 30000}"
]
},
- "execution_count": 20,
+ "execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -955,8 +955,8 @@
"2. use `merge_delete_parent` to delete from the parent sources, getting rid of\n",
" the entries in the source table they came from.\n",
"\n",
- "3. use `delete_downstream_merge` to find Merge Tables downstream and get rid\n",
- " full entries, avoiding orphaned master table entries.\n",
+ "3. use `delete_downstream_merge` to find Merge Tables downstream of any other\n",
+    "   table and get rid of full entries, avoiding orphaned master table entries.\n",
"\n",
"The two latter cases can be destructive, so we include an extra layer of\n",
"protection with `dry_run`. When true (by default), these functions return\n",
@@ -965,16 +965,100 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 16,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[2024-01-29 16:15:23,054][INFO]: Deleting 1 rows from `lfp_merge`.`l_f_p_output__l_f_p_v1`\n",
+ "[2024-01-29 16:15:23,058][INFO]: Deleting 1 rows from `lfp_merge`.`l_f_p_output`\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[2024-01-29 16:15:24,953][WARNING]: Deletes cancelled\n"
+ ]
+ }
+ ],
+ "source": [
+ "LFPOutput.merge_delete(nwb_file_dict) # Delete from merge table"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[FreeTable(`lfp_v1`.`__l_f_p_v1`)\n",
+ " *nwb_file_name *lfp_electrode *target_interv *filter_name *filter_sampli analysis_file_ interval_list_ lfp_object_id lfp_sampling_r\n",
+ " +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+\n",
+ " minirec2023062 test 01_s1 LFP 0-400 Hz 30000 minirec2023062 lfp_test_01_s1 ffb893d1-a31e- 1000.0 \n",
+ " (Total: 1)]"
+ ]
+ },
+ "execution_count": 17,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`delete_downstream_merge` is available from any other table in the pipeline,\n",
+ "but it does take some time to find the links downstream. If you're using this,\n",
+ "you can save time by reassigning your table to a variable, which will preserve\n",
+ "a copy of the previous search.\n",
+ "\n",
+ "Because the copy is stored, this function may not see additional merge tables\n",
+    "you've imported. To refresh this copy, set `reload_cache=True`.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "[16:15:37][INFO] Spyglass: Building merge cache for nwbfile.\n",
+ "\tFound 3 downstream merge tables\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "dict_values([[*nwb_file_name *analysis_file *lfp_electrode *target_interv *filter_name *filter_sampli *merge_id nwb_file_a analysis_f analysis_file_ analysis_p interval_list_ lfp_object_id lfp_sampling_r\n",
+ "+------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +------------+ +--------+ +--------+ +------------+ +--------+ +------------+ +------------+ +------------+\n",
+ "minirec2023062 minirec2023062 test 01_s1 LFP 0-400 Hz 30000 c34f98c5-7de7- =BLOB= =BLOB= =BLOB= lfp_test_01_s1 ffb893d1-a31e- 1000.0 \n",
+ " (Total: 1)\n",
+ "]])"
+ ]
+ },
+ "execution_count": 18,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
- "LFPOutput.merge_delete(nwb_file_dict) # Delete from merge table\n",
- "LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)\n",
- "delete_downstream_merge(\n",
- " table=LFPV1,\n",
- " restriction=nwb_file_dict,\n",
+ "nwbfile = sgc.Nwbfile()\n",
+ "\n",
+ "(nwbfile & nwb_file_dict).delete_downstream_merge(\n",
" dry_run=True,\n",
+ " reload_cache=False, # if still encountering errors, try setting this to True\n",
")"
]
},
@@ -982,8 +1066,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "To delete all merge table entries associated with an NWB file, use\n",
- "`delete_downstream_merge` with the `Nwbfile` table.\n"
+    "This function is run automatically when you use `cautious_delete`, which\n",
+ "checks team permissions before deleting.\n"
]
},
{
@@ -992,12 +1076,7 @@
"metadata": {},
"outputs": [],
"source": [
- "delete_downstream_merge(\n",
- " table=sgc.Nwbfile,\n",
- " restriction={\"nwb_file_name\": nwb_copy_file_name},\n",
- " dry_run=True,\n",
- " recurse_level=3, # for long pipelines with many tables\n",
- ")"
+ "(nwbfile & nwb_file_dict).cautious_delete()"
]
},
{
diff --git a/notebooks/py_scripts/01_Insert_Data.py b/notebooks/py_scripts/01_Insert_Data.py
index 908c93491..c1fec99a9 100644
--- a/notebooks/py_scripts/01_Insert_Data.py
+++ b/notebooks/py_scripts/01_Insert_Data.py
@@ -128,6 +128,7 @@
# -
# Spyglass will create a copy with this name.
+#
nwb_copy_file_name
@@ -155,9 +156,9 @@
#
sgc.LabMember.LabMemberInfo.insert(
- [ # Full name, Google email address, DataJoint username
- ["Firstname Lastname", "example1@gmail.com", "example1"],
- ["Firstname2 Lastname2", "example2@gmail.com", "example2"],
+ [ # Full name, Google email address, DataJoint username, admin
+ ["Firstname Lastname", "example1@gmail.com", "example1", 0],
+ ["Firstname2 Lastname2", "example2@gmail.com", "example2", 0],
],
skip_duplicates=True,
)
@@ -187,7 +188,6 @@
# ## Inserting from NWB
#
-#
# `spyglass.data_import.insert_sessions` helps take the many fields of data
# present in an NWB file and insert them into various tables across Spyglass. If
# the NWB file is properly composed, this includes...
@@ -199,6 +199,7 @@
#
# _Note:_ this may take time as Spyglass creates the copy. You may see a prompt
# about inserting device information.
+#
sgi.insert_sessions(nwb_file_name)
@@ -306,18 +307,17 @@
# Because it is a _secondary_ key, it is not required to uniquely identify an entry.
# Current values for this key from spyglass pipelines are:
#
-# | pipeline | Source|
-# | --- | --- |
-# | position | sg.common.PositionSource |
-# | lfp_v0 | sg.common.LFP |
-# | lfp_v1 | sg.lfp.v1.LFPV1 |
-# | lfp_band | sg.common.LFPBand,<br> sg.lfp.analysis.v1.LFPBandV1 |
-# | lfp_artifact | sg.lfp.v1.LFPArtifactDetection |
-# | spikesorting_artifact_v0 | sg.spikesorting.ArtifactDetection |
-# | spikesorting_artifact_v1 | sg.spikesorting.v1.ArtifactDetection |
-# | spikesorting_recording_v0 | sg.spikesorting.SpikeSortingRecording |
-# | spikesorting_recording_v1 | sg.spikesorting.v1.SpikeSortingRecording |
-#
+# | pipeline | Source |
+# | ------------------------- | --------------------------------------------------- |
+# | position | sg.common.PositionSource |
+# | lfp_v0 | sg.common.LFP |
+# | lfp_v1 | sg.lfp.v1.LFPV1 |
+# | lfp_band                  | sg.common.LFPBand,<br> sg.lfp.analysis.v1.LFPBandV1 |
+# | lfp_artifact | sg.lfp.v1.LFPArtifactDetection |
+# | spikesorting_artifact_v0 | sg.spikesorting.ArtifactDetection |
+# | spikesorting_artifact_v1 | sg.spikesorting.v1.ArtifactDetection |
+# | spikesorting_recording_v0 | sg.spikesorting.SpikeSortingRecording |
+# | spikesorting_recording_v1 | sg.spikesorting.v1.SpikeSortingRecording |
#
# ## Deleting data
@@ -355,20 +355,24 @@
# lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)
# lfp.v1.LFPV1().populate(lfp_key)
# ```
+#
# </details>
#
# <details><summary>Deleting Merge Entries</summary>
#
# ```python
-# from spyglass.utils.dj_merge_tables import delete_downstream_merge
+# nwbfile = sgc.Nwbfile()
#
-# delete_downstream_merge(
-# sgc.Nwbfile(),
-# restriction={"nwb_file_name": nwb_copy_file_name},
+# (nwbfile & {"nwb_file_name": nwb_copy_file_name}).delete_downstream_merge(
# dry_run=False, # True will show Merge Table entries that would be deleted
# )
# ```
+#
+# Please see the [Merge Tables notebook](./03_Merge_Tables.ipynb) for a more
+# detailed explanation.
+#
# </details>
+#
session_entry = sgc.Session & {"nwb_file_name": nwb_copy_file_name}
session_entry
@@ -418,6 +422,7 @@
# !ls $SPYGLASS_BASE_DIR/raw
# ## Up Next
+#
# In the [next notebook](./02_Data_Sync.ipynb), we'll explore tools for syncing.
#
diff --git a/notebooks/py_scripts/03_Merge_Tables.py b/notebooks/py_scripts/03_Merge_Tables.py
index c4c0abb48..33b8e9a0e 100644
--- a/notebooks/py_scripts/03_Merge_Tables.py
+++ b/notebooks/py_scripts/03_Merge_Tables.py
@@ -192,8 +192,8 @@
# 2. use `merge_delete_parent` to delete from the parent sources, getting rid of
# the entries in the source table they came from.
#
-# 3. use `delete_downstream_merge` to find Merge Tables downstream and get rid
-# full entries, avoiding orphaned master table entries.
+# 3. use `delete_downstream_merge` to find Merge Tables downstream of any other
+#    table and get rid of full entries, avoiding orphaned master table entries.
#
# The two latter cases can be destructive, so we include an extra layer of
# protection with `dry_run`. When true (by default), these functions return
@@ -201,23 +201,32 @@
#
LFPOutput.merge_delete(nwb_file_dict) # Delete from merge table
+
LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)
-delete_downstream_merge(
- table=LFPV1,
- restriction=nwb_file_dict,
- dry_run=True,
-)
-# To delete all merge table entries associated with an NWB file, use
-# `delete_downstream_merge` with the `Nwbfile` table.
+# `delete_downstream_merge` is available from any other table in the pipeline,
+# but it does take some time to find the links downstream. If you're using this,
+# you can save time by reassigning your table to a variable, which will preserve
+# a copy of the previous search.
#
+# Because the copy is stored, this function may not see additional merge tables
+# you've imported. To refresh this copy, set `reload_cache=True`.
+#
+
+# +
+nwbfile = sgc.Nwbfile()
-delete_downstream_merge(
- table=sgc.Nwbfile,
- restriction={"nwb_file_name": nwb_copy_file_name},
+(nwbfile & nwb_file_dict).delete_downstream_merge(
dry_run=True,
- recurse_level=3, # for long pipelines with many tables
+ reload_cache=False, # if still encountering errors, try setting this to True
)
+# -
+
+# This function is run automatically when you use `cautious_delete`, which
+# checks team permissions before deleting.
+#
+
+(nwbfile & nwb_file_dict).cautious_delete()
# ## Up Next
#
diff --git a/notebooks/py_scripts/11_Curation.py b/notebooks/py_scripts/11_Curation.py
index 8b75a9c76..25eb698ad 100644
--- a/notebooks/py_scripts/11_Curation.py
+++ b/notebooks/py_scripts/11_Curation.py
@@ -5,7 +5,7 @@
# extension: .py
# format_name: light
# format_version: '1.5'
-# jupytext_version: 1.15.2
+# jupytext_version: 1.16.0
# kernelspec:
# display_name: base
# language: python
diff --git a/src/spyglass/common/common_usage.py b/src/spyglass/common/common_usage.py
index 716649574..8b110cbc2 100644
--- a/src/spyglass/common/common_usage.py
+++ b/src/spyglass/common/common_usage.py
@@ -1,7 +1,7 @@
"""A schema to store the usage of advanced Spyglass features.
Records show usage of features such as table chains, which will be used to
-determine which features are used, how often, and by whom. This will help
+determine which features are used, how often, and by whom. This will help
plan future development of Spyglass.
"""
diff --git a/src/spyglass/utils/dj_merge_tables.py b/src/spyglass/utils/dj_merge_tables.py
index c0dec296f..b748267ad 100644
--- a/src/spyglass/utils/dj_merge_tables.py
+++ b/src/spyglass/utils/dj_merge_tables.py
@@ -54,16 +54,6 @@ def __init__(self):
)
self._source_class_dict = {}
- @property
- def source_class_dict(self) -> dict:
- if not self._source_class_dict:
- module = getmodule(self)
- self._source_class_dict = {
- part_name: getattr(module, part_name)
- for part_name in self.parts(camel_case=True)
- }
- return self._source_class_dict
-
def _remove_comments(self, definition):
"""Use regular expressions to remove comments and blank lines"""
return re.sub( # First remove comments, then blank lines
@@ -511,7 +501,7 @@ def merge_delete_parent(
def fetch_nwb(
self,
- restriction: str = True,
+ restriction: str = None,
multi_source=False,
disable_warning=False,
*attrs,
@@ -531,10 +521,7 @@ def fetch_nwb(
"""
if isinstance(self, dict):
raise ValueError("Try replacing Merge.method with Merge().method")
- if restriction is True and self.restriction:
- if not disable_warning:
- _warn_on_restriction(self, restriction)
- restriction = self.restriction
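+        # prefer the caller's restriction, then the table's own; True = unrestricted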
+ restriction = restriction or self.restriction or True
return self.merge_restrict_class(restriction).fetch_nwb()
diff --git a/src/spyglass/utils/dj_mixin.py b/src/spyglass/utils/dj_mixin.py
index 9889b8a40..11e80b025 100644
--- a/src/spyglass/utils/dj_mixin.py
+++ b/src/spyglass/utils/dj_mixin.py
@@ -1,6 +1,5 @@
-from collections.abc import Iterable
from time import time
-from typing import Dict, List, Union
+from typing import Dict, List
import datajoint as dj
import networkx as nx
@@ -58,6 +57,7 @@ class SpyglassMixin:
_merge_table_cache = {} # Cache of merge tables downstream of self
_merge_chains_cache = {} # Cache of table chains to merges
_session_connection_cache = None # Cache of path from Session to self
+ _test_mode_cache = None # Cache of test mode setting for delete
_usage_table_cache = None # Temporary inclusion for usage tracking
# ------------------------------- fetch_nwb -------------------------------
@@ -106,7 +106,9 @@ def _nwb_table_tuple(self):
self._nwb_table_resolved = (
AnalysisNwbfile
if "-> AnalysisNwbfile" in self.definition
- else Nwbfile if "-> Nwbfile" in self.definition else None
+ else Nwbfile
+ if "-> Nwbfile" in self.definition
+ else None
)
if getattr(self, "_nwb_table_resolved", None) is None:
@@ -440,8 +442,8 @@ def cautious_delete(self, force_permission: bool = False, *args, **kwargs):
if merge_deletes:
for table, content in merge_deletes.items():
- count, name = len(content), table.full_table_name
- dj_logger.info(f"Merge: Deleting {count} rows from {name}")
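+                # content is a list of part tables; total their rows for the log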
+ count = sum([len(part) for part in content])
+ dj_logger.info(f"Merge: Deleting {count} rows from {table}")
if (
not self._test_mode
or not safemode
@@ -519,7 +521,7 @@ def __str__(self):
if not self._has_link:
return "No link"
return (
- f"Chain: "
+ "Chain: "
+ self.parent.table_name
+ self._link_symbol
+ self.child.table_name