Low verbosity error 1217 on delete prevents cascade #638
Comments
@samuelbray32 this probably won't solve it, but did you try not passing the dictionary to delete?
I was able to delete Sharon's file when she had this issue, so one possibility is that this is a permissions issue. The other thought is that experiment_description is varchar(2000), which is a problem with the current MySQL. The permissions issue seems more likely, though.
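If it is a permissions issue, a quick check is to list the current user's MySQL grants over datajoint's connection; a minimal sketch:

import datajoint as dj

# Print the grants for the logged-in user to see whether DELETE is
# missing on any schema involved in the cascade
for (grant,) in dj.conn().query("SHOW GRANTS FOR CURRENT_USER()").fetchall():
    print(grant)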
The alternative delete call also didn't work; attempting to delete from Nwbfile caused the same error. Also, I mistyped in the initial description when I said upstream. I've edited that now.
My current hypothesis is that it is the …
Hi, in case it's useful: I haven't populated these alison schemas or interacted with them at all in this timeframe, so it would have to be something about how they're instantiated, I suppose. (There are likely some dependencies on Session, as expected, but I'd expect others have custom schemas that depend on Session as well.)
To test the permissions part, @samuelbray32, can you reinsert your test file and try to delete it? Upon failure, I think @acomrie can try to delete it, which should work.
Possibly caused by #641
There's another entry …
Created an issue for datajoint here.
Hi everyone. I might have a similar issue here. I have some data processed last year (2022) that I cannot open now. Relevant files: …
This works for data processed this year.
If you try to delete old entries in IntervalLinearizedPosition(), you get the same error: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails').
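For reference, a minimal sketch of the failing call described above; the key value is hypothetical, and the import path may differ across spyglass versions:

from spyglass.common import IntervalLinearizedPosition  # path may vary by version

key = {"nwb_file_name": "example20220101_.nwb"}  # hypothetical 2022 entry
# On old entries this raises:
# IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')
(IntervalLinearizedPosition & key).delete()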
Hi @shijiegu - Are you seeing Error 1217 with …
@shijiegu, how up to date is your spyglass?
@edeno I think you might be right. I only updated Spyglass this year in June...
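If helpful, a quick way to check the installed versions (a sketch; assumes spyglass exposes __version__ in this install):

import datajoint as dj
import spyglass

print("datajoint:", dj.__version__)
print("spyglass:", spyglass.__version__)  # assumption: attribute exists in this install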
@CBroz1's PR will hopefully fix this in datajoint when it is merged: datajoint/datajoint-python#1112
Closing with datajoint/datajoint-python#1112
Could you give a little more context for what the error was? |
The error was the same 1217 integrity error as previously encountered. This occurred for …
Just to update from discussions with @CBroz1: so far as we know, this issue should not affect people newly setting up spyglass. It is currently only a problem with the Frank Lab database.
I've (a) reached out to our db admin to request that logs be turned on to better monitor occurrences, (b) reached out to a member of the datajoint team for insights, and (c) posted to Stack Overflow for community input.
I have continued to pursue solutions, as described in the post here.
This list details which tables do and do not permit cascading deletes for low-privilege users. Attempting to delete from each table resulted in one of three outcomes...
This issue persists: … gives: …
Hi! I see the various attempts above, but at the end of the day, if a researcher wants to delete an entry, they are supposed to be able to do it. As it stands, I need to traverse all child tables myself to find a potential blocking table.
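For anyone else hunting for the blocker, a minimal sketch of traversing the children programmatically with datajoint's dependency graph rather than by hand (assuming the delete starts from Session):

import datajoint as dj
from spyglass.common import Session

dj.conn().dependencies.load()  # build the foreign-key dependency graph
# descendants() lists the full table name of every table downstream of
# Session; checking delete permissions on each can reveal the blocker.
for table_name in Session().descendants():
    print(table_name)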
As we've discussed in lab meetings, this is a permissions issue specific to our implemented database, and we have collectively agreed on the working solution of escalating a user's permissions as needed. If you are experiencing this error, please reach out to the database admins with the username(s) experiencing the issue to have their permissions altered.
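For the admins, a minimal sketch of the kind of escalation described; the user and schema names here are hypothetical, and this must be run by an account with GRANT privileges:

import datajoint as dj

username = "example_user"  # hypothetical account hitting error 1217
# Grant full privileges on one shared schema so cascaded deletes can
# proceed; '%%' escapes a literal '%' for the driver's placeholder handling.
dj.conn().query(
    f"GRANT ALL PRIVILEGES ON `common_session`.* TO '{username}'@'%%'"
)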
Thanks Sam. I vaguely remember this discussion spanning multiple lab meetings, and I also recall a suggestion to implement a special delete function to call in an event like this. I need to point out that no one has a perfect memory of every database issue; a written resolution on how to proceed temporarily, especially in open issues, would be very helpful.
As a second note, the scattered comments above do not fully explain why I need to traverse child tables, because the principle of DataJoint is that child tables are subject to their parent tables. I could guess that a downstream table's delete privilege differs from its parents' because it is a more precious table, like curated spike sorting, but the reason is not fully spelled out. To better help users fix their own problems, some summarized knowledge from this kind of long, open issue would be helpful too.
Hi @shijiegu, please review the Code of Conduct and come talk to me. |
In a further attempt to explore this issue, I dropped all schemas on our development server, declared empty tables with datajoint, and then loaded a dump of the production data using the script below.
#!/bin/bash
# MySQL credentials
USER="cbroz"
PROD_PASS="redacted"
PROD_HOST="redacted"
DEV_PASS="redacted"
DEV_HOST="redacted"
OUTPUT_DIR="/home/cbroz/wrk/alt/common_dump"
DATABASE_LIST="temp-dump-list.txt"
DUMP_FILE="all_common_databases.sql"
mkdir -p "$OUTPUT_DIR" # Create output directory if it doesn't exist
DATABASES=$(tr '\n' ' ' < "$DATABASE_LIST") # Load text list of databases
echo "Dumping databases: $DATABASES"
# dump all, skipping drop and create info
mysqldump \
-u "$USER" -p"$PROD_PASS" -h "$PROD_HOST" \
--skip-add-drop-table \
--databases $DATABASES > "$OUTPUT_DIR/$DUMP_FILE"
if [ $? -eq 0 ]; then
echo "Successfully dumped databases: $DATABASES"
else
echo "Error dumping databases: $DATABASES"
fi
sed -i 's/CREATE TABLE /CREATE TABLE IF NOT EXISTS /g' "$OUTPUT_DIR/$DUMP_FILE"
sed -i 's/INSERT INTO /INSERT IGNORE INTO /g' "$OUTPUT_DIR/$DUMP_FILE"
echo "Loading databases from $DUMP_FILE"
mysql\
-u "$USER" -p"$DEV_PASS" -h "$DEV_HOST" \
< "$OUTPUT_DIR/$DUMP_FILE" |
This script reports which tables have the greatest discrepancy between allocated varchar space and actual usage.

from functools import cached_property

import datajoint as dj
from tqdm import tqdm

from spyglass.utils.database_settings import SHARED_MODULES

schema = dj.schema("cbroz_temp")


@schema
class AllTables(dj.Manual):
    definition = """
    table : varchar(255)
    """

    @cached_property
    def all_tables(self):
        shared_schemas = [
            s for s in dj.list_schemas() if s.split("_")[0] in SHARED_MODULES
        ]
        all_tables = []
        for schema_name in shared_schemas:
            print(f"Checking schema: {schema_name}")
            schema_tables = dj.Schema(schema_name).list_tables()
            all_tables.extend(f"{schema_name}.{table}" for table in schema_tables)
        return all_tables

    def process(self):
        processed = self.fetch("table")
        inserts = []
        for table in tqdm(self.all_tables, desc="Processing tables"):
            if table in processed:
                continue
            tqdm.write(f"Inserting table: {table}")
            inserts.append({"table": table})
        self.insert(inserts)


@schema
class KeyLenChecker(dj.Computed):
    definition = """
    -> AllTables
    field='': varchar(255)
    ---
    alloc=0 : int
    max=0 : int
    """

    ft_cache = {}

    def get_ft(self, table):
        if table not in self.ft_cache:
            self.ft_cache[table] = dj.FreeTable(dj.conn(), table)
        return self.ft_cache[table]

    def make(self, key):
        table = key["table"]
        ft = self.get_ft(table)  # reuse the cached FreeTable
        parent_fks = []  # Reduce duplicate fields
        for parent in ft.parents(as_objects=True):
            parent_fks.extend(parent.primary_key)
        alloc = {  # Get allocated space for varchar fields
            k: v.type.split("(")[1].split(")")[0]
            for k, v in ft.heading.attributes.items()
            if v.in_key and k not in parent_fks and "varchar" in v.type
        }
        if not alloc:
            self.insert1({"table": table})
            return
        try:
            max_lens = (
                dj.U()
                .aggr(ft, **{k: f"MAX(CHAR_LENGTH({k}))" for k in alloc.keys()})
                .fetch1()
            )
        except Exception as e:
            print(f"Error: {e}")
            return
        self.insert(
            [
                {"table": table, "field": k, "alloc": v, "max": max_lens[k]}
                for k, v in alloc.items()
            ]
        )


if __name__ == "__main__":
    # AllTables().process()  # uncomment if new tables are added
    kl = KeyLenChecker()
    kl.populate(display_progress=True)
    print(
        (kl & 'max > 1').proj("max", diff="alloc-max")
        & dj.Top(limit=20, order_by="diff DESC")
    )
A note for future debugging: updates in … now fail with AttributeError: 'NoneType' object has no attribute 'groupdict'. Full error stack:
IntegrityError Traceback (most recent call last)
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/table.py:519, in Table.delete.<locals>.cascade(table)
518 try:
--> 519 delete_count = table.delete_quick(get_count=True)
520 except IntegrityError as error:
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/table.py:474, in Table.delete_quick(self, get_count)
473 query = "DELETE FROM " + self.full_table_name + self.where_clause()
--> 474 self.connection.query(query)
475 count = (
476 self.connection.query("SELECT ROW_COUNT()").fetchone()[0]
477 if get_count
478 else None
479 )
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/connection.py:343, in Connection.query(self, query, args, as_dict, suppress_warnings, reconnect)
342 try:
--> 343 self._execute_query(cursor, query, args, suppress_warnings)
344 except errors.LostConnectionError:
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/connection.py:299, in Connection._execute_query(cursor, query, args, suppress_warnings)
298 except client.err.Error as err:
--> 299 raise translate_query_error(err, query)
IntegrityError: Cannot delete or update a parent row: a foreign key constraint fails
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Cell In[29], line 1
----> 1 (CuratedSpikeSorting & auto_curation_out_key).cautious_delete()
File ~/Src/spyglass/src/spyglass/utils/dj_mixin.py:503, in SpyglassMixin.cautious_delete(self, force_permission, dry_run, *args, **kwargs)
496 if dry_run:
497 return (
498 IntervalList(), # cleanup func relies on downstream deletes
499 external["raw"].unused(),
500 external["analysis"].unused(),
501 )
--> 503 super().delete(*args, **kwargs) # Confirmation here
505 for ext_type in ["raw", "analysis"]:
506 external[ext_type].delete(
507 delete_external_files=True, display_progress=False
508 )
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/table.py:623, in Table.delete(self, transaction, safemode, force_parts, force_masters)
621 # Cascading delete
622 try:
--> 623 delete_count = cascade(self)
624 except:
625 if transaction:
File ~/miniforge3/envs/spyglass_sort/lib/python3.9/site-packages/datajoint/table.py:521, in Table.delete.<locals>.cascade(table)
519 delete_count = table.delete_quick(get_count=True)
520 except IntegrityError as error:
--> 521 match = foreign_key_error_regexp.match(error.args[0]).groupdict()
522 # if schema name missing, use table
523 if "`.`" not in match["child"]:
AttributeError: 'NoneType' object has no attribute 'groupdict'
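To make the failure mode concrete, a minimal sketch with a simplified stand-in for datajoint's foreign_key_error_regexp (the real pattern lives in datajoint/table.py): when the server omits the constraint details, match() returns None and the subsequent .groupdict() call raises the AttributeError above.

import re

# Simplified stand-in for datajoint's foreign_key_error_regexp; cascade()
# expects named groups like 'child' to be recoverable from the message.
pattern = re.compile(
    r"Cannot delete or update a parent row: a foreign key constraint fails "
    r"\((?P<child>`[^)]+`), CONSTRAINT `(?P<name>[^`]+)`"
)

# With low verbosity (e.g. the user lacks privileges on the child table),
# MySQL truncates the message after 'fails', so nothing matches:
msg = "Cannot delete or update a parent row: a foreign key constraint fails"
match = pattern.match(msg)
print(match)  # None -> match.groupdict() raises the AttributeError seen above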
Describe the bug
Attempting to delete an entry within the Session table results in the following error:
IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails')
To Reproduce
Steps to reproduce the behavior:
(Session & {"nwb_file_name": 'samtest20230817_.nwb'}).delete({"nwb_file_name": 'samtest20230817_.nwb'})
Expected behavior
The entry and all downstream entries should be removed.
Additional context
The only thing done with this nwb_file was insertion into spyglass, if that helps narrow down potential schema permission errors.