
[BUG]: Delete collection resource leak (single-node Chroma) #3297

Open
wants to merge 3 commits into base: main

Conversation

tazarov
Contributor

@tazarov tazarov commented Dec 13, 2024

Description of changes

Closes #3296

The delete-collection logic changes slightly to accommodate the fix without breaking the transactional integrity of self._sysdb.delete_collection. chromadb.segment.SegmentManager.delete_segments had to change to accept the list of segments to delete instead of a collection_id.

[image]
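For readers skimming the thread, here is a minimal sketch of the flow described above. It is illustrative only: the helper name and the Any-typed parameters are placeholders, and the actual change lives in the delete_collection hunks reviewed below.

    from typing import Any, Sequence
    from uuid import UUID

    def delete_collection_sketch(
        sysdb: Any, manager: Any, collection_id: UUID, tenant: str, database: str
    ) -> None:
        """Illustrative only: call order and signatures follow the description above."""
        # Fetch the segment records before their rows disappear.
        segments: Sequence[Any] = sysdb.get_segments(collection=collection_id)
        # Collection and segment rows are deleted in one atomic sysdb operation.
        sysdb.delete_collection(collection_id, tenant=tenant, database=database)
        # Release the locally held segment resources (file handles, HNSW directories);
        # per the description, delete_segments now takes the segment list, not a collection_id.
        manager.delete_segments(segments)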

Summarize the changes made by this PR.

  • Improvements & Bug fixes
    • Fixes the resource leak when deleting a collection

Test plan

How are these changes tested?

  • Tests pass locally with pytest for python, yarn test for js, cargo test for rust

Documentation Changes

N/A


Reviewer Checklist

Please leverage this checklist to ensure your code review is thorough before approving

Testing, Bugs, Errors, Logs, Documentation

  • Can you think of any use case in which the code does not behave as intended? Have they been tested?
  • Can you think of any inputs or external events that could break the code? Is user input validated and safe? Have they been tested?
  • If appropriate, are there adequate property based tests?
  • If appropriate, are there adequate unit tests?
  • Should any logging, debugging, tracing information be added or removed?
  • Are error messages user-friendly?
  • Have all documentation changes needed been made?
  • Have all non-obvious changes been commented?

System Compatibility

  • Are there any potential impacts on other parts of the system or backward compatibility?
  • Does this change intersect with any items on our roadmap, and if so, is there a plan for fitting them together?

Quality

  • Is this code of unexpectedly high quality (readability, modularity, intuitiveness)?

Contributor Author

tazarov commented Dec 13, 2024

@tazarov tazarov added bug Something isn't working Local Chroma An improvement to Local (single node) Chroma labels Dec 16, 2024
@tazarov tazarov force-pushed the trayan-12-13-fix_delete_collection_resource_leak branch 2 times, most recently from 2e113a0 to b53dadb on January 3, 2025 07:48
Contributor

@rohitcpbot rohitcpbot left a comment


Thanks for identifying the leak and raising the fix. I did not see this earlier, so I did not review it sooner; my miss. Reviewed it now.

@@ -384,10 +384,11 @@ def delete_collection(
         )

         if existing:
+            segments = self._sysdb.get_segments(collection=existing[0].id)
Contributor


I feel we should try not to call the sysdb for getting segments. It adds an extra call to the backend for distributed Chroma.

Looking at the current code, I see we are already calling sysdb.get_segments() from the manager, so you are simply moving that line here and not adding extra calls. But I feel we can do better.

Do you think we should just call delete_segment() from delete_collection()?
That way we can add this snippet back:

   for s in self._manager.delete_segments(existing[0]["id"]):
       self._sysdb.delete_segment(s)

and do a no-op inside delete_segments() in db/impl/grpc/client.py.
Will that fix the leak?

Contributor Author


@rohitcpbot,

using this snippet:

for s in self._manager.delete_segments(existing[0]["id"]):
    self._sysdb.delete_segment(s)

Makes sense; however, we revert to a non-atomic deletion of sysdb resources. In the above snippet we'd delete the segments separately from deleting the collection, which I deliberately wanted to avoid. That is why I pulled the fetch of the segments up here, before they are atomically deleted as part of self._sysdb.delete_collection.

Why do you think that this would cause extra calls in the distributed backend?

@tazarov tazarov force-pushed the trayan-12-13-fix_delete_collection_resource_leak branch from b53dadb to ba07228 on January 8, 2025 08:21
@@ -77,8 +83,7 @@ def prepare_segments_for_new_collection(

     @override
     def delete_segments(self, collection_id: UUID) -> Sequence[UUID]:
-        segments = self._sysdb.get_segments(collection=collection_id)
-        return [s["id"] for s in segments]
+        return []  # noop
Contributor Author


@HammadB, I talked with @rohitcpbot and he mentioned that this should be a noop. Is this fine, or should I revert to the older version with the distributed sysdb query?

@@ -384,10 +384,10 @@ def delete_collection(
        )

        if existing:
            self._manager.delete_segments(collection_id=existing[0].id)
Contributor Author


@rohitcpbot, this is the actual change, as we discussed; the rest is just black formatting changes.

Contributor


Thanks @tazarov.
If possible, leave a note with the following comment or similar:
"""
This call will delete segment related data that is stored locally and cannot be part of the atomic SQL transaction.
It is a NoOp for the distributed sysdb implementation.
Omitting this call will lead to leak of segment related resources.
"""

Can you answer something for me: if the process crashes immediately after self._manager.delete_segments(collection_id=existing[0].id),
then the actual entries in SQL are not deleted, which means the collection is not deleted.
Now, if the user issues a Get or Query, will the local manager work correctly?

Contributor


If the fix to make the local manager work in the above failure scenario is non-trivial, then we could leave a note here and take it up as a separate task. But it would be good to know the state of the DB with the above change.

The same scenario would have had to be thought through even with your earlier change of doing the local manager delete after the sysdb delete, where the SQL could have gone through but the local manager delete did not because of a crash, leading to a leak.

Contributor Author

@tazarov tazarov Jan 9, 2025


@rohitcpbot, the local manager has two segments for each collection:

  • sqlite - its def delete(self) -> None: will actually delete the segment from segments
  • hnsw - it will delete the directory where the HNSW index is stored, but will not delete the segment from the segments dir

So here is a diagram to explain the point of failure:

[diagram]

The main problem as I see it in the current impl (with possible solutions):

[diagram]

Contributor Author


I think the only foolproof way to remove it all is to wrap everything in a single transaction, all the way from the segment: if the physical dir removal fails, we roll back the whole sqlite transaction.

As a side note, on Windows deleting the segment dir right after closing file handles frequently fails.
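As a rough, self-contained illustration of that idea (not Chroma's actual code; the table names and the helper are made up), a sqlite3 connection used as a context manager rolls back the row deletions if the physical directory removal raises:

    import shutil
    import sqlite3
    from pathlib import Path

    def delete_collection_atomically(
        conn: sqlite3.Connection, collection_id: str, segment_dir: Path
    ) -> None:
        """Sketch: roll back the SQL deletes if the local cleanup fails."""
        with conn:  # sqlite3 commits on success, rolls back on any exception
            conn.execute("DELETE FROM segments WHERE collection = ?", (collection_id,))
            conn.execute("DELETE FROM collections WHERE id = ?", (collection_id,))
            # If this raises (e.g. an open file handle on Windows), the exception
            # propagates and both DELETEs above are rolled back.
            shutil.rmtree(segment_dir)

A rollback restores the rows but cannot restore files that rmtree already removed, which is why the filesystem step goes last here.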

@tazarov tazarov requested a review from rohitcpbot January 8, 2025 16:34
@rohitcpbot
Contributor

Thanks @tazarov, I left a comment asking you to add a code comment, and also a question. We should be good to merge after that.

@tazarov tazarov force-pushed the trayan-12-13-fix_delete_collection_resource_leak branch from 2b21771 to 9fd2b3c on January 9, 2025 18:52
@tazarov tazarov force-pushed the trayan-12-13-fix_delete_collection_resource_leak branch from 9fd2b3c to 788a07f on January 9, 2025 19:00
@tazarov
Contributor Author

tazarov commented Jan 9, 2025

Thanks @tazarov, I left a comment asking you to add a code comment, and also a question. We should be good to merge after that.

After some deliberation, I think a good course of action is to make it all atomic by having sysdb.delete_collection take a callback that removes the physical resources for single-node Chroma and is a noop for distributed. Here is how this could look:

self._sysdb.delete_collection(
    existing[0].id,
    # callback that removes the physical resources (noop for distributed)
    lambda collection_id: self._manager.delete_segments(collection_id=collection_id),
    tenant=tenant,
    database=database,
)

Inside delete_collection we'll invoke the callback at the end of the transaction (or at the beginning), but the gist is that any failure to remove the segments will cause the deletion of the collection and its segments to be rolled back.
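To make the proposed rollback behaviour concrete, here is a rough, self-contained sketch of what a callback-taking delete_collection could look like on the sqlite side, reusing the same placeholder schema as the sketch a few comments up (the parameter name on_delete and the table names are placeholders, not Chroma's actual API):

    import sqlite3
    from typing import Callable, Optional
    from uuid import UUID

    def delete_collection(
        conn: sqlite3.Connection,
        collection_id: UUID,
        on_delete: Optional[Callable[[UUID], None]] = None,
    ) -> None:
        """Sketch: the cleanup callback runs inside the same SQL transaction."""
        with conn:  # commits on success, rolls back on any exception
            conn.execute("DELETE FROM segments WHERE collection = ?", (str(collection_id),))
            conn.execute("DELETE FROM collections WHERE id = ?", (str(collection_id),))
            if on_delete is not None:
                # Single-node: remove local segment resources; distributed: a noop.
                # If it raises, the DELETEs above are rolled back with it.
                on_delete(collection_id)

Whether the callback runs at the start or the end of the transaction mainly changes what a concurrent reader can observe; either way, an exception from the callback rolls the SQL deletes back.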

Wdyt?

Labels
bug (Something isn't working), Local Chroma (An improvement to Local (single node) Chroma)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Bug]: Resource leak in delete_collection (Single-Node Chroma)
2 participants