Handling pseudo 3D information #122
Hello @Marius1311, we currently don't support 3D cellpose segmentation, but this is definitely something I can work on! That said, my next month is really busy, so I can't start before late October. I hope this sounds reasonable.
Thanks for your reply @quentinblampey! Looking forward to this :)
Sorry to revive this, but indeed Baysor already works for 3D. However, reading back the cell polygons is a bit trickier (at least for me), so if sopa can handle 3D at some point, that'd be perfect 😎
Hi @lguerard, sorry for the delay; I forgot to answer.
Is your feature request related to a problem? Please describe.
Cellpose natively supports 3D segmentation; so far we have used it in "pseudo-3D mode", where each z-slice is segmented separately and the resulting masks are stitched together, so that transcripts can be aggregated across the z-stack (see the cellpose docs).
Is there any way to do this through Sopa? We would probably have to load the entire z-stack of images, as the model needs access to all of them. In our case, we have MERSCOPE data, and we can visually see that cells shift slightly as we move across the z-stack, so it seems important to segment each z-slice separately, rather than just using the center slice and ignoring the z-coordinate of the transcripts.
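For reference, cellpose itself exposes this pseudo-3D mode via the `stitch_threshold` argument of `model.eval` (with `do_3D=False`), which matches masks between consecutive slices by IoU. Since sopa doesn't expose it yet, here is a minimal numpy sketch of the stitching step itself; `stitch_masks` and its `iou_threshold` parameter are illustrative names, not part of any library API:

```python
import numpy as np

def stitch_masks(masks, iou_threshold=0.25):
    """Stitch independently labeled 2D slice masks into a pseudo-3D segmentation.

    masks: (Z, Y, X) integer array, each z-slice labeled independently (0 = background).
    Returns a (Z, Y, X) array with labels made consistent across z.
    """
    stitched = np.zeros_like(masks)
    stitched[0] = masks[0]
    next_label = int(masks[0].max()) + 1
    for z in range(1, masks.shape[0]):
        prev, cur = stitched[z - 1], masks[z]
        mapping = {}
        for lbl in np.unique(cur):
            if lbl == 0:
                continue
            cur_pix = cur == lbl
            # labels in the previous slice that overlap this cell, with overlap sizes
            overlap_labels, counts = np.unique(prev[cur_pix], return_counts=True)
            best, best_iou = 0, 0.0
            for ol, cnt in zip(overlap_labels, counts):
                if ol == 0:
                    continue
                union = cur_pix.sum() + (prev == ol).sum() - cnt
                iou = cnt / union
                if iou > best_iou:
                    best, best_iou = ol, iou
            if best_iou >= iou_threshold:
                # same cell as in the previous slice: reuse its label
                mapping[lbl] = best
            else:
                # no sufficient overlap: treat as a newly appearing cell
                mapping[lbl] = next_label
                next_label += 1
        out = np.zeros_like(cur)
        for lbl, new in mapping.items():
            out[cur == lbl] = new
        stitched[z] = out
    return stitched
```

Once labels are consistent across z, transcripts from every slice can be assigned to the stitched label at their (z, y, x) position and summed per cell, which is exactly the "aggregate across the z-stack" step described above.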