Image segmentation using custom neural network model with Tensorflow #9
Comments
In a more organized fashion:
An interesting challenge to check here is whether the construct of bounding box + mask can be used as a generic unit that e.g. Napari could use to dynamically read only the relevant chunks of the larger .zarr files for display. This could be particularly interesting in situations where the user can also filter out particular objects based on certain values, so that only the valid objects are loaded onto the canvas. Something going in the direction of the Napari Clusters Plotter.
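The chunk-selection idea above can be sketched in a few lines. This is a minimal, hypothetical example: the `chunks_for_bbox` helper, the object-table layout, the chunk shape, and the `area` filter are all illustrative assumptions, not Fractal or napari API.

```python
def chunks_for_bbox(bbox, chunk_shape):
    """Return (row, col) indices of every chunk that a bounding box touches.

    bbox is (y_min, x_min, y_max, x_max) in pixel coordinates (max exclusive);
    chunk_shape is the (height, width) of one zarr chunk.
    """
    y_min, x_min, y_max, x_max = bbox
    cy, cx = chunk_shape
    rows = range(y_min // cy, (y_max - 1) // cy + 1)
    cols = range(x_min // cx, (x_max - 1) // cx + 1)
    return [(r, c) for r in rows for c in cols]

# Hypothetical object table: label, bounding box, and a measurement to filter on.
objects = [
    {"label": 1, "bbox": (10, 10, 50, 50), "area": 1600},
    {"label": 2, "bbox": (200, 300, 260, 340), "area": 2400},
    {"label": 3, "bbox": (0, 0, 5, 5), "area": 25},
]

# Keep only objects passing the filter, then collect the chunks worth loading.
needed = set()
for obj in objects:
    if obj["area"] >= 100:
        needed.update(chunks_for_bbox(obj["bbox"], chunk_shape=(64, 64)))
```

Only the chunks in `needed` would then be requested from the zarr store, instead of the full label image.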
Can we move this discussion to fractal-analytics-platform/fractal-client#64 and close this one, or are there important differences? Also: I don't have access to https://github.com/fmi-basel/isit (in case it's useful).
@tcompa There is quite some overlap. I have now renamed this issue to reflect the difference: let's use it to track progress on the more complex case of the custom models at FMI. This is not a July goal. I think the first step will be some cleanup of those models and of the isit library that runs them. Once this is achieved, we can integrate the more complex network architectures into Fractal.
This is now covered by the (private) RDCNet task at FMI.
Currently, two different network architectures have been used for object segmentation: Cellpose and RDCNet. As a minimal working example, we should be able to take a pre-trained model from each of these architectures and use it to predict label maps. Note that Cellpose uses PyTorch, whereas RDCNet uses TensorFlow.
Cellpose should work when installed from the repository above; RDCNet has usually been run via ISIT.
The input is the zarr file, read in such a way that the images match those used for training, so that the prediction model can be applied to them. The output is a set of object labels, defined as regions that carry information about where they are located within the field grid. Each region comprises a segmented object plus a bounding box surrounding it.
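The region construct described above (per-object bounding box plus cropped mask) can be sketched as follows. This is a hedged example using NumPy only; the function name `regions_from_labels` and the dict layout are illustrative, not the actual Fractal output format.

```python
import numpy as np

def regions_from_labels(labels):
    """Split a 2D label image into per-object regions.

    Each region holds the object's bounding box within the full image grid
    plus the binary mask cropped to that box, so downstream code can place
    the object back without loading the whole label map.
    """
    regions = []
    for lbl in np.unique(labels):
        if lbl == 0:  # background
            continue
        ys, xs = np.nonzero(labels == lbl)
        y0, y1 = ys.min(), ys.max() + 1
        x0, x1 = xs.min(), xs.max() + 1
        regions.append({
            "label": int(lbl),
            "bbox": (int(y0), int(x0), int(y1), int(x1)),  # (y0, x0, y1, x1), max exclusive
            "mask": labels[y0:y1, x0:x1] == lbl,
        })
    return regions
```

Writing each region back is then a matter of indexing the full grid with the stored bounding box.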
Currently, the models most used with RDCNet rely on segmenting objects in maximum-intensity projections (MIPs) of each well/overview.
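A maximum-intensity projection is a simple reduction along the z axis. A minimal NumPy sketch (the toy array here stands in for the zarr-backed stack):

```python
import numpy as np

# A toy 3D stack with axes (z, y, x); in Fractal this would come from zarr.
stack = np.arange(2 * 3 * 4).reshape(2, 3, 4)

# The MIP collapses the stack to a single 2D image by taking, per (y, x)
# position, the maximum value across all z planes.
mip = stack.max(axis=0)
```

The resulting 2D `mip` image is what the segmentation model is applied to.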