Mitosis Detection Results: A Comparative Analysis of Published Pretrained Models and Reproduced Training Approaches Using Domain Adversarial RetinaNet
#5 · Open
shbkukuk opened this issue on Mar 11, 2024 · 1 comment
Hello, first we would like to express our gratitude for the opportunity to contribute to the advancement of mitosis detection and the associated challenges. We are currently engaged in scientific research focused on mitosis detection, and in our pursuit we have come across the MIDOG21 and MIDOG22 datasets, which are renowned for their extensive and diverse content.
Our attention was drawn to a proposed 'baseline' methodology known as Domain Adversarial RetinaNet, which we believe holds promise for our objectives. We endeavoured to employ this methodology by utilizing both the published pretrained model and conducting our own training and inference processes. During this endeavour, we ensured consistency by employing the same MIDOG slides for training purposes.
However, upon comparing the results obtained from the pretrained model and our own trained model, we encountered discrepancies. On the left, we have the results obtained from the published pretrained model, while on the right, we present the results from our trained model. In your paper [link], a threshold of 0.64 is mentioned, which we adhered to in both cases. Additionally, we meticulously examined the docker code for inference and the implementation for patching to ensure conformity in parameters and methodology.
Despite these efforts, the observed disparities persist, prompting us to seek clarification and possibly identify areas where our approach may require refinement. We are keen to address these challenges and improve the efficacy of the mitosis detection methodology. Below you can see our training metrics and losses; we did not observe anything unexpected, and the model appears to have fitted properly.
Our aim is to compare our proposed methodology with yours, so could you help us with this issue?
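For reference, our inference loop roughly follows the pattern below (heavily simplified; the patch size, overlap, and the model's output format are placeholders here, not the actual docker code):

```python
import torch

PATCH_SIZE = 512          # assumption: the real patch size may differ
OVERLAP = 64              # assumption: the real overlap may differ
SCORE_THRESHOLD = 0.64    # operating point quoted from the paper

def run_patchwise_inference(model, slide):
    """Tile the slide, run the detector on every patch, and keep boxes above the threshold.

    `slide` is assumed to be an RGB uint8 numpy array of shape (H, W, 3).
    """
    model.eval()
    detections = []
    step = PATCH_SIZE - OVERLAP
    height, width = slide.shape[:2]
    for y in range(0, max(height - PATCH_SIZE, 0) + 1, step):
        for x in range(0, max(width - PATCH_SIZE, 0) + 1, step):
            patch = slide[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
            # Preprocessing must mirror training exactly: RGB channel order, scaled to [0, 1].
            tensor = torch.from_numpy(patch).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                boxes, scores = model(tensor.unsqueeze(0))  # placeholder output format
            keep = scores > SCORE_THRESHOLD
            for box, score in zip(boxes[keep], scores[keep]):
                # Map patch-local coordinates back to slide coordinates.
                detections.append((box + torch.tensor([x, y, x, y]), float(score)))
    return detections
```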
From what we can see, the model apparently was fitted properly, but it is hard for us to tell from here for which reason it fails at inference.
Some suggestions might be:
Normalization is not set up properly (i.e., does not match the training setup)
The data loader uses BGR instead of RGB (a quick sanity check for these first two points is sketched below this list)
Model selection did not work properly
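As a way to rule out the first two points, you could compare a tensor produced by your inference loader against one from the training loader, roughly along these lines (a minimal sketch assuming OpenCV-based loading and ImageNet statistics, which may not match the actual pipeline):

```python
import cv2
import numpy as np
import torch

# ImageNet statistics are an assumption here; whatever the training pipeline
# actually used must be reproduced exactly at inference time.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def load_patch(path):
    """Load an image the same way the training data loader does."""
    img = cv2.imread(path)                       # OpenCV returns BGR
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # easy to forget: convert to RGB
    img = img.astype(np.float32) / 255.0
    img = (img - MEAN) / STD                     # normalize with the *training* statistics
    return torch.from_numpy(img).permute(2, 0, 1)  # HWC -> CHW
```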
Please keep in mind that the threshold was optimized on the validation set, and 0.64 is only the suitable value for our model; yours will very likely be different. This threshold also depends heavily on the sampling scheme you use and on the loss function.
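To find the operating point for your own model, you can sweep candidate thresholds over your validation detections and keep the one with the best F1, roughly like this (a simplified stand-in for the real evaluation code, which matches detections to annotations first):

```python
import numpy as np

def best_threshold(scores, is_true_positive, n_ground_truth):
    """Sweep score thresholds and pick the one maximizing F1 on the validation set.

    `scores` and `is_true_positive` describe already-matched detections (one entry
    per detection); `n_ground_truth` is the number of annotated mitotic figures.
    """
    scores = np.asarray(scores)
    is_true_positive = np.asarray(is_true_positive, dtype=bool)
    best_f1, best_t = 0.0, 0.5
    for t in np.arange(0.05, 0.96, 0.01):
        keep = scores >= t
        tp = int(np.sum(is_true_positive & keep))
        fp = int(np.sum(~is_true_positive & keep))
        fn = n_ground_truth - tp
        f1 = 2 * tp / max(2 * tp + fp + fn, 1)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_t, best_f1
```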
To us, it looks as if the non-maximum suppression is not working as intended. You might also want to evaluate the loss itself on the hold-out set, just to see whether the data loader behaves the same way in both cases, or run inference on the training set, which should yield overly optimistic but sensible results.
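If you want to verify the suppression step in isolation, torchvision's nms can serve as a reference to compare your post-processing against (the IoU threshold here is an assumption, not necessarily the value used in our docker):

```python
import torch
from torchvision.ops import nms

def postprocess(boxes, scores, score_threshold, iou_threshold=0.5):
    """Apply the score cut first, then non-maximum suppression on the survivors.

    `boxes` is an (N, 4) tensor in (x1, y1, x2, y2) format and `scores` an (N,) tensor.
    """
    keep = scores >= score_threshold
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_threshold)  # indices of boxes that survive suppression
    return boxes[kept], scores[kept]
```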