Hi,
Thanks for the nice study in your extended TPAMI paper. I find it especially valuable that you compared all the Mask2Former-based models under the same training settings. However, some open questions remain after studying the paper and code closely.
My questions mainly concern the experimental results on the open-set task (StreetHazards dataset) in Table 3.
Why are there no closed- and open-set results for EAM and Maskomaly, given that you state you retrained all Mask2Former-based models with the same training settings?
On which data did you evaluate the closed-set performance? Could it be that those results are obtained on the val split?
The relatively good open-IoU of 41.4 for EAM, despite a very high FPR of 99.7, does not seem plausible to me. In contrast, RbA, with a significantly higher AuPRC and a similar (even lower) FPR95, yields a much lower open-IoU of 10.9.
You state that you fine-tune the model for 5,000 iterations. Which model weights do you use to report the performance in Table 3: the ones with the lowest validation loss (on the val set without anomalies) or the final checkpoint?
I was wondering whether you have integrated the other M2F-based models into your code base. If yes, do you foresee sharing the collection of all models?