I'm running the GradCAM function on a pretrained model for a given image, and every time I generate the heatmap I get a different one. This happens whether or not I use the guided gradients option. The heatmaps are very different, too.
Is this expected?
If you run this you'll notice the augmentation layers screw up the GradCAM output, because they are active by default once you add them to the model. They have a "training" parameter that defaults to True and only gets turned off during predictions or evaluations, but the forward pass performed by explainer.explanation() doesn't count as a prediction/evaluation, so the augmentation layers stay active and randomize every pass.
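Here's a minimal sketch of the mechanism (assuming TF 2.6+, where the preprocessing layers live directly under tf.keras.layers; in earlier versions they sit under tf.keras.layers.experimental.preprocessing): when the augmentation layer runs in training mode the forward pass changes from call to call, while predict() / an explicit training=False call is deterministic.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy model with an augmentation layer in front of the conv stack.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.RandomFlip("horizontal")(inputs)
x = layers.Conv2D(4, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2)(x)
model = tf.keras.Model(inputs, outputs)

image = np.random.rand(1, 32, 32, 3).astype("float32")

# With the augmentation layer in training mode the forward pass is random,
# so two identical calls give different outputs (and different heatmaps).
print(model(image, training=True).numpy())
print(model(image, training=True).numpy())

# predict() / an explicit training=False call skips the random flip,
# so these two are deterministic.
print(model.predict(image))
print(model(image, training=False).numpy())
```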
The only workarounds I've found so far are mangling the already-trained model to remove the augmentation layers, or manually redefining the model and explicitly passing "training = False" to the relevant layers; a rough sketch of the first approach is below.
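This is only a sketch of the "remove the augmentation layers" route, assuming a simple linear stack of layers (no branches) and TF 2.6+ layer names; strip_augmentation and AUGMENTATION_TYPES are just names I made up:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Augmentation layer types to drop (adjust to whatever your model uses).
AUGMENTATION_TYPES = (layers.RandomFlip, layers.RandomRotation, layers.RandomZoom)

def strip_augmentation(model):
    """Rebuild the model without its augmentation layers.

    Only works for a plain linear stack of layers; the original layers
    are reused as-is, so the trained weights are preserved.
    """
    inputs = tf.keras.Input(shape=model.input_shape[1:])
    x = inputs
    for layer in model.layers:
        if isinstance(layer, (layers.InputLayer,) + AUGMENTATION_TYPES):
            continue  # skip the input layer and any random-augmentation layer
        x = layer(x)
    return tf.keras.Model(inputs, x)

# clean_model = strip_augmentation(model)  # then run GradCAM on clean_model
```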
Neither of these solutions feels good, though, because both require reconstructing the model in some way. It would be ideal to fix this using only the pretrained model somehow. I also suspect the same issue affects Batch Normalization layers, which have the same "training" parameter.
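That said, if you can write the Grad-CAM pass yourself (or the explainer lets you hook into the forward call), explicitly passing training=False when calling the model should propagate down to the augmentation and BatchNormalization sublayers without rebuilding anything. A rough sketch under that assumption, where conv_layer_name and class_index are placeholders for your model:

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Grad-CAM heatmap with the forward pass forced into inference mode.

    `image` is a single unbatched image array/tensor; `conv_layer_name`
    and `class_index` are placeholders specific to your model.
    """
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        # training=False propagates to the augmentation / BatchNorm layers,
        # so repeated calls give the same activations and gradients.
        conv_out, preds = grad_model(image[tf.newaxis, ...], training=False)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # average gradients over H, W
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    return tf.nn.relu(cam)[0]  # heatmap for the single input image
```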