Hi, I just noticed that in the pre-training phase you use the SGD optimizer, while in the meta-training phase you use the Adam optimizer. Why do you choose different optimizers for the two phases?
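For reference, a minimal sketch of the two-phase setup being asked about; the backbone stand-in and all hyperparameters below are placeholders, not values taken from the repo:

```python
import torch

# Stand-in for the actual backbone/classifier used in the repo.
model = torch.nn.Linear(640, 64)

# Pre-training phase: SGD (momentum/weight decay values are illustrative).
pretrain_optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4
)

# Meta-training phase: Adam (learning rate is illustrative).
meta_optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```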
Hi, in the PyTorch code I found that you set train_aug=False in the meta-training phase, but train_aug=True in the pre-training phase. Is train_aug designed only for the pre-training phase? I set train_aug=True in the meta-training phase and ran several epochs; the result was lower than with train_aug=False.
We apply data augmentation during pre-training to mitigate overfitting. You may apply data augmentation during meta-training as well. However, note that you must not apply data augmentation to the episode test set (the test set of each small task).
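As a rough illustration, a train_aug-style flag might toggle the transform pipeline like this; the flag name, image size, and normalization statistics below are assumptions for the sketch, not values from the repo:

```python
import torchvision.transforms as transforms

IMAGE_SIZE = 84  # common choice for miniImageNet-style benchmarks

def build_transform(train_aug: bool) -> transforms.Compose:
    if train_aug:
        # Augmented pipeline: random crops and flips help reduce
        # overfitting when training on the base-class dataset.
        return transforms.Compose([
            transforms.RandomResizedCrop(IMAGE_SIZE),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225]),
        ])
    # Plain pipeline: deterministic resize only. The episode test set
    # (the test split of each small task) must always use this branch.
    return transforms.Compose([
        transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
```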