Dear authors, thank you for your inspiring work. I am especially interested in the fixed- and dynamic-context prompt LM tuning settings.
According to the paper, 15% of the original training set is reserved for validation. Could you please publish the new data splits? Also, what hyperparameters were used for each setting? These would be very helpful for reproducing the reported results.
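For reference, a minimal sketch of how such an 85/15 split might be constructed. This is only an assumption for illustration: the paper's actual procedure (seed, shuffling, stratification) is exactly what this issue is asking about, and `make_split` is a hypothetical helper, not code from the repo.

```python
import random

def make_split(examples, val_fraction=0.15, seed=42):
    """Reserve a fraction of a training set for validation.

    Hypothetical reconstruction: the seed and any stratification
    used in the paper are unknown.
    """
    rng = random.Random(seed)
    indices = list(range(len(examples)))
    rng.shuffle(indices)
    n_val = int(len(examples) * val_fraction)
    val_idx = set(indices[:n_val])
    train = [ex for i, ex in enumerate(examples) if i not in val_idx]
    val = [ex for i, ex in enumerate(examples) if i in val_idx]
    return train, val
```

With a fixed seed the split is deterministic, which is the property needed to align results across runs; publishing the indices themselves would remove even this ambiguity.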
Thank you!
PS: The code does not run as-is, but I was able to fix it after cross-checking with the PET repo.