Grokking, the unusual phenomenon in algorithmic datasets where generalization happens long after overfitting the training data, has remained elusive. We aim to understand grokking by analyzing the loss landscapes of neural networks, identifying the mismatch between training and test loss landscapes as the cause of grokking. We refer to this as the "LU mechanism" because training and test losses (plotted against model weight norm) typically resemble "L" and "U", respectively. This simple mechanism nicely explains many aspects of grokking: data size dependence, weight decay dependence, the emergence of representations, etc. Guided by this intuitive picture, we are able to induce grokking on tasks involving images, language and molecules. In the reverse direction, we are able to eliminate grokking for algorithmic datasets. We attribute the dramatic nature of grokking for algorithmic datasets to representation learning.
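
To make the "L"/"U" picture concrete, the sketch below probes a trained model's loss landscape along the weight-norm direction by rescaling all parameters and re-evaluating training and test loss. It assumes a PyTorch classifier with train/test DataLoaders; the helper names (`weight_norm`, `loss_at_scale`, `sweep`) are illustrative, not functions from the paper's code.

```python
# Minimal sketch: measure train/test loss as a function of overall weight norm.
import copy
import torch

def weight_norm(model):
    # L2 norm of all parameters taken together.
    return torch.sqrt(sum((p ** 2).sum() for p in model.parameters())).item()

@torch.no_grad()
def loss_at_scale(model, loader, loss_fn, scale):
    # Mean loss after multiplying every weight by `scale`.
    scaled = copy.deepcopy(model)
    for p in scaled.parameters():
        p.mul_(scale)
    total, n = 0.0, 0
    for x, y in loader:
        total += loss_fn(scaled(x), y).item() * len(y)
        n += len(y)
    return total / n

def sweep(model, train_loader, test_loader, loss_fn, scales):
    # If the LU picture holds, train loss stays low past some norm ("L"),
    # while test loss dips in a narrow band and rises again ("U").
    base = weight_norm(model)
    for s in scales:
        tr = loss_at_scale(model, train_loader, loss_fn, s)
        te = loss_at_scale(model, test_loader, loss_fn, s)
        print(f"norm={base * s:8.2f}  train={tr:.4f}  test={te:.4f}")
```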
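The abstract does not spell out how grokking is induced on new tasks; the following is a hedged guess consistent with the LU mechanism: initialize at an unusually large weight norm and rely on weight decay to shrink the norm toward the test-loss "U". Everything here (architecture, the scale factor alpha, hyperparameters) is an illustrative assumption, not the paper's setup.

```python
# Hypothetical recipe for inducing delayed generalization via a scaled init.
import torch
import torch.nn as nn

def scaled_init(model, alpha):
    # Multiply freshly initialized weights by alpha to start training
    # at a large weight norm.
    with torch.no_grad():
        for p in model.parameters():
            p.mul_(alpha)
    return model

model = scaled_init(
    nn.Sequential(nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10)),
    alpha=8.0,
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
# Train as usual from here. With a large alpha, training loss typically drops
# early (the "L"), while test loss falls only much later, once weight decay
# has pulled the norm down into the "U" minimum -- i.e., grokking.
```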