Accuracy unchanged while training #3

Open
clockworked247 opened this issue Sep 8, 2017 · 3 comments

clockworked247 commented Sep 8, 2017

@vanzytay love your work here and thanks for sharing it for people to learn from!

I'm running the following command and not seeing any change in output:
python train.py --rnn_type GRU --mode term

Output:
---
[Epoch 1] Train Loss=0.9544535914837327 T=207.711814s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 2] Train Loss=0.9511685806084733 T=223.582632s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 3] Train Loss=0.9511685740223247 T=225.45853200000005s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 4] Train Loss=0.9486824010617166 T=219.52165200000002s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 5] Train Loss=0.9511685766567841 T=222.91000699999995s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 6] Train Loss=0.9486823934876458 T=221.76340299999993s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 7] Train Loss=0.9486823994151795 T=219.41352699999993s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 8] Train Loss=0.9511685839015476 T=222.40653099999986s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 9] Train Loss=0.9486824050134058 T=222.14093300000013s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 10] Train Loss=0.9486824076478653 T=221.4141770000001s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 11] Train Loss=0.9511685786326287 T=222.83806300000015s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 12] Train Loss=0.9536547789257535 T=222.21776s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 13] Train Loss=0.9486823948048755 T=226.17170699999997s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 14] Train Loss=0.9511685914756185 T=223.5613070000004s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 15] Train Loss=0.9511685918049259 T=224.78602600000022s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 16] Train Loss=0.9511686039893008 T=225.99811899999986s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 17] Train Loss=0.9536547822188277 T=229.34864700000026s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 18] Train Loss=0.9511685960859225 T=224.25112900000022s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 19] Train Loss=0.9511685983910745 T=226.01416599999993s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 20] Train Loss=0.9511686039893008 T=227.8825510000006s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 21] Train Loss=0.9511686023427637 T=227.24668999999994s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 22] Train Loss=0.9536547891342837 T=231.79577599999993s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 23] Train Loss=0.9511686059651454 T=227.48067600000013s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 24] Train Loss=0.9511686020134562 T=227.48563000000013s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 25] Train Loss=0.9486824152219361 T=225.5863560000007s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 26] Train Loss=0.9486824201615476 T=221.80185700000038s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 27] Train Loss=0.9511686072823751 T=222.58302899999944s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 28] Train Loss=0.9511686168322906 T=220.37081399999988s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 29] Train Loss=0.9511686115633717 T=217.61475700000028s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 30] Train Loss=0.9511686115633717 T=217.61519999999928s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 31] Train Loss=0.9511686155150608 T=221.39081799999985s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 32] Train Loss=0.9486824198322401 T=217.4700130000001s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 33] Train Loss=0.9486824247718516 T=216.53440500000033s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 34] Train Loss=0.9536548006600438 T=215.7164499999999s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 35] Train Loss=0.9511686247356689 T=219.20409699999982s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 36] Train Loss=0.9536547980255844 T=216.21291700000074s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 37] Train Loss=0.9536547996721215 T=217.60048799999822s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 38] Train Loss=0.9486824237839293 T=217.80505099999937s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 39] Train Loss=0.9511686036599933 T=221.05352800000037s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 40] Train Loss=0.9486824254304664 T=221.01860599999964s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 41] Train Loss=0.9536547960497398 T=220.08077799999955s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 42] Train Loss=0.9511686122219866 T=223.32578099999955s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 43] Train Loss=0.9486824267476962 T=222.5393839999997s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 44] Train Loss=0.9486824323459225 T=222.55362799999966s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 45] Train Loss=0.9511686023427637 T=222.70722000000023s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 46] Train Loss=0.9511686036599933 T=223.65236600000026s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 47] Train Loss=0.9511686059651454 T=222.57912299999953s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 48] Train Loss=0.9536547894635912 T=223.7721619999993s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 49] Train Loss=0.9486824139047064 T=223.58448600000156s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---
[Epoch 50] Train Loss=0.9486824125874767 T=220.14713099999972s
Test loss=0.90144944190979
Output Distribution={2: 1120}
Accuracy=0.65
---

I ran python prepare.py for both terms and aspects beforehand, and it generated the embeddings and store files.

Is there something I'm missing?
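For reference, Output Distribution={2: 1120} means every one of the 1,120 test examples is being predicted as class 2, so the constant 0.65 accuracy is just the majority-class share of the test set. One way to check whether the weights are updating at all is to compare parameters across a single optimizer step. Below is a minimal, self-contained sketch of that check; the toy model and every name in it are stand-ins, not the actual code in train.py:

```python
import torch
import torch.nn as nn

class ToyGRUClassifier(nn.Module):
    """Tiny stand-in for the repo's GRU model, only for this diagnostic."""
    def __init__(self, vocab=100, emb=16, hidden=32, classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, x):
        _, h = self.gru(self.emb(x))
        return self.out(h[-1])

model = ToyGRUClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

inputs = torch.randint(0, 100, (8, 20))   # fake batch: 8 sequences of length 20
targets = torch.randint(0, 3, (8,))

# Snapshot the parameters before one training step.
before = {n: p.detach().clone() for n, p in model.named_parameters()}

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()

# If every gradient norm is ~0, the loss is not reaching the parameters
# (e.g. tensors detached somewhere in the forward pass).
for n, p in model.named_parameters():
    print(n, "grad norm:", 0.0 if p.grad is None else p.grad.norm().item())

optimizer.step()

# If nothing moves after the step, the optimizer was not given these
# parameters, or the learning rate is effectively zero.
for n, p in model.named_parameters():
    print(n, "max change:", (p.detach() - before[n]).abs().max().item())
```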

vanzytay (Owner) commented Sep 9, 2017

Hi.

I have not touched this repository for a long time and I'm not sure whether the PyTorch API has changed (this was written when PyTorch first came out).

That aside, perhaps it has something to do with the learning rate? The training loss does look like it is changing (dropping) slightly.

I've noticed the model tends to predict a single class for quite a while even when it works. Maybe some hyperparameter tuning could help?
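As an illustration of that kind of tuning, here is a hedged sketch of a learning-rate sweep that also shows gradient clipping, another common fix when an RNN's loss stalls. Synthetic data and a plain linear classifier stand in for the repo's GRU/term setup; nothing below comes from train.py:

```python
import collections

import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)
y = (X[:, 0] > 0).long() + (X[:, 1] > 0).long()  # three classes: 0, 1, 2

for lr in (1.0, 0.1, 0.01, 0.001):
    model = nn.Linear(10, 3)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    for _ in range(200):
        optimizer.zero_grad()
        loss = criterion(model(X), y)
        loss.backward()
        # Clip gradients so a single bad batch can't blow up the update.
        nn.utils.clip_grad_norm_(model.parameters(), 5.0)
        optimizer.step()

    preds = model(X).argmax(dim=1)
    dist = dict(collections.Counter(preds.tolist()))
    acc = (preds == y).float().mean().item()
    print(f"lr={lr}: output distribution={dist} accuracy={acc:.3f}")
```

The thing to watch is whether the output distribution ever spreads across more than one class as the learning rate changes.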

I'm a little busy with other projects at the moment, but I'll let you know when I get a chance to check what's wrong with this repository.

Thanks for your feedback.

clockworked247 (Author)

Fair enough. Thanks for your response.

DrJZhou commented Jan 12, 2018

@clockworked247 Excuse me, I've run into the same problem you described. Do you know how to solve it? Thank you!
