
For the unigram and bigram embeddings for the Chinese datasets, what if I want to train my own set of embeddings? #22

Open
marcusau opened this issue Aug 4, 2020 · 1 comment

Comments


marcusau commented Aug 4, 2020

Hi,

Thanks for your amazing work.

For the following unigram and bigram embeddings, if I want to try my own set of .vec files, what should I do?

For the Chinese datasets, you can download the pretrained unigram and bigram embeddings in Baidu Cloud. Download the 'gigaword_chn.all.a2b.uni.iter50.vec' and 'gigaword_chn.all.a2b.bi.iter50.vec'. Then replace the embedding path in train_tener_cn.py
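For context, a minimal sketch of what swapping the paths might look like, assuming the script loads the pretrained vectors through fastNLP's StaticEmbedding (the path strings and vocabulary construction below are illustrative, not the exact names used in train_tener_cn.py):

```python
from fastNLP import Vocabulary
from fastNLP.embeddings import StaticEmbedding

# Hypothetical paths to your own pretrained vectors, stored in word2vec text
# format: each line is "<token> <v1> <v2> ... <vd>".
unigram_path = 'path/to/my_unigram.vec'
bigram_path = 'path/to/my_bigram.vec'

# Toy vocabularies; in the training script these are built from the NER datasets.
uni_vocab = Vocabulary()
uni_vocab.add_word_lst(['复', '旦', '大', '学'])
bi_vocab = Vocabulary()
bi_vocab.add_word_lst(['复旦', '旦大', '大学', '学<EOS>'])

# Point StaticEmbedding at your own .vec files instead of the gigaword ones.
uni_embed = StaticEmbedding(uni_vocab, model_dir_or_name=unigram_path, min_freq=1)
bi_embed = StaticEmbedding(bi_vocab, model_dir_or_name=bigram_path, min_freq=1)
```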

Thanks a lot,

Marcus

Member

yhcc commented Aug 6, 2020

Thanks for your attention. You can use the word2vec or GloVe algorithm to train your own word vectors (their original implementations target English, so these tools usually take a space as the token separator; you may need to insert spaces between Chinese characters or bigrams), and you can use a wiki corpus to train your word vector models. For a sentence like "复旦大学", the unigram sequence is ["复", "旦", "大", "学"] and the bigram sequence is ["复旦", "旦大", "大学", "学<EOS>"]. In my experience, the larger the word vector dimension, the better the performance (300d > 100d > 50d). Since Chinese corpora are not large, training 5 epochs usually takes less than an hour to get the vectors.
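A hedged sketch of that recipe using gensim's Word2Vec (the corpus and output file names are illustrative; passing token lists directly sidesteps the space-separator issue mentioned above):

```python
from gensim.models import Word2Vec

def to_unigrams(sentence):
    """Split a Chinese sentence into a list of characters (unigrams)."""
    return list(sentence)

def to_bigrams(sentence):
    """Split a Chinese sentence into character bigrams, padding the end with <EOS>."""
    padded = list(sentence) + ['<EOS>']
    return [padded[i] + padded[i + 1] for i in range(len(sentence))]

# Toy corpus; in practice read sentences from a wiki dump, one per line.
corpus = ['复旦大学', '上海交通大学']
uni_sentences = [to_unigrams(s) for s in corpus]  # e.g. ['复', '旦', '大', '学']
bi_sentences = [to_bigrams(s) for s in corpus]    # e.g. ['复旦', '旦大', '大学', '学<EOS>']

# gensim 4.x: the dimension argument is `vector_size` (it was `size` in 3.x)
# and the number of passes is `epochs` (it was `iter` in 3.x).
uni_model = Word2Vec(sentences=uni_sentences, vector_size=300, window=5, min_count=1, epochs=5)
bi_model = Word2Vec(sentences=bi_sentences, vector_size=300, window=5, min_count=1, epochs=5)

# Save in the same word2vec text format as the gigaword .vec files.
uni_model.wv.save_word2vec_format('my_unigram.vec', binary=False)
bi_model.wv.save_word2vec_format('my_bigram.vec', binary=False)
```

The resulting .vec files can then be substituted for 'gigaword_chn.all.a2b.uni.iter50.vec' and 'gigaword_chn.all.a2b.bi.iter50.vec' in train_tener_cn.py as described above.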
