Error while trying to use the model #4
Comments
I have the same issue! |
me too |
I think the authors were planning to use E11, E21, etc., but then changed the code to use # & $. What I did to solve the issue: when I read the data at the beginning of the code, I convert the special tokens (E11 & E21 -> # & $), and then everything worked perfectly. |
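For anyone trying the same workaround, here is a minimal sketch of that substitution, applied to the raw text before tokenization. The marker names (E11/E12/E21/E22) and the #/$ targets are assumptions based on this thread, not code taken from the repo.

# Hypothetical helper: map entity-marker tokens in the raw sentence to the
# single-character markers that utils.py expects. In this sketch E11/E12
# delimit entity 1 and E21/E22 delimit entity 2; adjust to your data format.
MARKER_MAP = {
    "E11": "#", "E12": "#",   # entity-1 boundaries -> '#'
    "E21": "$", "E22": "$",   # entity-2 boundaries -> '$'
}

def convert_markers(sentence: str) -> str:
    for old, new in MARKER_MAP.items():
        sentence = sentence.replace(old, new)
    return sentence

print(convert_markers("The E11 system E12 failed because of a E21 bug E22 ."))
# -> "The # system # failed because of a $ bug $ ."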
@bilalghanem Can you share an example of how you converted the training examples? |
@sandeeppilania I changed it in the code, ex. |
@bilalghanem I am asking something silly here, sorry about that: do we make this replacement before applying the tokenizer? |
Sorry, you're right: before applying the tokenizer, or even when you start reading the data. |
Got it, |
@sandeeppilania yes, exactly. And this line specifies the end of the entity in case its length is more than a single word. |
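For context, a sketch of how the entity start/end positions can be recovered from the markers after WordPiece tokenization. Only e11_p and the '#' marker match the traceback at the bottom of this thread; every other name and the '$' marker for entity 2 are assumptions.

# Illustrative only: locate entity spans between paired '#' and '$' markers
# in the WordPiece token list. Only e11_p matches the line shown in the
# traceback below; the other names are assumptions.
tokens_a = ["the", "#", "sys", "##tem", "#", "failed", "due", "to", "a", "$", "bug", "$", "."]

e11_p = tokens_a.index("#") + 1                           # start of entity 1
e12_p = len(tokens_a) - tokens_a[::-1].index("#") - 1     # end of entity 1 (last '#')
e21_p = tokens_a.index("$") + 1                           # start of entity 2
e22_p = len(tokens_a) - tokens_a[::-1].index("$") - 1     # end of entity 2 (last '$')

entity1 = tokens_a[e11_p:e12_p]   # ['sys', '##tem'] -- a multi-word entity
entity2 = tokens_a[e21_p:e22_p]   # ['bug']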
@sandeeppilania Hi, brother... I have the same issue. Can you share the part of the code where exactly you made the changes to solve the problem? Thanks in advance. |
You mean E11 & E12? |
I wonder why they didn't try running the software before they posted it here (and explicitly said it's "stable", when it doesn't even run)... |
Please check the following lines in bert.py. |
Hey guys, look here! Modify the additional_special_tokens in bert.py so that it corresponds to tokens_a in utils.py, and pay attention to the start and end index positions in utils.py; if necessary, modify the code around line 275 of utils.py. After that, you can start training. I have tried this method and it is effective. |
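A tiny sketch of the consistency check that comment describes, with assumed values; the actual contents of additional_special_tokens in bert.py and the markers used in utils.py may differ.

# Hypothetical check: the tokens registered with the tokenizer in bert.py must
# be exactly the markers that utils.py later calls tokens_a.index() on.
ADDITIONAL_SPECIAL_TOKENS_IN_BERT_PY = ["#", "$"]   # assumed
MARKERS_LOOKED_UP_IN_UTILS_PY = ["#", "$"]          # assumed; '#' appears in the traceback

assert set(ADDITIONAL_SPECIAL_TOKENS_IN_BERT_PY) == set(MARKERS_LOOKED_UP_IN_UTILS_PY), \
    "bert.py and utils.py disagree on the entity markers"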
I am sorry, I have no time to correct the code; the error is raised when you are using a modern transformers library.
model = XXX.from_pretrained(args.bert_model, args=args)
tokenizer.add_tokens(additional_special_tokens)
Add the following line:
model.resize_token_embeddings(len(tokenizer)) |
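A self-contained sketch of that fix against a recent transformers release. The concrete model and tokenizer classes and the marker tokens are assumptions, since the XXX placeholder above is not specified.

from transformers import BertForSequenceClassification, BertTokenizer

# Sketch only: after adding new marker tokens to the tokenizer, resize the
# model's embedding matrix so the new token ids have embedding rows.
additional_special_tokens = ["#", "$"]   # assumed markers; must match utils.py

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(additional_special_tokens)

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))   # the line the author says to add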
OK, thank you 😲 |
Traceback (most recent call last):
  File "bert.py", line 429, in <module>
    main()
  File "bert.py", line 373, in main
    config, config.task_name, tokenizer, evaluate=False)
  File "bert.py", line 268, in load_and_cache_examples
    examples, label_list, config.max_seq_len, tokenizer, "classification", use_entity_indicator=config.use_entity_indicator)
  File "C:\Users\pilanisp\Desktop\BERT FINAL\BERT IE\bert-relation-classification\utils.py", line 281, in convert_examples_to_features
    e11_p = tokens_a.index("#")+1  # the start position of entity1
ValueError: '#' is not in list
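For anyone debugging the same ValueError, a small hypothetical guard around the failing lookup; the variable name tokens_a mirrors the traceback, everything else is illustrative.

# Hypothetical guard: report examples that are missing the entity-1 marker
# instead of raising ValueError (e.g. when the data still uses E11/E21 markers).
def entity1_start(tokens_a):
    if "#" not in tokens_a:
        print("No entity-1 marker in this example, tokens were:", tokens_a)
        return None
    return tokens_a.index("#") + 1  # the start position of entity1

print(entity1_start(["the", "E11", "system", "E12", "failed", "."]))  # prints a warning, -> None
print(entity1_start(["the", "#", "system", "#", "failed", "."]))      # -> 2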