Position Id #7
@Wangpeiyi9979 As you know, in the case of Transformers the only way of providing the self-attention layers with positional information is through the position embeddings, which are selected by position_ids; the attention mechanism itself is order-agnostic. So why do we rearrange them in such a way? For example, suppose that in the same batch you have two samples you train an NLI (or other sentence-pair classification) model on, and prompt tokens are inserted into each of them.
Because the two sequences have different lengths, their special tokens will appear at different indices in each row of the batch.
You can notice that the special tokens are not aligned, and it is not efficient to insert prompt embeddings at such scattered positions. However, if we permute the tokens (and permute position_ids in exactly the same way, so no positional information is lost), all the special tokens become aligned. Moreover, not only the special tokens but also the prompt slots line up, so the prompt embeddings can be written into the same column indices for every example in the batch (see the sketch below).
This trick is performed only if the corresponding flag is enabled. BTW: in RoBERTa models the position ids are offset, starting at padding_idx + 1 = 2 rather than 0.
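To make the permutation concrete, here is a minimal, self-contained sketch. It is my own illustration, not the repository's actual code: the token ids, the number of prompt slots, the choice of placing the prompt slots at the front of each row, and the positions assigned to them are all assumptions made for the example.

```python
import torch

# Toy batch: two tokenised sentence pairs of different lengths
# (all ids below are made up for the illustration).
CLS, SEP, PAD, PROMPT = 101, 102, 0, 999
n_prompts = 2  # assume two trainable prompt slots per example

examples = [
    [CLS, 7, 8, SEP, 9, 10, SEP],                # short pair
    [CLS, 7, 8, 9, 10, 11, SEP, 12, 13, SEP],    # longer pair
]

def build_row(tokens, max_len):
    """Physically place the prompt slots at the front of the row, but assign
    them the positions they would occupy if appended after the sentence,
    while every real token keeps its original position."""
    input_ids = [PROMPT] * n_prompts + tokens
    position_ids = (list(range(len(tokens), len(tokens) + n_prompts))
                    + list(range(len(tokens))))
    pad = max_len - len(input_ids)
    # Padded slots get position 0 here; in a real model they are ignored
    # through the attention mask, which is omitted from this sketch.
    return input_ids + [PAD] * pad, position_ids + [0] * pad

max_len = n_prompts + max(len(t) for t in examples)
rows = [build_row(t, max_len) for t in examples]
input_ids = torch.tensor([r[0] for r in rows])
position_ids = torch.tensor([r[1] for r in rows])

print(input_ids)
# The PROMPT slots now sit in the same columns of every row, so the trainable
# prompt embeddings can be written with one vectorised assignment, e.g.
# inputs_embeds[:, :n_prompts] = prompt_embeddings.
print(position_ids)
# position_ids are not monotonically increasing -- the "unordered" pattern the
# issue asks about -- yet each real token still selects the position embedding
# it would have had without the rearrangement.
```

In a real forward pass the permuted position_ids are simply passed to the encoder alongside the rearranged inputs, so the model sees the original ordering through the position embeddings.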
Hi, thanks for your nice work.
When I read the source code, I had a simple question about the position ids used in it: I find that the position ids are not ordered. What are the benefits of such position ids?
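For reference, and connecting to the answer above: position_ids is an explicit, optional input of HuggingFace transformer encoders, so a permuted, non-monotonic tensor is a legal thing to feed the model. A small sketch, assuming a plain roberta-base checkpoint (my choice for the example, not something stated in the issue):

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

model = RobertaModel.from_pretrained("roberta-base")
tok = RobertaTokenizer.from_pretrained("roberta-base")

enc = tok("a cat sleeps", return_tensors="pt")
seq_len = enc["input_ids"].shape[1]

# RoBERTa reserves position ids 0 and 1 (padding_idx = 1), so real positions
# start at 2 -- the offset mentioned in the comment above.
ordered = torch.arange(2, 2 + seq_len).unsqueeze(0)
permuted = ordered[:, torch.randperm(seq_len)]   # deliberately "unordered"

out_ordered = model(**enc, position_ids=ordered)
out_permuted = model(**enc, position_ids=permuted)

# Both calls run: the embedding layer just looks up whichever position
# embedding each id selects; it is the caller's job to permute position_ids
# consistently with the tokens so that no positional information is lost.
print(out_ordered.last_hidden_state.shape, out_permuted.last_hidden_state.shape)
```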