Was trying out CLIP via sentence-transformers, but I get an error after tokenization because the input has not been truncated. Not sure if this is down to the transformers version, so I'm attaching all the information below.
Error:
RuntimeError: The size of tensor a (129) must match the size of tensor b (77) at non-singleton dimension 1
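For reference, roughly how I hit it (a minimal sketch; the model name and text here are placeholders, not my exact input):

```python
from sentence_transformers import SentenceTransformer

# Any text that tokenizes to more than CLIP's 77-token context triggers the error,
# since the tokenized input is not truncated before the forward pass.
model = SentenceTransformer("clip-ViT-B-32")
long_text = "a photo of a cat " * 50   # well over 77 tokens after tokenization
embedding = model.encode(long_text)    # -> RuntimeError: tensor a (...) vs tensor b (77)
```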
Packages:
This refers to this line:
sentence-transformers/sentence_transformers/models/CLIPModel.py, line 71 in a624f0c
where, to make it work, I needed to add truncation=True:
..padding, truncation=True)
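To show the effect in isolation, here is a small standalone check against the underlying Hugging Face tokenizer (the checkpoint name is just an assumption for illustration; whichever CLIP checkpoint is loaded behaves the same way):

```python
from transformers import CLIPTokenizerFast

# CLIP's text encoder has a fixed 77-position context, which is where
# "tensor b (77)" in the error comes from.
tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
long_text = "a photo of a cat " * 50

# Without truncation the sequence exceeds 77 tokens and later breaks
# against the fixed-size positional embeddings.
no_trunc = tokenizer(long_text, return_tensors="pt", padding=True)
print(no_trunc["input_ids"].shape)    # longer than 77

# With truncation=True the tokenizer clips to the model's max length.
with_trunc = tokenizer(long_text, return_tensors="pt", padding=True, truncation=True)
print(with_trunc["input_ids"].shape)  # torch.Size([1, 77])
```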