In the correctness file, the Composite data usually includes only 3 or 4 human-rated captions per image. Some candidate captions read like a full paragraph; do I need to truncate them into a short sentence?
For example: "person is pulling bow in the back.A person might be wearing helmet in the scene.person is having tattoo.The scene contains grass and well-maintained grass and garden and playhouse."
I processed the Composite dataset into the flickr8k JSON style and used compute_correlations.py to compute the human correlation, but I got different scores from the ones reported in the paper.
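For reference, here is a minimal sketch of the kind of conversion I mean. The file name, column headers, and flickr8k-style field names below are my assumptions, not necessarily what compute_correlations.py actually expects:

```python
import json

import pandas as pd

# Hypothetical file name and column headers -- substitute the actual
# fields found in the Composite annotation files.
df = pd.read_csv("composite_correctness.csv")

data = {}
for i, row in df.iterrows():
    data[str(i)] = {
        "image_path": row["image_id"],         # image the caption describes
        "refs": row["references"].split("|"),  # reference captions
        "candidate": row["caption"],           # candidate caption being rated
        "human_judgement": [{"rating": row["rating"]}],
    }

with open("composite_flickr8k_style.json", "w") as f:
    json.dump(data, f)
```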
Could you give me some guidance on how to process the Composite dataset and reproduce the reported scores? I would appreciate it if you could take the time to reply.
The download path is correct, but note that we use both the correctness and the thoroughness annotations.
Although we didn't shorten the sentences, be aware that when you tokenize the captions with CLIP you are limited to captions that are at most 77 tokens long; anything longer needs to be truncated.
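For example, the original CLIP tokenizer raises a RuntimeError on over-long inputs unless you opt into truncation explicitly:

```python
import clip

captions = [
    "person is pulling bow in the back. A person might be wearing helmet "
    "in the scene. person is having tattoo. The scene contains grass and "
    "well-maintained grass and garden and playhouse.",
]

# CLIP uses a fixed context length of 77 tokens; with truncate=True any
# longer caption is cut off (its last token becomes the end-of-text token)
# instead of raising an error.
tokens = clip.tokenize(captions, truncate=True)
print(tokens.shape)  # torch.Size([1, 77])
```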
In our case, we extracted the candidate captions and their corresponding ratings from the CSV files you mentioned.
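A minimal sketch of that extraction, assuming hypothetical file names and column headers (check them against the actual CSVs):

```python
import pandas as pd

# Hypothetical file names and column headers -- the actual Composite
# CSVs may label these differently.
correctness = pd.read_csv("composite_correctness.csv")
thoroughness = pd.read_csv("composite_thoroughness.csv")

# Pair each candidate caption with both of its human ratings.
merged = correctness.merge(
    thoroughness,
    on=["image_id", "caption"],
    suffixes=("_correctness", "_thoroughness"),
)
pairs = merged[
    ["image_id", "caption", "rating_correctness", "rating_thoroughness"]
]
print(pairs.head())
```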