Thanks for your interest and kind advice! However, since we focus on textual adversarial attack and defense, we need to discuss carefully whether we should incorporate these papers into the list. (I have skimmed your recommended papers and I think they are suitable for our list, but I need some time to discuss with the other collaborators.)
Added the recommended articles in a new section, "Application of TAAD in other fields". Thank you again for your contribution. If you have further questions, please reopen the issue.
Dear authors,
Thanks for the awesome list!
You may also find the following papers interesting.
Learning Visually-Grounded Semantics from Contrastive Adversarial Samples
by Haoyue Shi*, Jiayuan Mao*, Tete Xiao*, Yuning Jiang, and Jian Sun. The paper was published at COLING 2018.
The authors have introduced a textual attack protocol for visually grounded text embeddings (e.g., in image-captioning models). A simple yet effective adversarial training algorithm is also proposed in the paper for defense.
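For readers unfamiliar with this style of defense, here is a minimal sketch of margin-based adversarial training for joint image-text embeddings. The function names and the plain triplet formulation are illustrative assumptions on my part, not the paper's exact objective:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_loss(img, pos_cap, neg_cap, margin=0.2):
    """Margin ranking loss: the matching caption should score higher
    than the negative caption by at least `margin`."""
    return max(0.0, margin - cosine(img, pos_cap) + cosine(img, neg_cap))

def adversarial_triplet_loss(img, caption, random_neg, adversarial_neg, margin=0.2):
    """Illustrative defense: besides a randomly sampled negative caption,
    an adversarially edited caption (e.g., one noun swapped) is used as
    an extra hard negative during training."""
    return (triplet_loss(img, caption, random_neg, margin)
            + triplet_loss(img, caption, adversarial_neg, margin))
```

The intuition is that ordinary random negatives are easy to separate, so the model never learns fine-grained distinctions; forcing a margin against a minimally edited caption does.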
A follow-up paper is the following.
Unified Visual-Semantic Embeddings: Bridging Vision and Language with Structured Meaning Representations
by Hao Wu*, Jiayuan Mao*, Yufeng Zhang, Weiwei Sun, Yuning Jiang, Lei Li, and Wei-Ying Ma. The paper was published at CVPR 2019.
The authors have introduced a semantic coverage prior for visually grounded text embeddings, which relies on semantic parsing of the text. This technique improves the models' robustness against the textual attacks proposed in the previous paper.
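As a rough intuition for the coverage idea, one could score how many parsed components of a caption are individually grounded in the image. This is a hypothetical proxy only; the paper's actual formulation builds structured embeddings from the semantic parse rather than thresholding similarities:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_coverage(img, component_embs, tau=0.5):
    """Fraction of parsed semantic components (objects, attributes,
    relations) whose embedding matches the image above threshold `tau`.
    A caption with one noun swapped leaves that component unmatched,
    so its coverage drops -- the robustness intuition in a nutshell."""
    hits = sum(cosine(img, c) >= tau for c in component_embs)
    return hits / len(component_embs)
```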
Many thanks.