Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering

Citations: 69
Authors
Goyal, Yash [1]
Khot, Tejas [2]
Agrawal, Aishwarya [1]
Summers-Stay, Douglas [3]
Batra, Dhruv [1,4]
Parikh, Devi [1,4]
Affiliations
[1] Georgia Tech, Atlanta, GA 30332, USA
[2] Carnegie Mellon University, Pittsburgh, PA 15213, USA
[3] Army Research Laboratory, Adelphi, MD, USA
[4] Facebook AI Research, Menlo Park, CA, USA
Keywords
Visual question answering; VQA; VQA challenge
DOI
10.1007/s11263-018-1116-0
CLC Number (Chinese Library Classification)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The problem of visual question answering (VQA) is of significant importance both as a challenging research question and for the rich set of applications it enables. In this context, however, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in VQA models that ignore visual information and conveying an inflated sense of their capability. We propose to counter these language priors for the task of VQA and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., in: ICCV, 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the VQA Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. We also present interesting insights from analysis of the participant entries in the VQA Challenge 2017, which we organized on the proposed VQA v2.0 dataset. The results of the challenge were announced at the 2nd VQA Challenge Workshop at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model which, in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation: it identifies an image that is similar to the original image but that it believes has a different answer to the same question. This can help in building trust for machines among their users.
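Since the balancing protocol is the technical heart of the abstract, a minimal sketch in Python may help make it concrete: each question is tied to a pair of similar images whose ground-truth answers differ, so a model that ignores the image cannot answer both members of a pair correctly. The field names, the pair_accuracy function, and the example question below are illustrative assumptions, not the released VQA v2.0 annotation schema or the paper's official evaluation metric.

    from dataclasses import dataclass

    @dataclass
    class ComplementaryPair:
        """One balanced-pair record: the same question asked of two
        similar images whose ground-truth answers differ."""
        question: str
        image_id_a: int   # original image
        image_id_b: int   # visually similar complementary image
        answer_a: str     # ground-truth answer for image A
        answer_b: str     # ground-truth answer for image B (!= answer_a)

    def pair_accuracy(pairs, predict):
        """Fraction of pairs for which the model answers BOTH images
        correctly; predict(image_id, question) returns an answer string."""
        correct = sum(
            1 for p in pairs
            if predict(p.image_id_a, p.question) == p.answer_a
            and predict(p.image_id_b, p.question) == p.answer_b
        )
        return correct / len(pairs) if pairs else 0.0

    # A "blind" model that ignores the image returns the same answer for
    # both members of a pair, so it can never get both right:
    pairs = [ComplementaryPair("Is the umbrella upside down?", 1, 2, "yes", "no")]
    blind = lambda image_id, question: "yes"
    print(pair_accuracy(pairs, blind))  # 0.0

This pair-level view is what makes language priors visible: any model whose answer does not change when the image changes scores zero on every pair by construction.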
Pages: 398-414
Page count: 17
Related Papers (50 records in total)
  • [41] Lu, Jiasen; Yang, Jianwei; Batra, Dhruv; Parikh, Devi. Hierarchical Question-Image Co-Attention for Visual Question Answering. Advances in Neural Information Processing Systems 29 (NIPS 2016), 2016.
  • [42] Boukhers, Zeyd; Hartmann, Timo; Juerjens, Jan. COIN: Counterfactual Image Generation for Visual Question Answering Interpretation. Sensors, 2022, 22(6).
  • [43] Wang, Hongyu; Qiang, Pengpeng; Tan, Hongye; Hu, Jingchang. Enhancing Image Comprehension for Computer Science Visual Question Answering. Pattern Recognition and Computer Vision (PRCV 2023), Pt I, 2024, 14425: 487-498.
  • [44] Lin, Xiao; Parikh, Devi. Leveraging Visual Question Answering for Image-Caption Ranking. Computer Vision - ECCV 2016, Pt II, 2016, 9906: 261-277.
  • [45] Lowande, Rafael De Sa; Mahyari, Arash; Sevil, Hakki Erhan. Post-Disaster Damage Detection Using Aerial Footage: Visual Question Answering (VQA) Case Study. 2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR), 2022.
  • [46] Wang, Runmin; Xu, Weixiang; Zhu, Yanbin; Zhu, Zhenlin; Chen, Hua; Ding, Yajun; Liu, Jinping; Gao, Changxin; Sang, Nong. FTN-VQA: Multimodal Reasoning by Leveraging a Fully Transformer-Based Network for Visual Question Answering. Fractals: Complex Geometry, Patterns and Scaling in Nature and Society, 2023, 31(6).
  • [47] Kim, MinJun; Song, SeungWoo; Lee, YouHan; Jang, Haneol; Lim, KyungTae. BOK-VQA: Bilingual Outside Knowledge-Based Visual Question Answering via Graph Representation Pretraining. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 16, 2024: 18381-18389.
  • [48] Yang, Xu; Gao, Chongyang; Zhang, Hanwang; Cai, Jianfei. Auto-Parsing Network for Image Captioning and Visual Question Answering. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 2177-2187.
  • [49] Lubna, A.; Kalady, Saidalavi; Lijiya, A. MoBVQA: A Modality Based Medical Image Visual Question Answering System. Proceedings of the 2019 IEEE Region 10 Conference (TENCON 2019): Technology, Knowledge, and Society, 2019: 727-732.
  • [50] Kang, Joonseo; Lim, Changwon. Using Similarity Based Image Caption to Aid Visual Question Answering. Korean Journal of Applied Statistics, 2021, 34(2): 191-204.