Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering

Cited by: 69
Authors
Goyal, Yash [1]
Khot, Tejas [2]
Agrawal, Aishwarya [1]
Summers-Stay, Douglas [3]
Batra, Dhruv [1,4]
Parikh, Devi [1,4]
Affiliations
[1] Georgia Tech, Atlanta, GA 30332 USA
[2] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[3] Army Res Lab, Adelphi, MD USA
[4] Facebook AI Res, Menlo Pk, CA USA
Keywords
Visual question answering; VQA; VQA challenge;
DOI
10.1007/s11263-018-1116-0
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The problem of visual question answering (VQA) is of significant importance both as a challenging research question and for the rich set of applications it enables. In this context, however, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in VQA models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of VQA and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset (Antol et al., in: ICCV, 2015) by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at http://visualqa.org/ as part of the 2nd iteration of the VQA Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-the-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. We also present interesting insights from analysis of the participant entries in the VQA Challenge 2017, organized by us on the proposed VQA v2.0 dataset. The results of the challenge were announced in the 2nd VQA Challenge Workshop at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model which, in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation: it identifies an image that is similar to the original image but that it believes has a different answer to the same question. This can help in building trust for machines among their users.
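The balanced construction described in the abstract pairs every question with two visually similar images whose ground-truth answers differ. One way to see why this defeats models that lean on language priors (not the paper's official evaluation metric, just an illustrative pair-level view) is to credit a model only when it answers both images in a complementary pair correctly. The Python sketch below is a minimal illustration under that assumption; the ComplementaryPair fields and the predict interface are hypothetical and do not reflect the dataset's actual annotation format.

```python
from dataclasses import dataclass

@dataclass
class ComplementaryPair:
    """One question paired with two similar images that yield different answers.

    Hypothetical structure for illustration; not the VQA v2.0 file format.
    """
    question: str
    image_a: str   # id of the first image
    answer_a: str  # ground-truth answer for (image_a, question)
    image_b: str   # id of the complementary image
    answer_b: str  # ground-truth answer for (image_b, question)

def pair_accuracy(pairs, predict):
    """Fraction of pairs where the model answers BOTH images correctly.

    `predict(image_id, question)` is an assumed model interface. A "blind"
    model that ignores the image cannot do well here, because the two
    images in each pair force different answers to the same question.
    """
    if not pairs:
        return 0.0
    correct = sum(
        1 for p in pairs
        if predict(p.image_a, p.question) == p.answer_a
        and predict(p.image_b, p.question) == p.answer_b
    )
    return correct / len(pairs)

# Toy usage: a prior-only model that always answers "yes" scores 0.0,
# since it can never satisfy both sides of a complementary pair.
pairs = [ComplementaryPair("Is the umbrella open?", "img_001", "yes", "img_002", "no")]
blind_model = lambda image_id, question: "yes"
print(pair_accuracy(pairs, blind_model))  # 0.0
```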
Pages: 398-414
Number of pages: 17
Related Papers
50 records in total
  • [31] A visual question answering model based on image captioning
    Zhou, Kun
    Liu, Qiongjie
    Zhao, Dexin
    MULTIMEDIA SYSTEMS, 2024, 30 (06)
  • [33] Visual question answering algorithm based on image caption
    Cai, Wenliang
    Qiu, Guoyong
    PROCEEDINGS OF 2019 IEEE 3RD INFORMATION TECHNOLOGY, NETWORKING, ELECTRONIC AND AUTOMATION CONTROL CONFERENCE (ITNEC 2019), 2019, : 2076 - 2079
  • [34] Feasibility of Visual Question Answering (VQA) for Post-Disaster Damage Detection Using Aerial Footage
    Lowande, Rafael De Sa
    Sevil, Hakki Erhan
    APPLIED SCIENCES-BASEL, 2023, 13 (08):
  • [35] Knowledge-aware image understanding with multi-level visual representation enhancement for visual question answering
    Yan, Feng
    Li, Zhe
    Silamu, Wushour
    Li, Yanbing
    MACHINE LEARNING, 2024, 113 (06) : 3789 - 3805
  • [36] A CASCADED LONG SHORT-TERM MEMORY (LSTM) DRIVEN GENERIC VISUAL QUESTION ANSWERING (VQA)
    Chowdhury, Iqbal
    Kien Nguyen
    Fookes, Clinton
    Sridharan, Sridha
    2017 24TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2017, : 1842 - 1846
  • [37] Fair-VQA: Fairness-Aware Visual Question Answering Through Sensitive Attribute Prediction
    Park, Sungho
    Hwang, Sunhee
    Hong, Jongkwang
    Byun, Hyeran
    IEEE ACCESS, 2020, 8 : 215091 - 215099
  • [38] VQA-GNN: Reasoning with Multimodal Knowledge via Graph Neural Networks for Visual Question Answering
    Wang, Yanan
    Yasunaga, Michihiro
    Ren, Hongyu
    Wada, Shinya
    Leskovec, Jure
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 21525 - 21535
  • [39] Scene Understanding for Autonomous Driving Using Visual Question Answering
    Wantiez, Adrien
    Qiu, Tianming
    Matthes, Stefan
    Shen, Hao
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [40] On the role of question encoder sequence model in robust visual question answering
    Kv, Gouthaman
    Mittal, Anurag
    PATTERN RECOGNITION, 2022, 131