Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding

Cited by: 2
Authors
Alper, Morris [1 ]
Fiman, Michael [1 ]
Averbuch-Elor, Hadar [1 ]
Affiliations
[1] Tel Aviv Univ, Tel Aviv, Israel
Keywords
COLOR;
DOI
10.1109/CVPR52729.2023.00655
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Most humans use visual imagination to understand and reason about language, but models such as BERT reason about language using knowledge acquired during text-only pretraining. In this work, we investigate whether vision-and-language pretraining can improve performance on text-only tasks that involve implicit visual reasoning, focusing primarily on zero-shot probing methods. We propose a suite of visual language understanding (VLU) tasks for probing the visual reasoning abilities of text encoder models, as well as various non-visual natural language understanding (NLU) tasks for comparison. We also contribute a novel zero-shot knowledge probing method, Stroop probing, for applying models such as CLIP to text-only tasks without needing a prediction head such as the masked language modelling head of models like BERT. We show that SOTA multimodally trained text encoders outperform unimodally trained text encoders on the VLU tasks while underperforming them on the NLU tasks, lending new context to previously mixed results regarding the NLU capabilities of multimodal models. We conclude that exposure to images during pretraining affords inherent visual reasoning knowledge that is reflected in language-only tasks requiring implicit visual reasoning. Our findings have implications for the broader context of multimodal learning, providing principled guidelines for the choice of text encoders used in such contexts.
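The abstract's central technical idea, scoring text-only candidates with a dual-encoder model such as CLIP rather than with a masked-language-modelling head, can be illustrated with a short sketch. The snippet below is a hypothetical illustration of the Stroop-probing intuition (a congruent concept perturbs a text's embedding less than an incongruent one), not the paper's exact formulation; it assumes the open-source clip package (github.com/openai/CLIP) and PyTorch, and the prompt templates and scoring rule are invented for the example.

```python
# Hypothetical sketch of Stroop-style probing with CLIP's text encoder only
# (no prediction head). Prompt templates and the scoring rule here are
# illustrative assumptions, not the exact procedure from the paper.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def embed(texts):
    # L2-normalized CLIP text embeddings for a list of strings.
    tokens = clip.tokenize(texts).to(device)
    with torch.no_grad():
        feats = model.encode_text(tokens).float()
    return feats / feats.norm(dim=-1, keepdim=True)

# Example VLU-style color query: which color word is congruent with the object?
context = "a photo of a ripe banana"
candidates = ["yellow", "purple", "blue"]

ctx = embed([context])                                               # (1, d)
augmented = embed([f"{context}, which is {c}" for c in candidates])  # (3, d)

# Stroop-style intuition: a congruent concept shifts the context embedding
# less than an incongruent one, so score candidates by cosine similarity
# between the bare context and each concept-augmented text.
scores = (augmented @ ctx.T).squeeze(-1)
print(dict(zip(candidates, scores.tolist())))
print("predicted:", candidates[scores.argmax().item()])
```

Because only the text encoder is used, the same scoring applies unchanged to any dual-encoder model, which is what makes head-free probing of models like CLIP possible.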
Pages: 6778-6788
Page count: 11
Related papers
50 in total
  • [1] Effect of Visual Extensions on Natural Language Understanding in Vision-and-Language Models
    Iki, Taichi
    Aizawa, Akiko
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 2189 - 2196
  • [2] VLN↻BERT: A Recurrent Vision-and-Language BERT for Navigation
    Hong, Yicong
    Wu, Qi
    Qi, Yuankai
    Rodriguez-Opazo, Cristian
    Gould, Stephen
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 1643 - 1653
  • [3] Airbert: In-domain Pretraining for Vision-and-Language Navigation
    Guhur, Pierre-Louis
    Tapaswi, Makarand
    Chen, Shizhe
    Laptev, Ivan
    Schmid, Cordelia
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 1614 - 1623
  • [4] Does Vision-and-Language Pretraining Improve Lexical Grounding?
    Yun, Tian
    Sun, Chen
    Pavlick, Ellie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021, : 4357 - 4366
  • [5] Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language
    Li, Chuanhao
    Li, Zhen
    Jing, Chenchen
    Jia, Yunde
    Wu, Yuwei
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 19092 - 19101
  • [6] Reinforced Vision-and-Language Navigation Based on Historical BERT
    Zhang, Zixuan
    Qi, Shuhan
    Zhou, Zihao
    Zhang, Jiajia
    Yuan, Hao
    Wang, Xuan
    Wang, Lei
    Xiao, Jing
    ADVANCES IN SWARM INTELLIGENCE, ICSI 2023, PT II, 2023, 13969 : 427 - 438
  • [7] KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding
    Lee, Hyunjae
    Yoon, Jaewoong
    Hwang, Bonggyu
    Joe, Seongho
    Min, Seungjai
    Gwon, Youngjune
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 5551 - 5557
  • [8] ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
    Lu, Jiasen
    Batra, Dhruv
    Parikh, Devi
    Lee, Stefan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [9] Exploring Vision Language Pretraining with Knowledge Enhancement via Large Language Model
    Tung, Chuenyuet
    Lin, Yi
    Yin, Jianing
    Ye, Qiaoyuchen
    Chen, Hao
    TRUSTWORTHY ARTIFICIAL INTELLIGENCE FOR HEALTHCARE, TAI4H 2024, 2024, 14812 : 81 - 91
  • [10] Improved VLN-BERT with Reinforcing Endpoint Alignment for Vision-and-Language Navigation
    Jin, Chuan
    Yang, Boyuan
    Liu, Ruonan
    GENERALIZING FROM LIMITED RESOURCES IN THE OPEN WORLD, GLOW-IJCAI 2024, 2024, 2160 : 119 - 133