Total: 50 records
- [1] Effect of Visual Extensions on Natural Language Understanding in Vision-and-Language Models. 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), 2021: 2189–2196.
- [2] VLN↻BERT: A Recurrent Vision-and-Language BERT for Navigation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 1643–1653.
- [3] Airbert: In-domain Pretraining for Vision-and-Language Navigation. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 1614–1623.
- [4] Does Vision-and-Language Pretraining Improve Lexical Grounding? Findings of the Association for Computational Linguistics: EMNLP 2021, 2021: 4357–4366.
- [5] Exploring the Effect of Primitives for Compositional Generalization in Vision-and-Language. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023: 19092–19101.
- [6] Reinforced Vision-and-Language Navigation Based on Historical BERT. Advances in Swarm Intelligence (ICSI 2023), Part II, 2023, 13969: 427–438.
- [7] KoreALBERT: Pretraining a Lite BERT Model for Korean Language Understanding. 2020 25th International Conference on Pattern Recognition (ICPR), 2021: 5551–5557.
- [8] ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019, 32.
- [9] Exploring Vision Language Pretraining with Knowledge Enhancement via Large Language Model. Trustworthy Artificial Intelligence for Healthcare (TAI4H 2024), 2024, 14812: 81–91.
- [10] Improved VLN-BERT with Reinforcing Endpoint Alignment for Vision-and-Language Navigation. Generalizing from Limited Resources in the Open World (GLOW-IJCAI 2024), 2024, 2160: 119–133.