20 items in total
- [1] Learning to Imagine: Visually-Augmented Natural Language Generation. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 9468-9481
- [2] Evaluation of Pretrained Large Language Models in Embodied Planning Tasks. Artificial General Intelligence (AGI 2023), 2023, 13921: 222-232
- [4] The Use of Clinical Language Models Pretrained on Institutional EHR Data for Downstream Tasks. 2024 21st International Joint Conference on Computer Science and Software Engineering (JCSSE 2024), 2024: 648-655
- [5] Visually Grounded Language Learning: A Review of Language Games, Datasets, Tasks, and Models. Journal of Artificial Intelligence Research, 2024, 79: 173-239
- [7] Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021
- [8] Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks. Findings of the Association for Computational Linguistics (ACL 2023), 2023: 14196-14210
- [10] From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023): Long Papers, Vol. 1, 2023: 11737-11762