Compositional Zero-Shot Domain Transfer with Text-to-Text Models

Cited by: 1
Authors
Liu, Fangyu [1 ]
Liu, Qianchu [2 ]
Bannur, Shruthi [2 ]
Perez-Garcia, Fernando [2 ]
Usuyama, Naoto [3 ]
Zhang, Sheng [3 ]
Naumann, Tristan [3 ]
Nori, Aditya [2 ]
Poon, Hoifung [3 ]
Alvarez-Valle, Javier [2 ]
Oktay, Ozan [2 ]
Hyland, Stephanie L. [2 ]
Affiliations
[1] University of Cambridge, Cambridge, England
[2] Microsoft Health Futures, Cambridge, England
[3] Microsoft Health Futures, Redmond, WA, USA
Keywords
721.1 Computer Theory (includes Computational Logic, Automata Theory, Switching Theory, Programming Theory); 723.2 Data Processing and Image Processing; 723.4 Artificial Intelligence
DOI
10.1162/tacl_a_00585
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Label scarcity is a bottleneck for improving task performance in specialized domains. We propose a novel compositional transfer learning framework (DoT5) for zero-shot domain transfer. Without access to in-domain labels, DoT5 jointly learns domain knowledge (from masked language modelling of unlabelled in-domain free text) and task knowledge (from task training on more readily available general-domain data) in a multi-task manner. To improve the transferability of task training, we design a strategy named NLGU: we simultaneously train natural language generation (NLG) for in-domain label-to-data generation, which enables data augmentation for self-finetuning, and natural language understanding (NLU) for label prediction. We evaluate DoT5 on the biomedical domain and the resource-lean subdomain of radiology, focusing on natural language inference, text summarization, and embedding learning. DoT5 demonstrates the effectiveness of compositional transfer learning through multi-task learning. In particular, DoT5 outperforms the current state of the art in zero-shot transfer by over 7 absolute points in accuracy on RadNLI. We validate DoT5 with ablations and a case study demonstrating its ability to solve challenging NLI examples requiring in-domain expertise.
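The NLGU strategy described in the abstract casts both directions of an NLI task as text-to-text pairs for the same model. The sketch below is illustrative only (not the authors' code); the task prefixes and string formats are assumptions chosen to show how one example can yield both an NLU pair (sentence pair in, label out) and an NLG pair (label plus premise in, hypothesis out, usable for label-to-data augmentation).

```python
# Hypothetical text-to-text formatting for the NLGU strategy.
# Prefixes ("nli", "generate") are illustrative assumptions, not the
# prompts used in the paper.

def make_nlu_example(premise: str, hypothesis: str, label: str):
    """NLU direction: predict the label from the sentence pair."""
    source = f"nli premise: {premise} hypothesis: {hypothesis}"
    target = label
    return source, target

def make_nlg_example(premise: str, hypothesis: str, label: str):
    """NLG direction: generate a hypothesis conditioned on label and
    premise; generated pairs can serve as augmentation data for
    self-finetuning."""
    source = f"generate {label} hypothesis: {premise}"
    target = hypothesis
    return source, target

# One annotated general-domain example produces training pairs for
# both directions of the multi-task objective.
example = ("No pleural effusion is seen.",
           "There is a pleural effusion.",
           "contradiction")
nlu_src, nlu_tgt = make_nlu_example(*example)
nlg_src, nlg_tgt = make_nlg_example(*example)
```

In a T5-style setup, both pairs would be mixed into one training stream alongside the in-domain masked-language-modelling objective, so a single set of weights absorbs task and domain knowledge jointly.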
Pages: 1097-1113
Page count: 17
Related Papers
50 entries in total
  • [1] Model-Generated Pretraining Signals Improves Zero-Shot Generalization of Text-to-Text Transformers
    Gong, Linyuan
    Xiong, Chenyan
    Liu, Xiaodong
    Bajaj, Payal
    Xie, Yiqing
    Cheung, Alvin
    Gao, Jianfeng
    Song, Xia
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023): LONG PAPERS, VOL 1, 2023, : 12933 - 12950
  • [2] Language-Aware Soft Prompting: Text-to-Text Optimization for Few- and Zero-Shot Adaptation of V&L Models
    Bulat, Adrian
    Tzimiropoulos, Georgios
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (04) : 1108 - 1125
  • [4] Cross-Domain Transfer of Generative Explanations Using Text-to-Text Models
    Erliksson, Karl Fredrik
    Arpteg, Anders
    Matskin, Mihhail
    Payberah, Amir H.
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2021), 2021, 12801 : 76 - 89
  • [5] Zero-Shot Turkish Text Classification
    Birim, Ahmet
    Erden, Mustafa
    Arslan, Levent M.
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [6] Text-to-Image Diffusion Models are Zero-Shot Classifiers
    Clark, Kevin
    Jaini, Priyank
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [7] Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators
    Khachatryan, Levon
    Movsisyan, Andranik
    Tadevosyan, Vahram
    Henschel, Roberto
    Wang, Zhangyang
    Navasardyan, Shant
    Shi, Humphrey
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 15908 - 15918
  • [8] LiT: Zero-Shot Transfer with Locked-image text Tuning
    Zhai, Xiaohua
    Wang, Xiao
    Mustafa, Basil
    Steiner, Andreas
    Keysers, Daniel
    Kolesnikov, Alexander
    Beyer, Lucas
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 18102 - 18112
  • [9] MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale
    Rueckle, Andreas
    Pfeiffer, Jonas
    Gurevych, Iryna
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 2471 - 2486
  • [10] Zero-Shot Text-to-Image Generation
    Ramesh, Aditya
    Pavlov, Mikhail
    Goh, Gabriel
    Gray, Scott
    Voss, Chelsea
    Radford, Alec
    Chen, Mark
    Sutskever, Ilya
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139