MultiAICL: Multi-task Tuning for Augmented In-Context Learning in Text Style Transfer

Cited by: 0
Authors
Zhu, Linan [1 ]
Zhou, Zehai [1 ]
Chen, Xiangfan [1 ]
Guo, Xiaolei [1 ]
Kong, Xiangjie [1 ]
Affiliations
[1] Zhejiang Univ Technol, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
In-Context Learning; Text Style Transfer; Large Language Models;
DOI
10.1007/978-981-97-9437-9_5
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In-context learning (ICL) enhances the performance of large language models (LLMs) across various natural language processing (NLP) tasks simply by providing a few examples or instructions at inference time. However, ICL still faces significant challenges on text style transfer (TST) tasks, which demand a high level of model reasoning. This ICL ability remains underdeveloped because LLMs are never explicitly trained to learn from context. To address these issues, we introduce Multi-Task Tuning for Augmented In-Context Learning (MultiAICL), a framework designed to strengthen the ICL ability of a model by simulating the supervised fine-tuning steps of LLMs. MultiAICL has three main components: first, we construct example instructions for multiple tasks from a text corpus, where each example takes the form of a text-label pair; second, we propose the Multi-Task Tuning (MTT) module, which tunes the model on randomly combined example instructions; and third, we design the Augmented In-Context Learning (AICL) module, which incorporates different tasks into example templates for model inference. MultiAICL improves the ICL ability of LLMs while preserving their generalization across multiple tasks, encouraging models to generate high-quality text. Extensive experiments show that MultiAICL achieves excellent results on all 6 TST tasks, even outperforming larger LLMs. The code and data are available at https://github.com/fuz999/NLPCC-2024-MultiAICL.
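The abstract above outlines a three-stage pipeline (example construction, Multi-Task Tuning, Augmented In-Context Learning). The minimal Python sketch below illustrates one plausible reading of those stages; it is not the authors' released code (see the GitHub link above), and the example pool, the prompt template, and the fine_tune placeholder are all assumptions made for illustration.

import random

# Stage 1 (assumed): example instructions as text-label pairs, grouped by task.
EXAMPLES = {
    "sentiment_transfer": [("the food was terrible", "the food was delicious")],
    "formality_transfer": [("gonna be late, sorry", "I apologize; I will be late.")],
    # ... further TST and auxiliary tasks drawn from the text corpus
}

def format_example(task, src, tgt):
    # Render one text-label pair as an instruction (template is hypothetical).
    return f"Task: {task}\nInput: {src}\nOutput: {tgt}"

def mtt_sequence(n_tasks=3, shots=2):
    # Stage 2 (MTT, assumed): randomly combine example instructions from
    # several tasks into one training sequence, so the model rehearses
    # learning from context during supervised fine-tuning.
    tasks = random.sample(list(EXAMPLES), k=min(n_tasks, len(EXAMPLES)))
    parts = [format_example(t, s, y)
             for t in tasks
             for s, y in random.sample(EXAMPLES[t], k=min(shots, len(EXAMPLES[t])))]
    return "\n\n".join(parts)

def aicl_prompt(task, query, shots=2):
    # Stage 3 (AICL, assumed): at inference, prepend task demonstrations
    # to the query using the same example template.
    demos = [format_example(task, s, y) for s, y in EXAMPLES[task][:shots]]
    return "\n\n".join(demos + [f"Task: {task}\nInput: {query}\nOutput:"])

# fine_tune(model, data) below is a hypothetical stand-in for any standard
# supervised fine-tuning loop; MultiAICL's actual training objective may differ.
#   train_data = [mtt_sequence() for _ in range(10000)]
#   fine_tune(model, train_data)
print(aicl_prompt("sentiment_transfer", "the service was awful"))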
Pages: 55-66
Page count: 12