MultiAICL: Multi-task Tuning for Augmented In-Context Learning in Text Style Transfer

Cited by: 0
Authors
Zhu, Linan [1 ]
Zhou, Zehai [1 ]
Chen, Xiangfan [1 ]
Guo, Xiaolei [1 ]
Kong, Xiangjie [1 ]
Affiliations
[1] Zhejiang Univ Technol, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
In-Context Learning; Text Style Transfer; Large Language Models;
DOI
10.1007/978-981-97-9437-9_5
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In-context learning (ICL) enhances the performance of large language models (LLMs) across various natural language processing (NLP) tasks by simply providing a few examples or instructions during inference. However, ICL still encounters significant challenges on text style transfer (TST) tasks, which require a high level of model reasoning. The existing ICL ability has not been developed further because LLMs lack a process of training and learning in context. To address these issues, we introduce Multi-Task Tuning for Augmented In-Context Learning (MultiAICL), a framework designed to enhance the ICL ability of models by simulating the supervised fine-tuning steps of LLMs. MultiAICL contains three main components: first, we construct example instructions for multiple tasks from the text corpus, where these examples take the form of text-label pairs; second, we propose the Multi-Task Tuning (MTT) module, which tunes the model by randomly combining example instructions; and third, we design the Augmented In-Context Learning (AICL) module, which incorporates different tasks into example templates to guide model inference. MultiAICL improves the ICL ability of LLMs while maintaining their generalization across multiple tasks, thus encouraging models to generate high-quality text. Extensive experiments show that MultiAICL achieves excellent results on all 6 TST tasks, even outperforming larger LLMs. The code and data are available at https://github.com/fuz999/NLPCC-2024-MultiAICL.
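The abstract describes three steps: forming text-label example instructions per task, randomly combining them for multi-task tuning, and templating them into an augmented ICL prompt at inference. The minimal sketch below illustrates only that prompt-construction idea under stated assumptions; the task names, example pool, template wording, and helper functions (e.g. build_example_instruction, sample_multitask_batch) are hypothetical and not taken from the paper or its released code.

```python
import random

# Hypothetical text-label example pools for two illustrative style-transfer
# tasks. The actual tasks, corpora, and templates used by MultiAICL are not
# specified in the abstract; these are placeholders for illustration only.
EXAMPLE_POOL = {
    "sentiment_transfer": [
        ("the food was terrible and cold.", "the food was delicious and warm."),
        ("i will never come back here.", "i will definitely come back here."),
    ],
    "formality_transfer": [
        ("gotta bounce, see ya!", "I have to leave now; goodbye."),
    ],
}

def build_example_instruction(task: str, text: str, label: str) -> str:
    """Format one text-label pair as an example instruction."""
    return f"Task: {task}\nInput: {text}\nOutput: {label}"

def sample_multitask_batch(k: int = 4, seed: int | None = None) -> list[str]:
    """Randomly combine example instructions across tasks, loosely mimicking
    the random mixing the abstract attributes to the MTT module."""
    rng = random.Random(seed)
    pairs = [(task, text, label)
             for task, examples in EXAMPLE_POOL.items()
             for text, label in examples]
    chosen = rng.sample(pairs, min(k, len(pairs)))
    return [build_example_instruction(*p) for p in chosen]

def build_icl_prompt(task: str, query: str, k: int = 4) -> str:
    """Assemble an augmented in-context prompt: a few multi-task
    demonstrations followed by the query in the same template."""
    demos = "\n\n".join(sample_multitask_batch(k))
    return f"{demos}\n\nTask: {task}\nInput: {query}\nOutput:"

if __name__ == "__main__":
    print(build_icl_prompt("sentiment_transfer", "the service was awful."))
```

In this sketch the tuning examples and the inference-time demonstrations share one template, which is the property the abstract emphasizes when it says the AICL module "incorporates different tasks into example templates"; how the paper actually feeds these prompts into fine-tuning is not reproduced here.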
Pages: 55-66
Number of pages: 12