MultiAICL: Multi-task Tuning for Augmented In-Context Learning in Text Style Transfer

Cited by: 0
Authors
Zhu, Linan [1 ]
Zhou, Zehai [1 ]
Chen, Xiangfan [1 ]
Guo, Xiaolei [1 ]
Kong, Xiangjie [1 ]
Affiliations
[1] Zhejiang University of Technology, Hangzhou, Zhejiang, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
In-Context Learning; Text Style Transfer; Large Language Models;
DOI
10.1007/978-981-97-9437-9_5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
In-context learning (ICL) enhances the performance of large language models (LLMs) across various natural language processing (NLP) tasks by simply demonstrating a few examples or instructions during inference. However, ICL still encounters significant challenges on text style transfer (TST) tasks, which require high levels of model reasoning. Moreover, the existing ICL ability of LLMs remains underdeveloped because they lack a process of training and learning in context. To address these issues, we introduce Multi-Task Tuning for Augmented In-Context Learning (MultiAICL), a framework designed to enhance the ICL ability of a model by simulating the supervised fine-tuning steps of LLMs. MultiAICL contains three main components: first, we construct example instructions for multiple tasks from the text corpus, where these examples take the form of text-label pairs; second, we propose the Multi-Task Tuning (MTT) module, which tunes the model by randomly combining example instructions; and third, we design the Augmented In-Context Learning (AICL) module, which incorporates different tasks into example templates for model inference. MultiAICL improves the ICL ability of LLMs while maintaining their generalization across multiple tasks, thus encouraging models to generate high-quality text. Extensive experiments show that MultiAICL achieves excellent results on all 6 TST tasks, even outperforming larger LLMs. The code and data are available at https://github.com/fuz999/NLPCC-2024-MultiAICL.
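Since the abstract describes the framework only at a high level, the following minimal Python sketch illustrates how the three components might fit together. The data, function names, and prompt templates here are hypothetical illustrations, not the authors' implementation; the actual code lives in the linked repository.

```python
# Minimal sketch of the MultiAICL pipeline as described in the abstract.
# All function names, the prompt templates, and the toy data are
# assumptions for illustration only -- see the paper's repository at
# https://github.com/fuz999/NLPCC-2024-MultiAICL for the real code.
import random

# (1) Example instructions for multiple tasks, stored as text-label pairs.
TASK_EXAMPLES = {
    "sentiment_transfer": [
        ("the food was terrible", "the food was delicious"),
    ],
    "formality_transfer": [
        ("gotta go, see ya", "I must leave now; goodbye."),
    ],
}

def build_mtt_sample(task_examples, k=2, seed=None):
    """(2) Multi-Task Tuning (MTT): randomly combine example instructions
    from different tasks into one tuning sample (hypothetical scheme)."""
    rng = random.Random(seed)
    tasks = rng.sample(list(task_examples), k=min(k, len(task_examples)))
    parts = []
    for task in tasks:
        src, tgt = rng.choice(task_examples[task])
        parts.append(f"Task: {task}\nInput: {src}\nOutput: {tgt}")
    return "\n\n".join(parts)

def build_aicl_prompt(task, demonstrations, query):
    """(3) Augmented In-Context Learning (AICL): fill a task-specific
    example template with demonstrations for inference (template assumed)."""
    demos = "\n\n".join(
        f"Input: {src}\nOutput: {tgt}" for src, tgt in demonstrations
    )
    return f"Task: {task}\n\n{demos}\n\nInput: {query}\nOutput:"

if __name__ == "__main__":
    # A tuning sample mixing randomly chosen tasks, then an inference prompt.
    print(build_mtt_sample(TASK_EXAMPLES, seed=0))
    print("---")
    print(build_aicl_prompt(
        "sentiment_transfer",
        TASK_EXAMPLES["sentiment_transfer"],
        "the service was slow",
    ))
```

Under these assumptions, MTT samples would be fed to a standard supervised fine-tuning loop, while AICL prompts would be passed directly to the tuned model at inference time.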
Pages: 55-66
Number of pages: 12