MultiAICL: Multi-task Tuning for Augmented In-Context Learning in Text Style Transfer

Cited: 0
Authors
Zhu, Linan [1 ]
Zhou, Zehai [1 ]
Chen, Xiangfan [1 ]
Guo, Xiaolei [1 ]
Kong, Xiangjie [1 ]
Affiliations
[1] Zhejiang Univ Technol, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
In-Context Learning; Text Style Transfer; Large Language Models;
DOI
10.1007/978-981-97-9437-9_5
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
In-context learning (ICL) enhances the performance of large language models (LLMs) across various natural language processing (NLP) tasks by simply demonstrating a few examples or instructions at inference time. However, ICL still faces significant challenges on text style transfer (TST) tasks, which demand a high level of model reasoning. The existing ICL ability of LLMs remains underdeveloped because they lack a process of training and learning in context. To address these issues, we introduce Multi-Task Tuning for Augmented In-Context Learning (MultiAICL), a framework designed to enhance model ICL ability by simulating the supervised fine-tuning steps of LLMs. MultiAICL contains three main components: first, we construct example instructions for multiple tasks from the text corpus, where these examples take the form of text-label pairs; second, we propose the Multi-Task Tuning (MTT) module, which tunes the model by randomly combining example instructions; and third, we design the Augmented In-Context Learning (AICL) module, which incorporates different tasks into example templates for model inference. MultiAICL improves the ICL ability of LLMs while maintaining their generalization across multiple tasks, thus encouraging models to generate high-quality text. Extensive experiments show that MultiAICL achieves excellent results on all 6 TST tasks, even outperforming larger LLMs. The code and data are available at https://github.com/fuz999/NLPCC-2024-MultiAICL.
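To make the pipeline described in the abstract concrete, below is a minimal, hypothetical Python sketch of the text-label example-instruction idea: demonstrations from several TST-related tasks are stored as (input, output) pairs and randomly mixed into a single prompt, in the spirit of combining example instructions across tasks. All task names, example sentences, and template wording here are illustrative assumptions rather than the authors' implementation; the actual MultiAICL code is in the repository linked above.

import random

# Hypothetical (text, label) example instructions for two style-transfer tasks.
# These pairs and task names are invented for illustration only.
TASK_EXAMPLES = {
    "sentiment transfer": [
        ("the food was terrible and cold.", "the food was delicious and warm."),
        ("i will never come back here.", "i will definitely come back here."),
    ],
    "formality transfer": [
        ("gotta go, see ya!", "I must leave now; goodbye."),
    ],
}

def build_prompt(query: str, query_task: str, n_examples: int = 3, seed: int = 0) -> str:
    """Randomly mix example instructions from multiple tasks, then append the query."""
    rng = random.Random(seed)
    # Flatten all tasks into one pool of (task, source, target) demonstrations.
    pool = [(task, src, tgt) for task, pairs in TASK_EXAMPLES.items() for src, tgt in pairs]
    demos = rng.sample(pool, min(n_examples, len(pool)))
    lines = []
    for task, src, tgt in demos:
        lines.append(f"Task: {task}\nInput: {src}\nOutput: {tgt}\n")
    # The query is appended in the same template so the model completes the output.
    lines.append(f"Task: {query_task}\nInput: {query}\nOutput:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("the service was awful.", "sentiment transfer"))

The same prompt-assembly routine could serve both stages sketched in the abstract: during tuning, the randomly combined demonstrations form training instances, while at inference the query is slotted into the shared template.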
Pages: 55-66
Number of pages: 12