Not All Demonstration Examples are Equally Beneficial: Reweighting Demonstration Examples for In-Context Learning

Citations: 0
Authors
Yang, Zhe [1 ]
Dai, Damai [1 ]
Wang, Peiyi [1 ]
Sui, Zhifang [1 ]
Affiliations
[1] Peking Univ, Sch Comp Sci, Natl Key Lab Multimedia Informat Proc, Beijing, Peoples R China
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large Language Models (LLMs) have recently gained the In-Context Learning (ICL) ability as they scale up, allowing them to quickly adapt to downstream tasks with only a few demonstration examples prepended in the input sequence. Nonetheless, the current practice of ICL treats all demonstration examples equally, which still warrants improvement, as the quality of examples is usually uneven. In this paper, we investigate how to determine approximately optimal weights for demonstration examples and how to apply them during ICL. To assess the quality of weights in the absence of additional validation data, we design a masked self-prediction (MSP) score that exhibits a strong correlation with the final ICL performance. To expedite the weight-searching process, we discretize the continuous weight space and adopt beam search. With approximately optimal weights obtained, we further propose two strategies to apply them to demonstrations at different model positions. Experimental results on 8 text classification tasks show that our approach outperforms conventional ICL by a large margin. Our code is publicly available at https://github.com/Zhe-Young/WICL.
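The abstract describes discretizing the continuous weight space and running beam search, guided by the MSP score, to find approximately optimal demonstration weights. A minimal sketch of such a search, where `score_fn` is a placeholder standing in for the paper's MSP score, and the weight grid, beam size, and neutral padding weight of 1.0 are illustrative assumptions rather than the authors' settings:

```python
from typing import Callable, List, Tuple

Weights = Tuple[float, ...]

def beam_search_weights(
    n_examples: int,
    grid: List[float],
    score_fn: Callable[[Weights], float],
    beam_size: int = 3,
) -> Weights:
    """Search a discretized weight space with beam search.

    Weights are assigned one demonstration at a time; after each step,
    only the `beam_size` highest-scoring partial assignments survive.
    """
    beams: List[Weights] = [()]  # start from the empty assignment
    for step in range(n_examples):
        # Extend every surviving partial assignment with every grid value.
        candidates = [beam + (w,) for beam in beams for w in grid]
        # Score partial assignments by padding unassigned positions with a
        # neutral weight of 1.0 (an illustrative choice, not the paper's).
        pad = n_examples - step - 1
        candidates.sort(key=lambda c: score_fn(c + (1.0,) * pad), reverse=True)
        beams = candidates[:beam_size]
    return beams[0]
```

With a toy score function that rewards closeness to a known target weighting, the search recovers that target from the grid; in the real method the scoring call would invoke the LLM to compute the MSP score for each candidate weighting.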
Pages: 13209-13221
Page count: 13