Improving Pretrained Language Model Fine-Tuning With Noise Stability Regularization

Cited by: 2
Authors:
Hua, Hang [1 ]
Li, Xingjian [2 ]
Dou, Dejing [3 ]
Xu, Cheng-Zhong [4 ]
Luo, Jiebo [1 ]
Affiliations:
[1] Univ Rochester, Dept Comp Sci, Rochester, NY 14627 USA
[2] Carnegie Mellon Univ, Computat Biol Dept, Pittsburgh, PA 15213 USA
[3] BCG Greater China, Beijing 100027, Peoples R China
[4] Univ Macau, State Key Lab IOTSC, Fac Sci & Technol, Macau, Peoples R China
Keywords:
Stability analysis; Task analysis; Training; Transformers; Gaussian distribution; Standards; Optimization; Domain generalization; fine-tuning; in-domain generalization; pretrained language models (PLMs); regularization; neural networks
DOI: 10.1109/TNNLS.2023.3330926
Chinese Library Classification (CLC): TP18 [Theory of Artificial Intelligence]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
The advent of large-scale pretrained language models (PLMs) has contributed greatly to progress in natural language processing (NLP). Despite their recent success and wide adoption, fine-tuning a PLM often suffers from overfitting, which leads to poor generalizability because the model is extremely complex while downstream tasks provide only limited training samples. To address this problem, we propose a novel and effective fine-tuning framework, named layerwise noise stability regularization (LNSR). Specifically, our method perturbs the input of the network with standard Gaussian noise or in-manifold noise in the representation space and regularizes the output of each layer of the language model. We provide theoretical and experimental analyses to support the effectiveness of our method. The empirical results show that our proposed method outperforms several state-of-the-art algorithms, such as L2 regularization toward the starting point (L2-SP), Mixout, FreeLB, and smoothness-inducing adversarial regularization with Bregman proximal point optimization (SMART). In addition to evaluating the proposed method on relatively simple text classification tasks, as in prior works, we further evaluate its effectiveness on more challenging question-answering (QA) tasks, which are more difficult and provide a larger number of training examples for tuning a well-generalized model. Furthermore, the empirical results indicate that our proposed method can improve the domain generalization ability of language models.
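
To make the abstract's description more concrete, the following Python sketch (PyTorch with Hugging Face Transformers) illustrates the layerwise noise stability idea: run a clean forward pass and a Gaussian-perturbed forward pass, then penalize the distance between the corresponding layer outputs. This is a minimal sketch under stated assumptions, not the authors' implementation; the base model (bert-base-uncased), the noise scale sigma, and the weight reg_weight are illustrative choices, and the in-manifold noise variant mentioned in the abstract is not shown.

import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative base model and hyperparameters (assumptions, not values from the paper).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2, output_hidden_states=True
)

def lnsr_loss(input_ids, attention_mask, labels, sigma=0.1, reg_weight=1.0):
    # Clean forward pass: task loss plus the hidden states of every layer.
    clean = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels)

    # Noisy forward pass: add standard Gaussian noise to the input word embeddings
    # and run the perturbed embeddings through the same encoder.
    embeds = model.get_input_embeddings()(input_ids)
    noisy = model(
        inputs_embeds=embeds + sigma * torch.randn_like(embeds),
        attention_mask=attention_mask,
    )

    # Layerwise stability penalty: mean squared distance between the clean and noisy
    # outputs of each Transformer layer (hidden_states[0] is the embedding output).
    stability = sum(
        F.mse_loss(h_noisy, h_clean)
        for h_clean, h_noisy in zip(clean.hidden_states[1:], noisy.hidden_states[1:])
    )
    return clean.loss + reg_weight * stability

# Example usage on a toy batch (hypothetical sentences and labels).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["a great movie", "a dull movie"], return_tensors="pt", padding=True)
loss = lnsr_loss(batch["input_ids"], batch["attention_mask"], labels=torch.tensor([1, 0]))
loss.backward()  # gradients flow through both the clean and the noisy pass

In a fine-tuning loop, this combined loss would simply replace the plain task loss at each optimization step.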
Pages: 1898-1910
Number of pages: 13
Related Papers (50 records in total):
  • [1] Noise Stability Regularization for Improving BERT Fine-tuning
    Hua, Hang
    Li, Xingjian
    Dou, Dejing
    Xu, Chengzhong
    Luo, Jiebo
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 3229 - 3241
  • [2] CONVFIT: Conversational Fine-Tuning of Pretrained Language Models
    Vulic, Ivan
    Su, Pei-Hao
    Coope, Sam
    Gerz, Daniela
    Budzianowski, Pawel
    Casanueva, Inigo
    Mrksic, Nikola
    Wen, Tsung-Hsien
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1151 - 1168
  • [3] Noise-Robust Fine-Tuning of Pretrained Language Models via External Guidance
    Wang, Song
    Tan, Zhen
    Guo, Ruocheng
    Li, Jundong
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (EMNLP 2023), 2023, : 12528 - 12540
  • [4] DR-Tune: Improving Fine-tuning of Pretrained Visual Models by Distribution Regularization with Semantic Calibration
    Zhou, Nan
    Chen, Jiaxin
    Huang, Di
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 1547 - 1556
  • [5] Improving Universal Language Model Fine-Tuning using Attention Mechanism
    Santos, Flavio A. O.
    Ponce-Guevara, K. L.
    Macedo, David
    Zanchettin, Cleber
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [6] Improving cross-lingual language understanding with consistency regularization-based fine-tuning
    Zheng, Bo
    Che, Wanxiang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 : 3621 - 3639
  • [7] Improving cross-lingual language understanding with consistency regularization-based fine-tuning
    Zheng, Bo
    Che, Wanxiang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2023, 14 (10) : 3621 - 3639
  • [8] GO BEYOND PLAIN FINE-TUNING: IMPROVING PRETRAINED MODELS FOR SOCIAL COMMONSENSE
    Chang, Ting-Yun
    Liu, Yang
    Gopalakrishnan, Karthik
    Hedayatnia, Behnam
    Zhou, Pei
    Hakkani-Tur, Dilek
    2021 IEEE SPOKEN LANGUAGE TECHNOLOGY WORKSHOP (SLT), 2021, : 1028 - 1035
  • [9] Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting
    Chen, Sanyuan
    Hou, Yutai
    Cui, Yiming
    Che, Wanxiang
    Liu, Ting
    Yu, Xiangzhan
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 7870 - 7881
  • [10] Debiased Fine-Tuning for Vision-Language Models by Prompt Regularization
    Zhu, Beier
    Niu, Yulei
    Lee, Saeil
    Hur, Minhoe
    Zhang, Hanwang
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 3, 2023, : 3834 - 3842