50 items in total
- [21] ACTUNE: Uncertainty-Based Active Self-Training for Active Fine-Tuning of Pretrained Language Models. Proceedings of NAACL-HLT 2022, pp. 1422–1436.
- [22] Causal-Debias: Unifying Debiasing in Pretrained Language Models and Fine-tuning via Causal Invariant Learning. Proceedings of ACL 2023, Vol. 1, pp. 4227–4241.
- [24] Towards Low-Resource Automatic Program Repair with Meta-Learning and Pretrained Language Models. Proceedings of EMNLP 2023, pp. 6954–6968.
- [25] AgglutiFiT: Efficient Low-Resource Agglutinative Language Model Fine-Tuning. IEEE Access, 2020, 8: 148489–148499.
- [26] Evaluating the Effectiveness of Fine-Tuning Large Language Model for Domain-Specific Task. 2024 IEEE International Conference on Information Reuse and Integration for Data Science (IRI 2024), pp. 176–177.
- [28] Fine-tuning Happens in Tiny Subspaces: Exploring Intrinsic Task-specific Subspaces of Pre-trained Language Models. Proceedings of ACL 2023, Vol. 1, pp. 1701–1713.
- [29] Rebetiko Singer Identification: Fine-tuning and Explaining Deep Pretrained Transformer Models. Proceedings of the 19th International Audio Mostly Conference (AM 2024), pp. 285–291.
- [30] Go Beyond Plain Fine-Tuning: Improving Pretrained Models for Social Commonsense. 2021 IEEE Spoken Language Technology Workshop (SLT), pp. 1028–1035.