共 50 条
- [21] Fine-tuning Pre-trained Models for Robustness under Noisy Labels PROCEEDINGS OF THE THIRTY-THIRD INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2024, 2024, : 3643 - 3651
- [22] Debiasing Pre-Trained Language Models via Efficient Fine-Tuning PROCEEDINGS OF THE SECOND WORKSHOP ON LANGUAGE TECHNOLOGY FOR EQUALITY, DIVERSITY AND INCLUSION (LTEDI 2022), 2022, : 59 - 69
- [23] Exploiting Syntactic Information to Boost the Fine-tuning of Pre-trained Models 2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 575 - 582
- [24] FINE-TUNING OF PRE-TRAINED END-TO-END SPEECH RECOGNITION WITH GENERATIVE ADVERSARIAL NETWORKS 2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6204 - 6208
- [25] Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, 2023, : 5448 - 5458
- [26] Novel Fine-Tuning Strategy on Pre-trained Protein Model Enhances ACP Functional Type Classification BIOINFORMATICS RESEARCH AND APPLICATIONS, PT I, ISBRA 2024, 2024, 14954 : 371 - 382
- [27] The Impact of Padding on Image Classification by Using Pre-trained Convolutional Neural Networks IMAGE ANALYSIS AND PROCESSING - ICIAP 2019, PT II, 2019, 11752 : 337 - 344
- [29] Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 8506 - 8515
- [30] Pathologies of Pre-trained Language Models in Few-shot Fine-tuning PROCEEDINGS OF THE THIRD WORKSHOP ON INSIGHTS FROM NEGATIVE RESULTS IN NLP (INSIGHTS 2022), 2022, : 144 - 153