- [41] Bridge and Hint: Extending Pre-trained Language Models for Long-Range Code. Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2024), 2024, pp. 274-286.
- [42] What Do They Capture? A Structural Analysis of Pre-Trained Language Models for Source Code. 2022 ACM/IEEE 44th International Conference on Software Engineering (ICSE 2022), 2022, pp. 2377-2388.
- [43] What do pre-trained code models know about code? 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021), 2021, pp. 1332-1336.
- [44] Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code. Proceedings of the 30th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2022), 2022, pp. 1073-1084.
- [46] G-Tuning: Improving Generalization of Pre-trained Language Models with Generative Adversarial Network. Findings of the Association for Computational Linguistics: ACL 2023, 2023, pp. 4747-4755.
- [47] A Study of Pre-trained Language Models in Natural Language Processing. 2020 IEEE International Conference on Smart Cloud (SmartCloud 2020), 2020, pp. 116-121.
- [48] How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness? Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, vol. 34.
- [49] Entity Resolution Based on Pre-trained Language Models with Two Attentions. Web and Big Data, Part III (APWeb-WAIM 2023), 2024, vol. 14333, pp. 433-448.