50 items in total
- [41] Efficient Utilization of Large Pre-trained Models for Low Resource ASR. 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023.
- [42] The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [43] Exploring Accurate and Generic Simile Knowledge from Pre-trained Language Models. Chinese Computational Linguistics (CCL 2023), 2023, 14232: 348-363.
- [45] Synergizing Large Language Models and Pre-Trained Smaller Models for Conversational Intent Discovery. Findings of the Association for Computational Linguistics: ACL 2024, 2024: 14133-14147.
- [46] From Cloze to Comprehension: Retrofitting Pre-trained Masked Language Models to Pre-trained Machine Reader. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
- [47] Annotating Columns with Pre-trained Language Models. Proceedings of the 2022 International Conference on Management of Data (SIGMOD '22), 2022: 1493-1503.
- [48] Clinical efficacy of pre-trained large language models through the lens of aphasia. Scientific Reports, 2024, 14(1).
- [50] Grounding Ontologies with Pre-Trained Large Language Models for Activity Based Intelligence. Signal Processing, Sensor/Information Fusion, and Target Recognition XXXIII, 2024, 13057.