共 50 条
- [1] EFFICIENT UTILIZATION OF LARGE PRE-TRAINED MODELS FOR LOW RESOURCE ASR 2023 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING WORKSHOPS, ICASSPW, 2023,
- [2] ADAPTING PRE-TRAINED LANGUAGE MODELS TO LOW-RESOURCE TEXT SIMPLIFICATION: THE PATH MATTERS CONFERENCE ON LIFELONG LEARNING AGENTS, VOL 199, 2022, 199
- [5] ANALYZING ASR PRETRAINING FOR LOW-RESOURCE SPEECH-TO-TEXT TRANSLATION 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7909 - 7913
- [6] Pre-Trained Multilingual Sequence-to-Sequence Models: A Hope for Low-Resource Language Translation? FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 58 - 67
- [8] Extremely Low Resource Text simplification with Pre-trained Transformer Language Model PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING (IALP), 2019, : 53 - 58
- [9] Pre-trained Text Embeddings for Enhanced Text-to-Speech Synthesis INTERSPEECH 2019, 2019, : 4430 - 4434
- [10] SPEECH SENTIMENT ANALYSIS VIA PRE-TRAINED FEATURES FROM END-TO-END ASR MODELS 2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020, : 7149 - 7153