50 records in total
- [22] Improving Speech Separation with Knowledge Distilled from Self-supervised Pre-trained Models. 2022 13th International Symposium on Chinese Spoken Language Processing (ISCSLP), 2022, pp. 329-333.
- [24] Explore the Use of Self-supervised Pre-trained Acoustic Features on Disguised Speech Detection. Biometric Recognition (CCBR 2021), vol. 12878, 2021, pp. 483-490.
- [25] Knowledge Distillation for Neural Transducers from Large Self-supervised Pre-trained Models. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 8527-8531.