Efficiency-oriented approaches for self-supervised speech representation learning

Cited by: 0
Authors
Lugo, Luis [1]
Vielzeuf, Valentin [1]
Affiliations
[1] Orange, 4 Rue du Clos Courtel, Cesson-Sevigne, Brittany, 35510, France
Keywords
Adversarial machine learning; Contrastive learning; Federated learning; Knowledge representation; Semi-supervised learning; Speech processing; Transfer learning
DOI
10.1007/s10772-024-10121-9
Abstract
Self-supervised learning enables the training of large neural models without the need for large labeled datasets. It has generated breakthroughs in several fields, including computer vision, natural language processing, biology, and speech. In particular, the state of the art in several speech processing applications, such as automatic speech recognition and speaker identification, consists of models whose latent representations are learned with self-supervised approaches. Several configurations exist for self-supervised learning in speech, including contrastive, predictive, and multilingual approaches. Most existing approaches, however, share a crucial limitation: their high computational cost. This cost limits the deployment of models, the size of the training datasets, and the number of research groups that can afford to work with large self-supervised models. It also carries an environmental cost through high energy consumption. Efforts to reduce these costs comprise the optimization of existing models, neural architecture efficiency, improvements in fine-tuning for speech processing tasks, and data efficiency. Despite these efforts, more work is needed to address the high computational cost of self-supervised representation learning. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
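To make the contrastive configuration mentioned in the abstract concrete, the sketch below is a minimal, illustrative InfoNCE-style contrastive objective over masked speech frames of the kind used by models such as wav2vec 2.0. It is not taken from the paper; the tensor names (`context`, `quantized`, `negatives`), the temperature value, and the toy shapes are assumptions made only for the example.

```python
# Minimal sketch (assumption, not the paper's implementation): an InfoNCE-style
# contrastive objective as used by contrastive self-supervised speech models
# such as wav2vec 2.0. All tensor names and shapes are illustrative.
import torch
import torch.nn.functional as F


def contrastive_loss(context, quantized, negatives, temperature=0.1):
    """InfoNCE-style loss over masked time steps.

    context:   (B, T, D) contextual representations at masked positions
    quantized: (B, T, D) positive (true) targets for those positions
    negatives: (B, T, K, D) K distractor targets sampled from other positions
    """
    # Cosine similarity with the positive target -> (B, T)
    pos = F.cosine_similarity(context, quantized, dim=-1)
    # Cosine similarity with each distractor -> (B, T, K)
    neg = F.cosine_similarity(context.unsqueeze(2), negatives, dim=-1)
    # Stack logits with the positive in slot 0 -> (B, T, 1 + K)
    logits = torch.cat([pos.unsqueeze(-1), neg], dim=-1) / temperature
    # The model must identify the true target (index 0) among the distractors.
    targets = torch.zeros(logits.shape[:-1], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))


# Toy usage with random tensors standing in for encoder outputs.
B, T, K, D = 2, 50, 10, 256
loss = contrastive_loss(torch.randn(B, T, D), torch.randn(B, T, D),
                        torch.randn(B, T, K, D))
print(float(loss))
```

In wav2vec 2.0, for instance, the distractors are quantized latents drawn from other masked positions of the same utterance, and the contrastive term is combined with a codebook diversity loss; the sketch omits those details for brevity.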
Pages: 765-779
Page count: 14
Related papers (items [41]-[50] of 50)
  • [41] Ozcelik, Timoteos Onur; Gokberk, Berk; Akarun, Lale. Self-Supervised Dense Visual Representation Learning. 32nd IEEE Signal Processing and Communications Applications Conference (SIU 2024), 2024.
  • [42] Zhang, Wenlin; Liu, Xuepeng; Niu, Tong; Chen, Qi; Qu, Dan. Self-supervised speech representation learning based on positive sample comparison and masking reconstruction. Tongxin Xuebao/Journal on Communications, 2022, 43(07): 163-171.
  • [43] Sadhu, Samik; He, Di; Huang, Che-Wei; Mallidi, Sri Harish; Wu, Minhua; Rastrow, Ariya; Stolcke, Andreas; Droppo, Jasha; Maas, Roland. Wav2vec-C: A Self-supervised Model for Speech Representation Learning. Interspeech 2021, 2021: 711-715.
  • [44] Zaiem, Salah; Parcollet, Titouan; Essid, Slim. Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning. Interspeech 2022, 2022: 669-673.
  • [45] Zhu, Qiu-Shi; Zhang, Jie; Zhang, Zi-Qiang; Dai, Li-Rong. A Joint Speech Enhancement and Self-Supervised Representation Learning Framework for Noise-Robust Speech Recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2023, 31: 1927-1939.
  • [46] Chen, Zhengyang; Chen, Sanyuan; Wu, Yu; Qian, Yao; Wang, Chengyi; Liu, Shujie; Qian, Yanmin; Zeng, Michael. Large-Scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 6147-6151.
  • [47] Kim, June-Woo; Chung, Hoon; Jung, Ho-Young. Spectral Salt-and-Pepper Patch Masking for Self-Supervised Speech Representation Learning. Mathematics, 2023, 11(15).
  • [48] Zaiem, Salah; Kemiche, Youcef; Parcollet, Titouan; Essid, Slim; Ravanelli, Mirco. Speech Self-Supervised Representation Benchmarking: Are We Doing it Right? Interspeech 2023, 2023: 2873-2877.
  • [49] Qian, Kaizhi; Zhang, Yang; Gao, Heting; Ni, Junrui; Lai, Cheng-I Jeff; Cox, David; Hasegawa-Johnson, Mark; Chang, Shiyu. ContentVec: An Improved Self-Supervised Speech Representation by Disentangling Speakers. International Conference on Machine Learning, Vol. 162, 2022.
  • [50] Wu, Haibin; Zheng, Bo; Li, Xu; Wu, Xixin; Lee, Hung-Yi; Meng, Helen. Characterizing the Adversarial Vulnerability of Speech Self-Supervised Learning. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 3164-3168.