Efficiency-oriented approaches for self-supervised speech representation learning

Cited by: 0
Authors
Lugo, Luis [1 ]
Vielzeuf, Valentin [1 ]
Affiliations
[1] Orange, 4 Rue du Clos Courtel, Cesson-Sévigné, Brittany, 35510, France
Keywords
Adversarial machine learning; Contrastive learning; Federated learning; Knowledge representation; Semi-supervised learning; Speech processing; Transfer learning
DOI: 10.1007/s10772-024-10121-9
Abstract
Self-supervised learning enables the training of large neural models without the need for large labeled datasets. It has generated breakthroughs in several fields, including computer vision, natural language processing, biology, and speech. In particular, the state of the art in several speech processing applications, such as automatic speech recognition and speaker identification, consists of models whose latent representations are learned with self-supervised approaches. Several configurations exist for self-supervised learning in speech, including contrastive, predictive, and multilingual approaches. Most existing approaches, however, share a crucial limitation: their high computational cost. This cost limits the deployment of models, the size of training datasets, and the number of research groups that can afford research with large self-supervised models. It also carries the environmental cost implied by high energy consumption. Efforts in this direction comprise optimization of existing models, neural architecture efficiency, improvements in fine-tuning for speech processing tasks, and data efficiency. Despite these efforts, more work is needed to address the high computational costs of self-supervised representation learning. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
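The abstract names contrastive approaches as one of the main self-supervised configurations for speech. The paper itself gives no code; as an illustration only, the core idea can be sketched as an InfoNCE-style objective over paired embeddings, where each anchor's paired row is its positive and the other rows in the batch serve as negatives. Function and parameter names below (`info_nce_loss`, `temperature`) are illustrative assumptions, not taken from the survey:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of paired embeddings.

    Row i of `positives` is the positive example for row i of `anchors`;
    all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the correct "class" for anchor i is column i (its own positive)
    return -np.mean(np.diag(log_probs))
```

When anchors and positives are well aligned, the diagonal dominates the similarity matrix and the loss approaches zero; with unrelated pairs it stays near log(batch size). Production systems (e.g. wav2vec 2.0-style models) build this objective over quantized or masked speech frames rather than generic vectors.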
Pages: 765–779 (14 pages)