Pre-Training Audio Representations With Self-Supervision

Cited by: 31
Authors
Tagliasacchi, Marco [1 ]
Gfeller, Beat [1 ]
Quitry, Felix de Chaumont [1 ]
Roblek, Dominik [1 ]
Affiliations
[1] Google Res, CH-8002 Zurich, Switzerland
Keywords
Task analysis; Decoding; Training; Computer architecture; Spectrogram; Predictive models; Time-frequency analysis; Self-supervised learning; Audio processing; Embeddings
DOI
10.1109/LSP.2020.2985586
CLC Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
We explore self-supervision as a way to learn general-purpose audio representations. Specifically, we propose two self-supervised tasks: Audio2Vec, which aims at reconstructing a spectrogram slice from past and future slices, and TemporalGap, which estimates the distance in time between two short audio segments extracted at random from the same audio clip. We evaluate how the representations learned via self-supervision transfer to different downstream tasks, either by training a task-specific linear classifier on top of the pre-trained embeddings or by fine-tuning a model end-to-end for each downstream task. Our results show that the representations learned with Audio2Vec transfer better than those learned by fully supervised training on AudioSet. In addition, by fine-tuning the Audio2Vec representations it is possible to outperform fully supervised models trained from scratch on each task when limited labeled data is available, thus improving label efficiency.
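The abstract describes the two pretext tasks only at a high level. As a rough illustration, the following minimal Python sketch shows how training examples for tasks of this kind could be sampled from a log-mel spectrogram; the slice length, the gap normalization, and the function names are assumptions for illustration, not details taken from the paper.

# Illustrative sketch only (not the authors' implementation): sampling training
# examples for an Audio2Vec-style reconstruction task and a TemporalGap-style
# distance-regression task from a log-mel spectrogram of shape
# (time_frames, mel_bins). Slice length and normalization are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def audio2vec_example(spectrogram, slice_len=96):
    """Return (past, future) context slices and the middle slice to reconstruct."""
    t = int(rng.integers(slice_len, spectrogram.shape[0] - 2 * slice_len))
    past = spectrogram[t - slice_len:t]                    # slice before the target
    target = spectrogram[t:t + slice_len]                  # slice to be reconstructed
    future = spectrogram[t + slice_len:t + 2 * slice_len]  # slice after the target
    return (past, future), target

def temporal_gap_example(spectrogram, slice_len=96):
    """Return two randomly placed slices and their normalized temporal distance."""
    max_start = spectrogram.shape[0] - slice_len
    t1, t2 = rng.integers(0, max_start, size=2)
    gap = abs(int(t1) - int(t2)) / max_start               # regression target in [0, 1]
    return spectrogram[t1:t1 + slice_len], spectrogram[t2:t2 + slice_len], gap

# Toy usage on a random stand-in for a log-mel spectrogram.
spec = rng.standard_normal((1000, 64)).astype(np.float32)
(past, future), target = audio2vec_example(spec)
s1, s2, gap = temporal_gap_example(spec)
print(past.shape, target.shape, future.shape, s1.shape, s2.shape, round(gap, 3))

In this reading, the Audio2Vec encoder would be trained to predict the middle slice from its context, while the TemporalGap encoder would be trained to regress the gap value; the learned encoder is then reused for downstream tasks via a linear classifier or end-to-end fine-tuning, as described in the abstract.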
Pages: 600-604
Number of pages: 5
Related Papers
50 records in total
  • [1] UserBERT: Pre-training User Model with Contrastive Self-supervision
    Wu, Chuhan
    Wu, Fangzhao
    Qi, Tao
    Huang, Yongfeng
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022, : 2087 - 2092
  • [2] SLIP: Self-supervision Meets Language-Image Pre-training
    Mu, Norman
    Kirillov, Alexander
    Wagner, David
    Xie, Saining
    COMPUTER VISION, ECCV 2022, PT XXVI, 2022, 13686 : 529 - 544
  • [3] Multilingual Pre-training with Self-supervision from Global Co-occurrence
    Ai, Xi
    Fang, Bin
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2023), 2023, : 7526 - 7543
  • [4] PTUM: Pre-training User Model from Unlabeled User Behaviors via Self-supervision
    Wu, Chuhan
    Wu, Fangzhao
    Qi, Tao
    Lian, Jianxun
    Huang, Yongfeng
    Xie, Xing
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020, : 1939 - 1944
  • [5] Anomalies, representations, and self-supervision
    Dillon, Barry M.
    Favaro, Luigi
    Feiden, Friedrich
    Modak, Tanmoy
    Plehn, Tilman
    SCIPOST PHYSICS CORE, 2024, 7 (03):
  • [6] LiRA: Learning Visual Speech Representations from Audio through Self-supervision
    Ma, Pingchuan
    Mira, Rodrigo
    Petridis, Stavros
    Schuller, Bjorn W.
    Pantic, Maja
    INTERSPEECH 2021, 2021, : 3011 - 3015
  • [7] CTAL: Pre-training Cross-modal Transformer for Audio-and-Language Representations
    Li, Hang
    Ding, Wenbiao
    Kang, Yu
    Liu, Tianqiao
    Wu, Zhongqin
    Liu, Zitao
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 3966 - 3977
  • [8] Audio-Visual Contrastive Learning with Temporal Self-Supervision
    Jenni, Simon
    Black, Alexander
    Collomosse, John
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 7, 2023, : 7996 - 8004
  • [9] THE FEASIBILITY OF SELF-SUPERVISION
    Hudelson, Earl
    JOURNAL OF EDUCATIONAL RESEARCH, 1952, 45 (05): : 335 - 347
  • [10] Pre-training Mention Representations in Coreference Models
    Varkel, Yuval
    Globerson, Amir
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020, : 8534 - 8540