LEARNING CONTEXTUAL TAG EMBEDDINGS FOR CROSS-MODAL ALIGNMENT OF AUDIO AND TAGS

Cited by: 3
Authors
Favory, Xavier [1 ]
Drossos, Konstantinos [2 ]
Virtanen, Tuomas [2 ]
Serra, Xavier [1 ]
Affiliations
[1] Univ Pompeu Fabra, Mus Technol Grp, Barcelona, Spain
[2] Tampere Univ, Audio Res Grp, Tampere, Finland
Keywords
representation learning; multimodal contrastive learning; audio classification
DOI
10.1109/ICASSP39728.2021.9414638
Chinese Library Classification (CLC): O42 [Acoustics]
Subject classification codes: 070206; 082403
Abstract
Self-supervised audio representation learning offers an attractive alternative for obtaining generic audio embeddings that can be employed in various downstream tasks. Published approaches that consider both audio and the words/tags associated with it do not employ text-processing models capable of generalizing to tags unseen during training. In this work we propose a method for learning audio representations using an audio autoencoder (AAE), a general word embeddings model (WEM), and a multi-head self-attention (MHA) mechanism. The MHA attends over the output of the WEM, providing a contextualized representation of the tags associated with the audio, and we align the output of the MHA with the output of the encoder of the AAE using a contrastive loss. We jointly optimize the AAE and the MHA, and we evaluate the resulting audio representations (i.e. the output of the encoder of the AAE) by utilizing them in three downstream tasks, namely sound, music genre, and music instrument classification. Our results show that employing multi-head self-attention with multiple heads in the tag-based network can yield better learned audio representations.
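
As a quick illustration of the pipeline described in the abstract, the following minimal PyTorch sketch pairs an audio encoder with MHA-contextualized tag embeddings and aligns the two with a batch-wise contrastive loss. The layer sizes, the mean-pooling of tag embeddings, the NT-Xent-style loss, and all names here are illustrative assumptions rather than the authors' implementation; the AAE decoder and its reconstruction objective are omitted.

# Minimal, illustrative sketch of the audio-tag alignment described above.
# Architecture details and the loss formulation are assumptions, not the
# authors' exact implementation; the AAE decoder is omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioTagAlignment(nn.Module):
    def __init__(self, feat_dim=96, emb_dim=128, word_dim=300, n_heads=4):
        super().__init__()
        # Encoder part of the audio autoencoder (AAE).
        self.audio_encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )
        # Multi-head self-attention (MHA) over the pre-trained word
        # embeddings of the tags, yielding contextualized tag embeddings.
        self.mha = nn.MultiheadAttention(word_dim, n_heads, batch_first=True)
        self.tag_proj = nn.Linear(word_dim, emb_dim)

    def forward(self, audio_feats, tag_word_embs):
        # audio_feats: (batch, feat_dim) pooled time-frequency features
        # tag_word_embs: (batch, n_tags, word_dim) from a fixed WEM
        z_audio = self.audio_encoder(audio_feats)                  # (B, emb_dim)
        ctx, _ = self.mha(tag_word_embs, tag_word_embs, tag_word_embs)
        z_tags = self.tag_proj(ctx.mean(dim=1))                    # (B, emb_dim)
        return z_audio, z_tags


def contrastive_loss(z_audio, z_tags, temperature=0.1):
    # Matching audio/tag pairs (the diagonal of the similarity matrix) are
    # pulled together; all other pairs in the batch are pushed apart.
    z_a = F.normalize(z_audio, dim=-1)
    z_t = F.normalize(z_tags, dim=-1)
    logits = z_a @ z_t.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

A full training loop would, as the abstract states, jointly minimize this contrastive term and the reconstruction loss of the AAE, then reuse only the trained audio encoder for the downstream classification tasks.
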
Pages: 596-600
Number of pages: 5
Related papers (50 in total)
  • [31] Discriminative Dictionary Learning With Common Label Alignment for Cross-Modal Retrieval
    Deng, Cheng
    Tang, Xu
    Yan, Junchi
    Liu, Wei
    Gao, Xinbo
    IEEE TRANSACTIONS ON MULTIMEDIA, 2016, 18 (02) : 208 - 218
  • [32] Learning Aligned Cross-Modal and Cross-Product Embeddings for Generating the Topics of Shopping Needs
    Tsai, Yi-Ru
    Cheng, Pu-Jen
    PROCEEDINGS OF THE 2023 ACM SIGIR INTERNATIONAL CONFERENCE ON THE THEORY OF INFORMATION RETRIEVAL, ICTIR 2023, 2023, : 189 - 198
  • [33] Audio-to-Image Cross-Modal Generation
    Zelaszczyk, Maciej
    Mandziuk, Jacek
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [34] Cross-modal retrieval of scripted speech audio
    Owen, CB
    Makedon, F
    MULTIMEDIA COMPUTING AND NETWORKING 1998, 1997, 3310 : 226 - 235
  • [35] Learning Cross-Aligned Latent Embeddings for Zero-Shot Cross-Modal Retrieval
    Lin, Kaiyi
    Xu, Xing
    Gao, Lianli
    Wang, Zheng
    Shen, Heng Tao
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 11515 - 11522
  • [36] HCMSL: Hybrid Cross-modal Similarity Learning for Cross-modal Retrieval
    Zhang, Chengyuan
    Song, Jiayu
    Zhu, Xiaofeng
    Zhu, Lei
    Zhang, Shichao
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2021, 17 (01)
  • [37] Improving Cross-Modal Retrieval with Set of Diverse Embeddings
    Kim, Dongwon
    Kim, Namyup
    Kwak, Suha
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 23422 - 23431
  • [38] Learnable PINs: Cross-modal Embeddings for Person Identity
    Nagrani, Arsha
    Albanie, Samuel
    Zisserman, Andrew
    COMPUTER VISION - ECCV 2018, PT XIII, 2018, 11217 : 73 - 89
  • [39] AUDIO-TO-SYMBOLIC ARRANGEMENT VIA CROSS-MODAL MUSIC REPRESENTATION LEARNING
    Wang, Ziyu
    Xu, Dejing
    Xia, Gus
    Shan, Ying
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 181 - 185
  • [40] Self-Supervised Learning by Cross-Modal Audio-Video Clustering
    Alwassel, Humam
    Mahajan, Dhruv
    Korbar, Bruno
    Torresani, Lorenzo
    Ghanem, Bernard
    Tran, Du
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33