Multimodal music emotion recognition method based on multi-source data fusion

Cited by: 0
Authors
Liu B. [1 ]
Affiliations
[1] Library, Hunan College of Information, Changsha
Keywords
emotion recognition; inverse document frequency (IDF); Mel-frequency cepstral coefficient (MFCC); multi-source data fusion; multimodal music
DOI
10.1504/IJRIS.2024.139838
Abstract
To address the low recognition accuracy and long recognition time of traditional multimodal music emotion recognition methods, a multimodal music emotion recognition method based on multi-source data fusion is proposed. First, a multimodal music emotion model is built; TF-IDF is then used to extract lyric-modal emotion features, and Mel-frequency cepstral coefficients (MFCCs) are used to extract audio-modal emotion features. After preprocessing, the lyric-modal and audio-modal features from the two sources are fused, and the fusion result is used to compute the probability distribution of a song over the emotion space. The emotion category with the highest probability is taken as the emotion category of the music, thereby achieving emotion recognition for multimodal music. Simulation results show that the proposed method achieves higher accuracy and shorter recognition time for multimodal music emotion recognition. Copyright © 2024 Inderscience Enterprises Ltd.
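For illustration only, the sketch below outlines the pipeline the abstract describes, under assumptions the paper does not state: feature-level fusion is approximated by simple vector concatenation, a logistic-regression classifier stands in for the paper's unspecified emotion model, and the emotion space, helper names (extract_audio_features, fuse, train, recognise) and TF-IDF/MFCC settings are hypothetical. TF-IDF and MFCC extraction use scikit-learn and librosa.

# Minimal sketch of the pipeline in the abstract (assumptions noted above).
import numpy as np
import librosa
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

EMOTIONS = ["happy", "sad", "calm", "angry"]  # hypothetical emotion space

def extract_audio_features(wav_path, n_mfcc=20):
    """Audio-modal features: MFCCs averaged over all frames."""
    y, sr = librosa.load(wav_path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # shape (n_mfcc,)

def fuse(lyric_vec, audio_vec):
    """Multi-source fusion approximated by feature concatenation (assumption)."""
    return np.concatenate([lyric_vec, audio_vec])

def train(lyrics, wav_paths, labels):
    """Fit TF-IDF on the lyric corpus and a stand-in classifier on fused features."""
    vectorizer = TfidfVectorizer(max_features=500)
    lyric_feats = vectorizer.fit_transform(lyrics).toarray()  # lyric-modal features
    X = np.stack([fuse(lyric_feats[i], extract_audio_features(p))
                  for i, p in enumerate(wav_paths)])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return vectorizer, clf

def recognise(lyric_text, wav_path, vectorizer, clf):
    """Probability distribution over the emotion space; argmax gives the label."""
    lyric_vec = vectorizer.transform([lyric_text]).toarray()[0]
    x = fuse(lyric_vec, extract_audio_features(wav_path)).reshape(1, -1)
    probs = clf.predict_proba(x)[0]          # distribution over emotion categories
    return EMOTIONS[int(np.argmax(probs))], probs

In use, a caller would fit train() on a labelled song corpus, then pass each new song's lyric text and audio path to recognise() and keep the emotion with the highest probability, mirroring the argmax step the abstract describes.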
Pages: 187-194
Page count: 7
Related papers
50 records in total (entries [21]-[30] shown)
  • [21] Wang, Zhangfu. Evaluation method for the comprehensive quality of students based on multi-source data fusion. ASIA PACIFIC EDUCATION REVIEW, 2024.
  • [22] Xing, Chao; Xi, Xinze; He, Xin; Liu, Mingqun. Generator condition monitoring method based on SAE and multi-source data fusion. FRONTIERS IN ENERGY RESEARCH, 2023, 11.
  • [23] Ning, Xiaolei; Liang, Jiwen; Zhang, Hailin; Hao, Tiaofeng; Zhao, Xin. Multi-source Test Data Fusion and Evaluation Based on Improved ρ-Bayesian Method. 2019 CHINESE AUTOMATION CONGRESS (CAC2019), 2019: 169-173.
  • [24] Su, Y.-J.; Wen, H.-Y.; Wei, Q.-B.; Wu, D.-X. Resident Travel Characteristics Analysis Method Based on Multi-source Data Fusion. Jiaotong Yunshu Xitong Gongcheng Yu Xinxi/Journal of Transportation Systems Engineering and Information Technology, 2020, 20(05): 56-63.
  • [25] Chen, Yang; Zhang, Guangyuan; Wang, Rui; Rong, Hailong; Yang, Biao. Acoustic Vector Sensor Multi-Source Detection Based on Multimodal Fusion. SENSORS, 2023, 23(03).
  • [26] Strelet, Eugeniu; Peng, You; Castillo, Ivan; Rendall, Ricardo; Wang, Zhenyu; Joswiak, Mark; Braun, Birgit; Chiang, Leo; Reis, Marco S. Multi-source and multimodal data fusion for improved management of a wastewater treatment plant. JOURNAL OF ENVIRONMENTAL CHEMICAL ENGINEERING, 2023, 11(06).
  • [27] Yang, X.; Zhang, J.; Zhou, J.; Fang, W. Multi-source rainfall fusion method based on ConvLSTM. Huazhong Keji Daxue Xuebao (Ziran Kexue Ban)/Journal of Huazhong University of Science and Technology (Natural Science Edition), 2022, 50(08): 33-39.
  • [28] Yu, Zhao. Research on Multimodal Music Emotion Recognition Method Based on Image Sequence. SCIENTIFIC PROGRAMMING, 2021, 2021.
  • [29] Cui, Jishi; Li, Bin; Yang, Lyuxiao; Wu, Nan. Multi-Source Data Fusion Method for Indoor Localization System. 2020 IEEE/CIC INTERNATIONAL CONFERENCE ON COMMUNICATIONS IN CHINA (ICCC), 2020: 29-33.
  • [30] Cai, Yuxiang. Research on Data Fusion Method of Multi-source Complex System. JOURNAL OF WEB ENGINEERING, 2021, 20(05): 1553-1571.