Emotion classification during music listening from forehead biosignals

Cited by: 0
Authors
Mohsen Naji
Mohammad Firoozabadi
Parviz Azadfallah
Affiliations
[1] Islamic Azad University, Department of Biomedical Engineering, Science and Research Branch
[2] Tarbiat Modares University, Department of Medical Physics
[3] Tarbiat Modares University, Department of Psychology
Keywords
Forehead biosignals; Arousal; Valence; Emotion recognition
DOI
Not available
Abstract
Emotion recognition systems are helpful in human–machine interaction and in clinical applications. This paper investigates the feasibility of using three-channel forehead biosignals (left temporalis, frontalis, and right temporalis channels) as informative channels for emotion recognition during music listening. Classification of four emotional states in arousal–valence space (positive valence/low arousal, positive valence/high arousal, negative valence/high arousal, and negative valence/low arousal) was performed by two parallel cascade-forward neural networks acting as arousal and valence classifiers. The classifier inputs were obtained by applying a fuzzy rough model feature evaluation criterion together with the sequential forward floating selection (SFFS) algorithm. An average classification accuracy of 87.05% was achieved, corresponding to an average valence classification accuracy of 93.66% and an average arousal classification accuracy of 93.29%.
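As a rough illustration of the pipeline the abstract describes, the sketch below splits the four-quadrant problem into two parallel binary classifiers (one for arousal, one for valence) and selects each branch's features with sequential forward floating selection. It is a minimal sketch under stated assumptions, not the paper's implementation: scikit-learn's MLPClassifier stands in for the cascade-forward networks (scikit-learn has no cascade-forward architecture), mlxtend's plain SFFS stands in for the fuzzy-rough-criterion-guided selection, and the data are random placeholders.

```python
# Hypothetical sketch of the two-branch scheme described in the abstract.
# MLPClassifier approximates the cascade-forward networks, and plain SFFS
# (mlxtend) approximates the fuzzy-rough-guided feature selection; neither
# substitution is from the paper.
import numpy as np
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))          # placeholder forehead-biosignal features
y_arousal = rng.integers(0, 2, 200)     # 0 = low, 1 = high (placeholder labels)
y_valence = rng.integers(0, 2, 200)     # 0 = negative, 1 = positive

def train_branch(X, y, n_features=8):
    """Pick features with sequential forward floating selection, then fit."""
    net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    sffs = SFS(net, k_features=n_features, forward=True, floating=True, cv=5)
    sffs.fit(X, y)                      # floating=True gives the SFFS variant
    idx = list(sffs.k_feature_idx_)
    net.fit(X[:, idx], y)
    return net, idx

arousal_net, a_idx = train_branch(X, y_arousal)
valence_net, v_idx = train_branch(X, y_valence)

# The two binary outputs jointly index one of the four quadrants.
quadrants = {(1, 1): "positive valence / high arousal",
             (1, 0): "positive valence / low arousal",
             (0, 1): "negative valence / high arousal",
             (0, 0): "negative valence / low arousal"}
x_new = X[:1]
label = quadrants[(int(valence_net.predict(x_new[:, v_idx])[0]),
                   int(arousal_net.predict(x_new[:, a_idx])[0]))]
print(label)
```

Splitting the four-class problem into two parallel binary decisions mirrors the paper's parallel-classifier design and lets each branch select its own feature subset before the two outputs are combined into a quadrant.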
Pages: 1365-1375
Page count: 10
Related papers
50 items in total
  • [21] Exploiting Online Music Tags for Music Emotion Classification
    Lin, Yu-Ching
    Yang, Yi-Hsuan
    Chen, Homer H.
    ACM Transactions on Multimedia Computing, Communications, and Applications, 2011, 7(1)
  • [22] Emotion-based music classification
    Zhu, W., Asian Network for Scientific Information (12)
  • [23] Exploiting Genre for Music Emotion Classification
    Lin, Yu-Ching
    Yang, Yi-Hsuan
    Chen, Homer H.
    Liao, I-Bin
    Ho, Yeh-Chin
    ICME: 2009 IEEE International Conference on Multimedia and Expo, Vols 1-3, 2009: 618+
  • [24] Music emotion classification: A regression approach
    Yang, Yi-Hsuan
    Lin, Yu-Ching
    Su, Ya-Fan
    Chen, Homer H.
    2007 IEEE International Conference on Multimedia and Expo, Vols 1-5, 2007: 208-211
  • [25] Music from Plant Biosignals: A Conceptual and Analytical Orientation
    Miller, Paul V.
    Cox, Christopher
    Music Theory Online, 2024, 30(1)
  • [26] Emotion-Specific Dichotomous Classification and Feature-Level Fusion of Multichannel Biosignals for Automatic Emotion Recognition
    Kim, Jonghwa
    Andre, Elisabeth
    2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Vols 1 and 2, 2008: 673-678
  • [27] Automatic ECG-Based Emotion Recognition in Music Listening
    Hsu, Yu-Liang
    Wang, Jeen-Shing
    Chiang, Wei-Chun
    Hung, Chien-Han
    IEEE Transactions on Affective Computing, 2020, 11(1): 85-99
  • [28] Feature Selection and Comparison for the Emotion Recognition According to Music Listening
    Byun, Sung-Woo
    Lee, Seok-Pil
    Han, Hyuk Soo
    2017 International Conference on Robotics and Automation Sciences (ICRAS), 2017: 172-176
  • [29] The Role of Annotation Fusion Methods in the Study of Human-Reported Emotion Experience During Music Listening
    Greer, Timothy
    Mundnich, Karel
    Sachs, Matthew
    Narayanan, Shrikanth
    2020 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020: 776-780
  • [30] Experiences during listening to music in school
    Vidulin, Sabina
    Zauhar, Valnea
    Plavsic, Marlena
    Music Education Research, 2022, 24(4): 512-529