Emotion classification during music listening from forehead biosignals

Cited by: 0
Authors
Mohsen Naji
Mohammad Firoozabadi
Parviz Azadfallah
Affiliations
[1] Islamic Azad University, Department of Biomedical Engineering, Science and Research Branch
[2] Tarbiat Modares University, Department of Medical Physics
[3] Tarbiat Modares University, Department of Psychology
Keywords
Forehead biosignals; Arousal; Valence; Emotion recognition
DOI: Not available
Abstract
Emotion recognition systems are helpful in human–machine interactions and clinical applications. This paper investigates the feasibility of using 3-channel forehead biosignals (left temporalis, frontalis, and right temporalis channels) as informative channels for emotion recognition during music listening. Classification of four emotional states (positive valence/low arousal, positive valence/high arousal, negative valence/high arousal, and negative valence/low arousal) in arousal–valence space was performed by employing two parallel cascade-forward neural networks as arousal and valence classifiers. The inputs of the classifiers were obtained by applying a fuzzy rough model feature evaluation criterion and the sequential forward floating selection algorithm. An average classification accuracy of 87.05 % was achieved, corresponding to an average valence classification accuracy of 93.66 % and an average arousal classification accuracy of 93.29 %.
Pages: 1365 - 1375
Number of pages: 10
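
The classification scheme summarized in the abstract (feature selection followed by two parallel binary classifiers whose outputs are fused into one of the four arousal-valence quadrants) can be illustrated with a minimal sketch. This is not the authors' implementation: scikit-learn's MLPClassifier stands in for the cascade-forward networks, plain sequential forward selection replaces the fuzzy-rough SFFS criterion, and the feature matrix X, the arousal/valence labels, and the make_branch helper are synthetic placeholders for illustration only.

import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 25))            # placeholder features extracted from the 3 forehead channels
arousal = rng.integers(0, 2, size=200)    # 0 = low arousal, 1 = high arousal
valence = rng.integers(0, 2, size=200)    # 0 = negative valence, 1 = positive valence

def make_branch():
    # One branch: feature selection, then a small feed-forward network
    # (stand-in for the paper's cascade-forward architecture).
    scoring_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    selector = SequentialFeatureSelector(scoring_net, n_features_to_select=5,
                                         direction="forward")  # plain SFS, not floating SFFS
    final_net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return make_pipeline(StandardScaler(), selector, final_net)

arousal_clf = make_branch().fit(X, arousal)   # parallel branch 1: arousal classifier
valence_clf = make_branch().fit(X, valence)   # parallel branch 2: valence classifier

# Fuse the two binary decisions into one of the four quadrants of arousal-valence space.
quadrants = {(1, 0): "positive valence / low arousal",
             (1, 1): "positive valence / high arousal",
             (0, 1): "negative valence / high arousal",
             (0, 0): "negative valence / low arousal"}

x_new = X[:1]   # a held-out feature vector would go here
key = (int(valence_clf.predict(x_new)[0]), int(arousal_clf.predict(x_new)[0]))
print(quadrants[key])

Treating arousal and valence as two independent binary problems mirrors the parallel-classifier structure described above; the four-class emotion label comes from combining the two binary outputs rather than from a single four-class classifier.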