Multimodal Prediction of Alexithymia from Physiological and Audio Signals

Cited by: 0
Authors
Filippou, Valeria [1 ]
Nicolaou, Mihalis A. [1 ]
Theodosiou, Nikolas [1 ]
Panayiotou, Georgia [2 ]
Constantinou, Elena [2 ]
Theodorou, Marios [2 ]
Panteli, Maria [2 ]
Affiliations
[1] Cyprus Inst, CASTORC, Nicosia, Cyprus
[2] Univ Cyprus, Dept Psychol, Nicosia, Cyprus
Keywords
Affective Computing; Multimodal Machine Learning; Alexithymia; TIME;
DOI
10.1109/ACIIW59127.2023.10388211
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Alexithymia is a trait reflecting a person's difficulty in recognising and expressing their emotions, and it has been associated with various forms of mental illness. Identifying alexithymia can have therapeutic, preventive, and diagnostic benefits. However, research on predictive models for alexithymia is limited, and literature on multimodal approaches is almost non-existent. In this light, we present a novel predictive framework that utilises multimodal physiological and audio signals, such as heart rate, skin conductance level, facial electromyograms, and speech recordings, to detect and classify alexithymia. To this end, two novel datasets were collected through an emotion-processing imagery experiment and subsequently utilised for alexithymia classification by adopting the TAS-20 (Toronto Alexithymia Scale). Furthermore, we developed a set of temporal features that capture spectral information while remaining localised in the time domain (e.g., via wavelets). Using the extracted features, simple machine learning classifiers achieve up to a 96% F1-score in the proposed framework, even when using data from only one of the 12 stages of the experiment. Interestingly, we also find that combining auditory and physiological features in a multimodal manner further improves classification outcomes. The datasets are made available on request via the provided GitHub link.
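The abstract's feature idea (spectral information that stays localised in the time domain, e.g. via wavelets) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a plain Haar decomposition, and the function names and number of levels are illustrative only.

```python
# Hedged sketch: multi-level Haar wavelet energy features, one plausible way
# to build time-localised spectral descriptors of a physiological signal
# (e.g. a heart-rate or skin-conductance trace). Stdlib-only for clarity.

def haar_step(signal):
    """One Haar DWT level: return (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Energy of the detail coefficients at each decomposition level.

    Each level halves the time resolution, so the resulting vector
    summarises how much signal energy sits in each frequency band
    while the coefficients themselves remain time-localised.
    """
    feats = []
    current = list(signal)
    for _ in range(levels):
        current, detail = haar_step(current)
        feats.append(sum(d * d for d in detail))
    return feats

# A constant signal has no detail energy at any level; a rapidly
# alternating signal concentrates its energy in the finest level.
print(wavelet_energy_features([1.0] * 16))        # all-zero features
print(wavelet_energy_features([1.0, -1.0] * 8))   # energy only at level 1
```

Per-level energies like these, computed per experiment stage and concatenated across modalities, would then feed a simple classifier (e.g. an SVM or logistic regression), mirroring the multimodal fusion the abstract reports.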
Pages: 8
Related Papers
50 records in total
  • [41] TAGformer: A Multimodal Physiological Signals Fusion Network for Pilot Stress Recognition
    Wang, Shaofan
    Li, Yuangan
    Zhang, Tao
    Li, Ke
    IEEE SENSORS JOURNAL, 2024, 24 (13) : 20842 - 20854
  • [42] Emotion Arousal Assessment Based on Multimodal Physiological Signals for Game Users
    Li, Rongyang
    Ding, Jianguo
    Ning, Huansheng
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2023, 14 (04) : 2582 - 2594
  • [43] Deep Representation Learning for Multimodal Emotion Recognition Using Physiological Signals
    Zubair, Muhammad
    Woo, Sungpil
    Lim, Sunhwan
    Yoon, Changwoo
    IEEE ACCESS, 2024, 12 : 106605 - 106617
  • [44] An Ensemble Learning Method for Emotion Charting Using Multimodal Physiological Signals
    Awan, Amna Waheed
    Usman, Syed Muhammad
    Khalid, Shehzad
    Anwar, Aamir
    Alroobaea, Roobaea
    Hussain, Saddam
    Almotiri, Jasem
    Ullah, Syed Sajid
    Akram, Muhammad Usman
    SENSORS, 2022, 22 (23)
  • [45] Multimodal Physiological Signals and Machine Learning for Stress Detection by Wearable Devices
    Zhu, Lili
    Spachos, Petros
    Gregori, Stefano
    2022 IEEE INTERNATIONAL SYMPOSIUM ON MEDICAL MEASUREMENTS AND APPLICATIONS (MEMEA 2022), 2022,
  • [46] Multimodal machine learning approach for emotion recognition using physiological signals
    Ramadan, Mohamad A.
    Salem, Nancy M.
    Mahmoud, Lamees N.
    Sadek, Ibrahim
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 96
  • [47] Feature-Level Fusion of Multimodal Physiological Signals for Emotion Recognition
    Chen, Jing
    Ru, Bin
    Xu, Lixin
    Moore, Philip
    Su, Yun
    PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE, 2015, : 395 - 399
  • [48] CHAMPS: Cardiac health Hypergraph Analysis using Multimodal Physiological Signals
    Choudhury, Anirban Dutta
    Chowdhury, Ananda S.
    2019 41ST ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (EMBC), 2019, : 4640 - 4645
  • [49] Multimodal Fusion of Physiological Signals and Facial Action Units for Pain Recognition
    Hinduja, Saurabh
    Canavan, Shaun
    Kaur, Gurmeet
    2020 15TH IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC FACE AND GESTURE RECOGNITION (FG 2020), 2020, : 577 - 581
  • [50] A multimodal investigation of emotional responding in alexithymia
    Luminet, O
    Rimé, B
    Bagby, RM
    Taylor, GJ
    COGNITION & EMOTION, 2004, 18 (06) : 741 - 766