Calibration of Transformer-Based Models for Identifying Stress and Depression in Social Media

Cited by: 23
Authors
Ilias, Loukas [1]
Mouzakitis, Spiros [1]
Askounis, Dimitris [1]
Affiliations
[1] Natl Tech Univ Athens, Decis Support Syst Lab, School of Elect & Comp Engn, Athens 15780, Greece
Keywords
Calibration; depression; emotion; mental health; stress; transformers
DOI
10.1109/TCSS.2023.3283009
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
In today's fast-paced world, rates of stress and depression are surging. People use social media to express their thoughts and feelings through posts, so social media can assist in the early detection of mental health conditions. Existing methods mainly introduce feature-extraction approaches and train shallow machine learning (ML) classifiers. To avoid hand-crafting large feature sets and to obtain better performance, other studies use deep neural networks or transformer-based language models. Although transformer-based models achieve noticeable improvements, they often cannot capture rich factual knowledge. While a number of studies have proposed enhancing pretrained transformer-based models with extra information or additional modalities, no prior work has exploited these modifications for detecting stress and depression through social media. Moreover, although the reliability of an ML model's confidence in its predictions is critical for high-risk applications, no prior work has taken model calibration into consideration. To resolve these issues, we present the first study in the task of depression and stress detection in social media that injects extra-linguistic information into transformer-based models, namely, bidirectional encoder representations from transformers (BERT) and MentalBERT. Specifically, the proposed approach employs a multimodal adaptation gate to create combined embeddings, which are given as input to a BERT (or MentalBERT) model. To account for model calibration, we apply label smoothing. We test our proposed approaches on three publicly available datasets and demonstrate that integrating linguistic features into transformer-based models yields a marked improvement in performance.
Label smoothing likewise contributes both to the model's performance and to its calibration. Finally, we perform a linguistic analysis of the posts and show differences in language between stressful and nonstressful texts, as well as between depressive and nondepressive posts.
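The two techniques named in the abstract, a multimodal-adaptation-gate (MAG)-style fusion of text embeddings with extra-linguistic features, and label smoothing for calibration, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the gating form, the weight shapes `W_g` and `W_a`, and the hyperparameters `beta` and `eps` are assumptions.

```python
import numpy as np

def mag_fuse(z, a, W_g, W_a, beta=0.5, tiny=1e-6):
    """MAG-style fusion (illustrative sketch, not the paper's exact layer).

    z : text embedding from BERT/MentalBERT, shape (d,)
    a : extra-linguistic feature vector, shape (k,)
    W_g : gate weights, shape (d + k, d); W_a : shift weights, shape (k, d)
    """
    # ReLU gate computed from the concatenated modalities
    g = np.maximum(0.0, np.concatenate([z, a]) @ W_g)
    # Displacement of the text embedding induced by the extra features
    h = g * (a @ W_a)
    # Scale the shift so it cannot dominate the text embedding
    alpha = min(beta * np.linalg.norm(z) / (np.linalg.norm(h) + tiny), 1.0)
    return z + alpha * h

def label_smoothing_loss(logits, labels, eps=0.1):
    """Cross-entropy against smoothed one-hot targets."""
    num_classes = logits.shape[-1]
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    one_hot = np.eye(num_classes)[labels]
    targets = one_hot * (1.0 - eps) + eps / num_classes    # smoothed targets
    return -(targets * log_probs).sum(axis=-1).mean()
```

With `eps = 0` this reduces to standard cross-entropy; a small positive `eps` penalizes over-confident logits, which is the mechanism behind the calibration benefit the abstract reports.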
Pages: 1979-1990
Number of pages: 12
Related papers (50 total)
  • [1] Detection of Depression Severity in Social Media Text Using Transformer-Based Models
    Qasim, Amna
    Mehak, Gull
    Hussain, Nisar
    Gelbukh, Alexander
    Sidorov, Grigori
    INFORMATION, 2025, 16 (02)
  • [2] Depression detection in social media posts using transformer-based models and auxiliary features
    Kerasiotis, Marios
    Ilias, Loukas
    Askounis, Dimitris
    SOCIAL NETWORK ANALYSIS AND MINING, 2024, 14 (01)
  • [4] Identifying suicidal emotions on social media through transformer-based deep learning
    Kodati, Dheeraj
    Tene, Ramakrishnudu
    APPLIED INTELLIGENCE, 2023, 53 (10) : 11885 - 11917
  • [5] Adaptation of Transformer-Based Models for Depression Detection
    Adebanji, Olaronke O.
    Ojo, Olumide E.
    Calvo, Hiram
    Gelbukh, Irina
    Sidorov, Grigori
    COMPUTACION Y SISTEMAS, 2024, 28 (01): : 151 - 165
  • [6] Using transformer-based models and social media posts for heat stroke detection
    Anno, Sumiko
    Kimura, Yoshitsugu
    Sugita, Satoru
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [7] Transformer-based deep learning models for the sentiment analysis of social media data
    Kokab, Sayyida Tabinda
    Asghar, Sohail
    Naz, Shehneela
    ARRAY, 2022, 14
  • [8] Transformer-Based Extractive Social Media Question Answering on TweetQA
    Butt, Sabur
    Ashraf, Noman
    Fahim, Hammad
    Sidorov, Grigori
    Gelbukh, Alexander
    COMPUTACION Y SISTEMAS, 2021, 25 (01): : 23 - 32
  • [9] Transformer-based deep learning models for predicting permeability of porous media
    Meng, Yinquan
    Jiang, Jianguo
    Wu, Jichun
    Wang, Dong
    ADVANCES IN WATER RESOURCES, 2023, 179
  • [10] Identifying Critical Tokens for Accurate Predictions in Transformer-Based Medical Imaging Models
    Kang, Solha
    Vankerschaver, Joris
    Ozbulak, Utku
    MACHINE LEARNING IN MEDICAL IMAGING, PT II, MLMI 2024, 2025, 15242 : 169 - 179