Items from Psychometric Tests as Training Data for Personality Profiling Models of Twitter Users

Cited by: 0
Authors
Kreuter, Anne [1 ]
Sassenberg, Kai [2 ,3 ]
Klinger, Roman [1 ]
Affiliations
[1] Univ Stuttgart, Inst Maschinelle Sprachverarbeitung, Stuttgart, Germany
[2] Leibniz Inst Wissensmedien, Tubingen, Germany
[3] Univ Tubingen, Tubingen, Germany
Keywords
DOI
Not available
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Machine-learned models for author profiling in social media often rely on data acquired via self-report psychometric tests (questionnaires) filled out by social media users. This is an expensive but accurate data collection strategy. Another, less costly alternative, which leads to potentially noisier and more biased data, is to rely on labels inferred from publicly available information in the users' profiles, for instance self-reported diagnoses or test results. In this paper, we explore a third strategy, namely to directly use a corpus of items from validated psychometric tests as training data. Items from psychometric tests often consist of sentences written from the I-perspective (e.g., "I make friends easily."). Such corpora of test items constitute 'small data', but their availability for many concepts makes them a rich resource. We investigate this approach for personality profiling: we evaluate BERT classifiers fine-tuned on such psychometric test items for the big five personality traits (openness, conscientiousness, extraversion, agreeableness, neuroticism) and analyze various augmentation strategies regarding their potential to address the challenges that come with such a small corpus. Our evaluation on a publicly available Twitter corpus shows performance comparable to in-domain training for 4/5 personality traits when using T5-based data augmentation.
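The core idea of the paper, using I-perspective test items as labeled training data for trait classifiers, can be sketched minimally as follows. The item texts below are illustrative stand-ins (not the actual validated test items or corpus used by the authors), and a simple bag-of-words centroid classifier replaces the fine-tuned BERT model so the sketch stays self-contained and dependency-free.

```python
from collections import Counter
import math

# Hypothetical psychometric-style items (I-perspective sentences), each
# labeled with a big-five trait. Stand-ins for validated test items.
TRAIN_ITEMS = [
    ("I make friends easily.", "extraversion"),
    ("I love large parties.", "extraversion"),
    ("I pay attention to details.", "conscientiousness"),
    ("I finish my tasks on time.", "conscientiousness"),
    ("I have a vivid imagination.", "openness"),
    ("I enjoy trying new ideas.", "openness"),
    ("I sympathize with others' feelings.", "agreeableness"),
    ("I trust the people I meet.", "agreeableness"),
    ("I worry about things.", "neuroticism"),
    ("I get stressed out easily.", "neuroticism"),
]

def tokenize(text):
    # Lowercase and strip punctuation from each whitespace-separated token.
    return [w.strip(".,!?'\"").lower() for w in text.split()]

def train_centroids(items):
    """Aggregate bag-of-words counts per trait label; a crude stand-in
    for fine-tuning a classifier on the item corpus."""
    centroids = {}
    for text, label in items:
        centroids.setdefault(label, Counter()).update(tokenize(text))
    return centroids

def cosine(bow, centroid):
    dot = sum(bow[w] * centroid[w] for w in bow)
    n1 = math.sqrt(sum(v * v for v in bow.values()))
    n2 = math.sqrt(sum(v * v for v in centroid.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def predict(text, centroids):
    # Assign the trait whose item centroid is closest to the input text.
    bow = Counter(tokenize(text))
    return max(centroids, key=lambda label: cosine(bow, centroids[label]))

centroids = train_centroids(TRAIN_ITEMS)
print(predict("I worry a lot about small things.", centroids))  # neuroticism
```

In the paper's actual setup, the `TRAIN_ITEMS` role is played by items from validated psychometric tests, the classifier is a fine-tuned BERT model (one per trait), and the small corpus size is compensated by data augmentation (e.g., T5-based paraphrasing) before fine-tuning.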
Pages: 315 - 323
Page count: 9
Related papers
50 items in total
  • [31] An EM algorithm for training wideband acoustic models from mixed-bandwidth training data
    Seltzer, ML
    Acero, A
    2005 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2005, : 197 - 202
  • [32] Protecting Machine Learning Models from Training Data Set Extraction
    Kalinin, M. O.
    Muryleva, A. A.
    Platonov, V. V.
    AUTOMATIC CONTROL AND COMPUTER SCIENCES, 2024, 58 (08) : 1234 - 1241
  • [33] Extracting Targeted Training Data from ASR Models, and How to Mitigate It
    Amid, Ehsan
    Thakkar, Om
    Narayanan, Arun
    Mathews, Rajiv
    Beaufays, Francoise
    INTERSPEECH 2022, 2022, : 2803 - 2807
  • [34] Reconstructing Training Data from Diverse ML Models by Ensemble Inversion
    Wang, Qian
    Kurz, Daniel
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 3870 - 3878
  • [35] Understanding and countering the spread of conspiracy theories in social networks: Evidence from epidemiological models of Twitter data
    Kauk, Julian
    Kreysa, Helene
    Schweinberger, Stefan R.
    PLOS ONE, 2021, 16 (08):
  • [36] HYPOTHESIS TESTS FOR MARKOV PROCESS MODELS ESTIMATED FROM AGGREGATE FREQUENCY DATA
    KELTON, WD
    KELTON, CML
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 1984, 79 (388) : 922 - 928
  • [37] Hidden Markov Models revealing the household thermal profiling from smart meter data
    Ulmeanu, Anatoli Paul
    Barbu, Vlad Stefan
    Tanasiev, Vladimir
    Badea, Adrian
    ENERGY AND BUILDINGS, 2017, 154 : 127 - 140
  • [38] Training Generative Models From Privatized Data via Entropic Optimal Transport
    Reshetova, Daria
    Chen, Wei-Ning
    Ozgur, Ayfer
    IEEE JOURNAL ON SELECTED AREAS IN INFORMATION THEORY, 2024, 5 : 221 - 235
  • [39] Building and training radiographic models for flexible object identification from incomplete data
    Girard, S
    Dinten, JM
    Chalmond, B
    IEE PROCEEDINGS-VISION IMAGE AND SIGNAL PROCESSING, 1996, 143 (04): : 257 - 264
  • [40] From Zero to Hero: Generating Training Data for Question-To-Cypher Models
    Opitz, Dominik
    Hochgeschwender, Nico
    2022 IEEE/ACM 1ST INTERNATIONAL WORKSHOP ON NATURAL LANGUAGE-BASED SOFTWARE ENGINEERING (NLBSE 2022), 2022, : 17 - 20