Decoding EEG Brain Activity for Multi-Modal Natural Language Processing

Cited by: 15
Authors
Hollenstein, Nora [1 ]
Renggli, Cedric [2 ]
Glaus, Benjamin [2 ]
Barrett, Maria [3 ]
Troendle, Marius [4 ]
Langer, Nicolas [4 ]
Zhang, Ce [2 ]
Affiliations
[1] Univ Copenhagen, Dept Nord Studies & Linguist, Copenhagen, Denmark
[2] Swiss Fed Inst Technol, Dept Comp Sci, Zurich, Switzerland
[3] IT Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark
[4] Univ Zurich, Dept Psychol, Zurich, Switzerland
Source: Frontiers in Human Neuroscience
Keywords
EEG; natural language processing; frequency bands; brain activity; machine learning; multi-modal learning; physiological data; neural network; regression-based estimation; cognitive neuroscience; eye movements; theta; speech; neurobiology; oscillations; responses; models
DOI: 10.3389/fnhum.2021.659410
CLC number: Q189 [Neuroscience]
Discipline code: 071006
Abstract
Until recently, human behavioral data from reading were mainly of interest to researchers seeking to understand human cognition. However, these human language processing signals can also benefit machine learning-based natural language processing tasks. Using EEG brain activity for this purpose remains largely unexplored. In this paper, we present the first large-scale study to systematically analyze the potential of EEG brain activity data for improving natural language processing tasks, with a special focus on which features of the signal are most beneficial. We present a multi-modal machine learning architecture that learns jointly from textual input and from EEG features. We find that filtering the EEG signals into frequency bands is more beneficial than using the broadband signal. Moreover, for a range of word embedding types, EEG data improve binary and ternary sentiment classification and outperform multiple baselines. For more complex tasks such as relation detection, only the contextualized BERT embeddings outperform the baselines in our experiments, which highlights the need for further research. Finally, EEG data prove particularly promising when limited training data are available.
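The abstract's central finding — that splitting the broadband EEG signal into frequency bands is more useful than the raw signal — can be illustrated with a minimal sketch. The band names and edges below follow standard EEG conventions, and the function name is hypothetical; nothing here is taken from the paper's actual code.

```python
import numpy as np

# Canonical EEG frequency bands in Hz (conventional edges, not the
# paper's exact choices).
BANDS = {
    "theta": (4.0, 8.0),
    "alpha": (8.5, 13.0),
    "beta": (13.5, 30.0),
    "gamma": (30.5, 49.5),
}

def band_powers(signal, fs):
    """Mean spectral power per frequency band for one EEG channel."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {
        name: float(power[(freqs >= lo) & (freqs <= hi)].mean())
        for name, (lo, hi) in BANDS.items()
    }

# A pure 10 Hz oscillation should concentrate its power in the
# alpha band, leaving the other bands near zero.
fs = 250  # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)
powers = band_powers(np.sin(2 * np.pi * 10.0 * t), fs)
dominant = max(powers, key=powers.get)
```

Per-band features like these, computed per word-level EEG segment, could then be concatenated with word embeddings as the multi-modal input the abstract describes.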
Pages: 19