Estimating Self-Confidence in Video-Based Learning Using Eye-Tracking and Deep Neural Networks

Cited by: 0
Authors
Bhatt, Ankur [1 ,2 ]
Watanabe, Ko [1 ,2 ]
Santhosh, Jayasankar [1 ,2 ]
Dengel, Andreas [1 ,2 ]
Ishimaru, Shoya [3 ]
Affiliations
[1] RPTU Kaiserslautern Landau, D-67663 Kaiserslautern, Germany
[2] German Res Ctr Artificial Intelligence DFKI, D-67663 Kaiserslautern, Germany
[3] Osaka Metropolitan Univ, Naka Ku, Sakai, Osaka 5998531, Japan
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Gaze tracking; Support vector machines; Reviews; Long short term memory; Data collection; Feature extraction; Estimation; Electroencephalography; Random forests; Radio frequency; Eye-tracking; learning augmentation; self-confidence estimation; SKILLS; MOTIVATION; ATTENTION; EFFICACY
DOI
10.1109/ACCESS.2024.3515838
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Self-confidence is a crucial trait that significantly influences performance across many life domains, enabling quick decision-making and prompt action. Estimating self-confidence in video-based learning is valuable because it enables personalized feedback, thereby enhancing learners' experiences and confidence levels. This study addresses the challenge of self-confidence estimation by comparing traditional machine-learning techniques with deep-learning models. Thirteen participants (N=13) each viewed and responded to seven distinct videos while their eye movements were recorded, yielding eye-tracking data that was analyzed for insights into visual attention and behavior. Three algorithms were compared on the collected data: a Long Short-Term Memory (LSTM) network, a Support Vector Machine (SVM), and a Random Forest (RF). The LSTM model outperformed the conventional hand-crafted feature-based methods, achieving the highest accuracies of 76.9% with Leave-One-Category-Out Cross-Validation (LOCOCV) and 70.3% with Leave-One-Participant-Out Cross-Validation (LOPOCV). These results underscore the superior performance of deep learning for estimating self-confidence in video-based learning compared with hand-crafted feature-based methods. The outcomes of this research pave the way for more personalized and effective educational interventions, ultimately contributing to improved learning experiences and outcomes.
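The evaluation protocol described above — Leave-One-Participant-Out Cross-Validation over the hand-crafted feature baselines — can be sketched with scikit-learn. This is a minimal, hypothetical illustration only: the feature matrix and labels below are synthetic placeholders, not the authors' dataset or pipeline, and only the SVM and RF baselines are shown (the LSTM would require a sequence-modeling framework).

```python
# Hypothetical sketch of Leave-One-Participant-Out CV (LOPOCV): each fold
# holds out all samples from one participant. Data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_participants = 13          # N=13, as in the study
videos_per_participant = 7   # seven videos per participant
n_features = 8               # placeholder hand-crafted gaze features

# One feature vector per (participant, video) pair; binary confidence label.
X = rng.normal(size=(n_participants * videos_per_participant, n_features))
y = rng.integers(0, 2, size=X.shape[0])
groups = np.repeat(np.arange(n_participants), videos_per_participant)

logo = LeaveOneGroupOut()  # one fold per held-out participant
for name, clf in [("SVM", SVC()),
                  ("RF", RandomForestClassifier(random_state=0))]:
    scores = cross_val_score(clf, X, y, groups=groups, cv=logo)
    print(f"{name}: mean LOPOCV accuracy = {scores.mean():.3f}")
```

Grouping by participant (rather than shuffling samples) prevents data from the same person appearing in both training and test folds, which is what makes LOPOCV a fair test of generalization to unseen learners; LOCOCV would group by video category instead.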
Pages: 192219-192229
Page count: 11
Related papers
50 records total
  • [31] Video-based reflection on neonatal interventions during COVID-19 using eye-tracking glasses: an observational study
    Wagner, Michael
    den Boer, Maria C.
    Jansen, Sophie
    Groepel, Peter
    Visser, Remco
    Witlox, Ruben S. G. M.
    Bekker, Vincent
    Lopriore, Enrico
    Berger, Angelika
    Te Pas, Arjan B.
    ARCHIVES OF DISEASE IN CHILDHOOD-FETAL AND NEONATAL EDITION, 2022, 107 (02): : 156 - 160
  • [32] Reactivity effects in video-based classroom research: an investigation using teacher and student questionnaires as well as teacher eye-tracking
    Praetorius, Anna-Katharina
    McIntyre, Nora A.
    Klassen, Robert M.
    ZEITSCHRIFT FUR ERZIEHUNGSWISSENSCHAFT, 2017, 20 : 49 - 74
  • [33] An eye-tracking algorithm for nystagmus detection in videonystagmography based on convolutional neural networks
    Lee, Yerin
    Lee, Sena
    Han, Junghun
    Wang, Hyeong Jun
    Seo, Young Joon
    Yang, Sejung
    OPHTHALMIC TECHNOLOGIES XXXIII, 2023, 12360
  • [34] Deep Neural Network based Optical Monitor Providing Self-Confidence as Auxiliary Output
    Tanimura, Takahito
    Kato, Tomoyuki
    Watanabe, Shigeki
    Hoshida, Takeshi
    2018 EUROPEAN CONFERENCE ON OPTICAL COMMUNICATION (ECOC), 2018,
  • [35] Sentences Prediction Based on Automatic Lip-Reading Detection with Deep Learning Convolutional Neural Networks Using Video-Based Features
    Mahboob, Khalid
    Nizami, Hafsa
    Ali, Fayyaz
    Alvi, Farrukh
    SOFT COMPUTING IN DATA SCIENCE, SCDS 2021, 2021, 1489 : 42 - 53
  • [36] A Video-Based Fire Detection Using Deep Learning Models
    Kim, Byoungjun
    Lee, Joonwhoan
    APPLIED SCIENCES-BASEL, 2019, 9 (14):
  • [37] Classification and staging of Parkinson's disease using video-based eye tracking
    Brien, Donald C.
    Riek, Heidi C.
    Yep, Rachel
    Huang, Jeff
    Coe, Brian
    Areshenkoff, Corson
    Grimes, David
    Jog, Mandar
    Lang, Anthony
    Marras, Connie
    Masellis, Mario
    McLaughlin, Paula
    Peltsch, Alicia
    Roberts, Angela
    Tan, Brian
    Beaton, Derek
    Lou, Wendy
    Swartz, Richard
    Munoz, Douglas P.
    PARKINSONISM & RELATED DISORDERS, 2023, 110
  • [38] Emotion classification on eye-tracking and electroencephalograph fused signals employing deep gradient neural networks
    Wu, Qun
    Dey, Nilanjan
    Shi, Fuqian
    Gonzalez Crespo, Ruben
    Sherratt, R. Simon
    APPLIED SOFT COMPUTING, 2021, 110 (110)
  • [39] Unconstrained Still/Video-Based Face Verification with Deep Convolutional Neural Networks
    Chen, Jun-Cheng
    Ranjan, Rajeev
    Sankaranarayanan, Swami
    Kumar, Amit
    Chen, Ching-Hui
    Patel, Vishal M.
    Castillo, Carlos D.
    Chellappa, Rama
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2018, 126 (2-4) : 272 - 291