Estimating Self-Confidence in Video-Based Learning Using Eye-Tracking and Deep Neural Networks

Cited by: 0
Authors
Bhatt, Ankur [1 ,2 ]
Watanabe, Ko [1 ,2 ]
Santhosh, Jayasankar [1 ,2 ]
Dengel, Andreas [1 ,2 ]
Ishimaru, Shoya [3 ]
Affiliations
[1] RPTU Kaiserslautern Landau, D-67663 Kaiserslautern, Germany
[2] German Res Ctr Artificial Intelligence DFKI, D-67663 Kaiserslautern, Germany
[3] Osaka Metropolitan Univ, Naka Ku, Sakai, Osaka 5998531, Japan
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Gaze tracking; Support vector machines; Reviews; Long short term memory; Data collection; Feature extraction; Estimation; Electroencephalography; Random forests; Radio frequency; Eye-tracking; learning augmentation; self-confidence estimation; SKILLS; MOTIVATION; ATTENTION; EFFICACY;
DOI
10.1109/ACCESS.2024.3515838
CLC classification
TP [Automation & computer technology]
Discipline code
0812
Abstract
Self-confidence is a crucial trait that significantly influences performance across various life domains, leading to positive outcomes by enabling quick decision-making and prompt action. Estimating self-confidence in video-based learning is valuable because it enables personalized feedback, thereby enhancing learners' experiences and confidence levels. This study addresses the challenge of self-confidence estimation by comparing traditional machine-learning techniques with deep-learning models. Thirteen participants (N=13) each viewed and responded to seven distinct videos while their eye movements were recorded; the resulting eye-tracking data were analyzed to gain insights into their visual attention and behavior. To assess the collected data, we compare three algorithms: a Long Short-Term Memory (LSTM) network, a Support Vector Machine (SVM), and a Random Forest (RF). The LSTM model outperformed the conventional hand-crafted feature-based methods, achieving the highest accuracy of 76.9% under Leave-One-Category-Out Cross-Validation (LOCOCV) and 70.3% under Leave-One-Participant-Out Cross-Validation (LOPOCV). These results underscore the superior performance of the deep-learning model for estimating self-confidence in video-based learning contexts compared to hand-crafted feature-based methods, paving the way for more personalized and effective educational interventions and, ultimately, improved learning experiences and outcomes.
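The evaluation protocols named in the abstract can be illustrated with a short sketch. The code below is not the authors' implementation: the features, labels, and model hyperparameters are synthetic placeholders. It shows how Leave-One-Participant-Out Cross-Validation (LOPOCV) maps onto scikit-learn's `LeaveOneGroupOut`, with an SVM and a Random Forest standing in for the paper's hand-crafted-feature baselines.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_participants, videos_per_participant, n_features = 13, 7, 8

# Synthetic stand-in: one gaze-feature vector per (participant, video)
# pair, with a binary confident / not-confident label.
X = rng.normal(size=(n_participants * videos_per_participant, n_features))
y = rng.integers(0, 2, size=len(X))
groups = np.repeat(np.arange(n_participants), videos_per_participant)

# Each LOPOCV fold holds out every sample from one participant, so the
# model is always tested on a person it has never seen during training.
logo = LeaveOneGroupOut()
for name, model in [
    ("SVM", make_pipeline(StandardScaler(), SVC())),
    ("RF", RandomForestClassifier(random_state=0)),
]:
    accuracies = []
    for train_idx, test_idx in logo.split(X, y, groups):
        model.fit(X[train_idx], y[train_idx])
        accuracies.append(model.score(X[test_idx], y[test_idx]))
    print(f"{name}: mean LOPOCV accuracy over 13 folds = {np.mean(accuracies):.3f}")
```

Leave-One-Category-Out Cross-Validation (LOCOCV) follows the same pattern with `groups` set to the seven video categories instead of participant IDs.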
Pages: 192219-192229
Page count: 11
Related Papers
(50 in total)
  • [41] Video-Based Face Recognition Using Ensemble of Haar-Like Deep Convolutional Neural Networks
    Parchami, Mostafa
    Bashbaghi, Saman
    Granger, Eric
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 4625 - 4632
  • [42] Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality
    Barkevich, Kevin
    Bailey, Reynold
    Diaz, Gabriel J.
    PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, 2024, 7 (02)
  • [43] Author Correction: Magnetic resonance-based eye tracking using deep neural networks
    Markus Frey
    Matthias Nau
    Christian F. Doeller
    Nature Neuroscience, 2023, 26 : 1127 - 1127
  • [44] Video-Based Human Activity Recognition Using Deep Learning Approaches
    Surek, Guilherme Augusto Silva
    Seman, Laio Oriel
    Stefenon, Stefano Frizzo
    Mariani, Viviana Cocco
    Coelho, Leandro dos Santos
    SENSORS, 2023, 23 (14)
  • [45] Video-Based Facial Expression Recognition Using a Deep Learning Approach
    Jangid, Mahesh
    Paharia, Pranjul
    Srivastava, Sumit
    ADVANCES IN COMPUTER COMMUNICATION AND COMPUTATIONAL SCIENCES, IC4S 2018, 2019, 924 : 653 - 660
  • [46] Eye-Tracking Based Autism Spectrum Disorder Diagnosis Using Chaotic Butterfly Optimization with Deep Learning Model
    Thanarajan, Tamilvizhi
    Alotaibi, Youseef
    Rajendran, Surendran
    Nagappan, Krishnaraj
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (02): : 1995 - 2013
  • [48] A Video-Based, Eye-Tracking Study to Investigate the Effect of eHMI Modalities and Locations on Pedestrian-Automated Vehicle Interaction
    Guo, Fu
    Lyu, Wei
    Ren, Zenggen
    Li, Mingming
    Liu, Ziming
    SUSTAINABILITY, 2022, 14 (09)
  • [49] Video-based real-time assessment and diagnosis of autism spectrum disorder using deep neural networks
    Prakash, Varun Ganjigunte
    Kohli, Manu
    Prathosh, Aragulla Prasad
    Juneja, Monica
    Gupta, Manushree
    Sairam, Smitha
    Sitaraman, Sadasivan
    Bangalore, Anjali Sanjeev
    Kommu, John Vijay Sagar
    Saini, Lokesh
    Utage, Prashant Ramesh
    Goyal, Nishant
    EXPERT SYSTEMS, 2025, 42 (01)
  • [50] A Strategy for Enhancing English Learning Achievement, Based on the Eye-Tracking Technology with Self-Regulated Learning
    Kuo, Yu-Chen
    Yao, Ching-Bang
    Wu, Chen-Yu
    SUSTAINABILITY, 2022, 14 (23)