Inferring Human Knowledgeability from Eye Gaze in Mobile Learning Environments

Cited by: 4
Authors
Celiktutan, Oya [1 ]
Demiris, Yiannis [1 ]
Affiliations
[1] Imperial Coll London, Dept Elect & Elect Engn, Personal Robot Lab, London, England
Source
COMPUTER VISION - ECCV 2018 WORKSHOPS, PT VI | 2019 / Vol. 11134
Funding
EU Horizon 2020;
Keywords
Assistive mobile applications; Noninvasive gaze tracking; Analysis of eye movements; Human knowledgeability prediction;
DOI
10.1007/978-3-030-11024-6_13
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
What people look at during a visual task reflects an interplay between oculomotor functions and cognitive processes. In this paper, we study the links between eye gaze and cognitive states to investigate whether eye gaze reveals information about an individual's knowledgeability. We focus on a mobile learning scenario where a user and a virtual agent play a quiz game on a hand-held mobile device. To the best of our knowledge, this is the first attempt to predict a user's knowledgeability from eye gaze using a noninvasive eye-tracking method on mobile devices: we perform gaze estimation with the front-facing camera of the device rather than with specialised eye-tracking hardware. First, we define a set of eye movement features that are discriminative for inferring a user's knowledgeability. Next, we train a model to predict users' knowledgeability in the course of responding to a question. Using eye movement features only, we obtain a classification accuracy of 59.1%, on par with human performance. This has implications for (1) adapting the virtual agent's behaviour to the user's needs (e.g., the agent can give hints) and (2) personalising quiz questions to the user's perceived knowledgeability.
Pages: 193 - 209
Number of pages: 17
Related Papers (50 in total)
  • [21] Estimating the eye gaze from one eye
    Wang, HG
    Sung, E
    Venkateswarlu, R
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2005, 98 (01) : 83 - 103
  • [22] Learning an eye or gaze target tracking task
    Salas, C
    Alvarez, E
    Rodriguez, F
    López, JC
    Vargas, JP
    Broglio, C
    Portavella, M
    EUROPEAN JOURNAL OF NEUROSCIENCE, 2000, 12 : 84 - 84
  • [23] Analyzing Eye Gaze of Users with Learning Disability
    Saluja, Kamalpreet Singh
    DV, JeevithaShree
    Arjun, Somnath
    Biswas, Pradipta
    Paul, Teena
    ICGSP '19 - PROCEEDINGS OF THE 2019 3RD INTERNATIONAL CONFERENCE ON GRAPHICS AND SIGNAL PROCESSING, 2019, : 95 - 99
  • [24] Towards Measuring and Inferring User Interest from Gaze
    Li, Yixuan
    Xu, Pingmei
    Lagun, Dmitry
    Navalpakkam, Vidhya
    WWW'17 COMPANION: PROCEEDINGS OF THE 26TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB, 2017, : 525 - 533
  • [25] Inferring Art Preferences from Gaze Exploration in a Museum
    Castagnos, Sylvain
    Marchal, Florian
    Bertrand, Alexandre
    Colle, Morgane
    Mahmoudi, Djalila
    ADJUNCT PUBLICATION OF THE 27TH CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION (ACM UMAP '19 ADJUNCT), 2019, : 425 - 430
  • [26] Inferring Social Gaze from Conversational Structure and Timing
    Murphy, Robin R.
    Gonzales, Jessica
    Srinivasan, Vasant
    PROCEEDINGS OF THE 6TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTIONS (HRI 2011), 2011, : 209 - 210
  • [27] Classifying Eye Gaze Patterns and Inferring Individual Preferences Using Hidden Markov Models
    Chan, Antoni B.
    Coutrot, Antoine
    I-PERCEPTION, 2017, 8 : 20 - 21
  • [28] Learning spaces in mobile learning environments
    Solvberg, Astrid M.
    Rismark, Marit
    ACTIVE LEARNING IN HIGHER EDUCATION, 2012, 13 (01) : 23 - 33
  • [29] A Spiral into the Mind: Gaze Spiral Visualization for Mobile Eye Tracking
    Koch, Maurice
    Weiskopf, Daniel
    Kurzhals, Kuno
    PROCEEDINGS OF THE ACM ON COMPUTER GRAPHICS AND INTERACTIVE TECHNIQUES, 2022, 5 (02)
  • [30] Teleoperation of a mobile robot based on eye-gaze tracking
    Gego, Daniel
    Carreto, Carlos
    Figueiredo, Luis
    2017 12TH IBERIAN CONFERENCE ON INFORMATION SYSTEMS AND TECHNOLOGIES (CISTI), 2017