A Gaze-Contingent Intention Decoding Engine for human augmentation

Cited by: 3
Authors
Orlov, Pavel [1 ]
Shafti, Ali [1 ]
Auepanwiriyakul, Chaiyawan [1 ]
Songur, Noyan [1 ]
Faisal, A. Aldo [1 ]
Affiliations
[1] Imperial Coll London, Brain & Behav Lab, London, England
Funding
European Union Horizon 2020;
Keywords
Assistive robotics; Eye movements; Eye-hand interaction; Gaze-contingent systems
DOI
10.1145/3204493.3208350
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology];
Discipline classification code
0812
Abstract
Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest in order to reach for it. When a grasp intention is held in mind, human eye movements produce specific, task-relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and gaze-point positions to decode this hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in physical space using 3D gaze vectors. We then trigger candidate actions from an action-grammar database to perform an assistive movement of a robotic arm, improving task performance for physically disabled users. This document is a short report to accompany the Gaze-Contingent Intention Decoding Engine demonstrator, providing details of the setup used and the results obtained.
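To make the pipeline described in the abstract concrete, the following is a minimal Python sketch of its four stages: gaze acquisition, object detection at the gaze point, 3D localization from the gaze vector, and action selection from an action grammar. Every name, the fixed detector output, the 0.8 m working distance, and the action-grammar contents are illustrative assumptions, not the authors' implementation or data.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GazeSample:
    image_xy: Tuple[float, float]              # 2D gaze point on the scene-camera image
    gaze_vector: Tuple[float, float, float]    # 3D gaze direction from the eye tracker

@dataclass
class DetectedObject:
    label: str
    bbox: Tuple[float, float, float, float]    # x, y, width, height in image coordinates

def detect_objects(frame) -> List[DetectedObject]:
    # Stand-in for the deep convolutional object detector; returns a fixed example here.
    return [DetectedObject("cup", (300.0, 220.0, 60.0, 80.0))]

def object_under_gaze(gaze: GazeSample, objects: List[DetectedObject]) -> Optional[DetectedObject]:
    # Pick the detected object whose bounding box contains the current gaze point.
    gx, gy = gaze.image_xy
    for obj in objects:
        x, y, w, h = obj.bbox
        if x <= gx <= x + w and y <= gy <= y + h:
            return obj
    return None

def estimate_3d_position(gaze: GazeSample, depth_m: float) -> Tuple[float, float, float]:
    # Project the 3D gaze vector out to an assumed working distance to place the target in space.
    vx, vy, vz = gaze.gaze_vector
    return (vx * depth_m, vy * depth_m, vz * depth_m)

# Toy "action grammar": which assistive actions are legal for which object class (illustrative only).
ACTION_GRAMMAR = {
    "cup": ["reach", "grasp", "lift"],
    "door_handle": ["reach", "grasp", "pull"],
}

def decode_intention(gaze: GazeSample, frame=None) -> Optional[Tuple[str, Tuple[float, float, float]]]:
    # Full pipeline: gaze sample + scene frame -> (next legal action, 3D target) for the robot arm.
    target = object_under_gaze(gaze, detect_objects(frame))
    if target is None:
        return None
    position = estimate_3d_position(gaze, depth_m=0.8)   # assumed fixed reach distance in metres
    actions = ACTION_GRAMMAR.get(target.label, [])
    return (actions[0], position) if actions else None

if __name__ == "__main__":
    sample = GazeSample(image_xy=(320.0, 250.0), gaze_vector=(0.1, -0.05, 1.0))
    print(decode_intention(sample))   # e.g. ('reach', (0.08, -0.04, 0.8))

In the actual system the detector, depth estimate, and grammar would come from the trained network, the 3D gaze vectors, and the action-grammar database, respectively; the sketch only shows how the stages compose.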
Pages: 3