A Gaze-Contingent Intention Decoding Engine for human augmentation

Cited by: 3
Authors
Orlov, Pavel [1 ]
Shafti, Ali [1 ]
Auepanwiriyakul, Chaiyawan [1 ]
Songur, Noyan [1 ]
Faisal, A. Aldo [1 ]
Affiliations
[1] Imperial College London, Brain & Behaviour Lab, London, England
Funding
European Union Horizon 2020
Keywords
Assistive robotics; Eye-movements; Eye-hand interaction; Gaze-contingent systems
DOI
10.1145/3204493.3208350
CLC number
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest in order to reach for it. When a grasp intention is held in mind, human eye movements produce specific, relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and gaze-point position to reveal this hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in physical space using 3D gaze vectors. We then trigger candidate actions from an action-grammar database to perform an assistive movement of the robotic arm, improving action performance for physically disabled people. This document is a short report to accompany the Gaze-Contingent Intention Decoding Engine demonstrator, providing details of the setup used and the results obtained.
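The abstract outlines a pipeline: gaze locates the object of interest (detected by a CNN), a 3D gaze vector gives its position in physical space, and an action-grammar database supplies the assistive robot actions to trigger. The following is a minimal illustrative sketch of that decision loop under stated assumptions, not the authors' implementation; the detection input, the action-grammar contents, and all names and values are hypothetical.

```python
# Illustrative sketch of a gaze-contingent intention decoding loop.
# All names, example objects, and values below are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    label: str                      # object class reported by the CNN detector
    box: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in image pixels

# Hypothetical action-grammar database: object class -> ordered assistive actions.
ACTION_GRAMMAR = {
    "cup":    ["reach", "grasp", "lift"],
    "bottle": ["reach", "grasp", "pour"],
}

def object_at_gaze(gaze_px: Tuple[int, int],
                   detections: List[Detection]) -> Optional[Detection]:
    """Return the detected object whose bounding box contains the 2D gaze point."""
    gx, gy = gaze_px
    for det in detections:
        x0, y0, x1, y1 = det.box
        if x0 <= gx <= x1 and y0 <= gy <= y1:
            return det
    return None

def gaze_to_3d(eye_origin: Tuple[float, float, float],
               gaze_dir: Tuple[float, float, float],
               depth_m: float) -> Tuple[float, float, float]:
    """Extend the 3D gaze vector from the eye to a point at the measured depth."""
    return tuple(o + depth_m * d for o, d in zip(eye_origin, gaze_dir))

def decode_intention(gaze_px, detections, eye_origin, gaze_dir, depth_m):
    """Map gaze to the attended object, its 3D position, and candidate actions."""
    target = object_at_gaze(gaze_px, detections)
    if target is None:
        return None
    position = gaze_to_3d(eye_origin, gaze_dir, depth_m)
    actions = ACTION_GRAMMAR.get(target.label, [])
    return target.label, position, actions

# Example: gaze lands on a detected cup roughly 0.6 m in front of the user.
print(decode_intention(
    gaze_px=(320, 240),
    detections=[Detection("cup", (300, 200, 360, 280))],
    eye_origin=(0.0, 0.0, 0.0),
    gaze_dir=(0.0, 0.0, 1.0),
    depth_m=0.6,
))
```

In the demonstrator described by the abstract, the detections would come from a deep CNN object detector and the depth from the 3D gaze estimate; here both are passed in directly to keep the sketch self-contained.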
Pages: 3