EHTask: Recognizing User Tasks From Eye and Head Movements in Immersive Virtual Reality

Cited by: 16
Authors
Hu, Zhiming [1 ]
Bulling, Andreas [2 ]
Li, Sheng [1 ,3 ]
Wang, Guoping [1 ,3 ]
Affiliations
[1] Peking Univ, Sch Comp Sci, Beijing 100871, Peoples R China
[2] Univ Stuttgart, D-70174 Stuttgart, Germany
[3] Peking Univ, Natl Biomed Imaging Ctr, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China; European Research Council;
Keywords
Task analysis; Videos; Head; Visualization; Virtual reality; Magnetic heads; Solid modeling; Visual attention; task recognition; eye movements; head movements; deep learning; virtual reality; GAZE PREDICTION;
DOI
10.1109/TVCG.2021.3138902
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Understanding human visual attention in immersive virtual reality (VR) is crucial for many important applications, including gaze prediction, gaze guidance, and gaze-contingent rendering. However, previous works on visual attention analysis typically explored only one specific VR task and paid little attention to the differences between tasks. Moreover, existing task recognition methods typically focused on 2D viewing conditions and only explored the effectiveness of human eye movements. To address these limitations, we first collect eye and head movements of 30 participants performing four tasks, i.e., Free viewing, Visual search, Saliency, and Track, in 15 360-degree VR videos. Using this dataset, we analyze the patterns of human eye and head movements and reveal significant differences across tasks in terms of fixation duration, saccade amplitude, head rotation velocity, and eye-head coordination. We then propose EHTask, a novel learning-based method that employs eye and head movements to recognize user tasks in VR. We show that our method significantly outperforms state-of-the-art methods derived from 2D viewing conditions both on our dataset (accuracy of 84.4% versus 62.8%) and on a real-world dataset (61.9% versus 44.1%). As such, our work provides meaningful insights into human visual attention under different VR tasks and guides future work on recognizing user tasks in VR.
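To make the recognition setup concrete, the sketch below shows one plausible way to classify a window of synchronized eye and head angle signals into the four tasks. It is a minimal, hypothetical example rather than the authors' published EHTask architecture: the input layout (per-frame gaze and head yaw/pitch), window length, and network sizes are assumptions made purely for illustration.

# Hypothetical sketch (not the authors' exact EHTask model): a small
# 1D-CNN + GRU classifier over synchronized eye and head angle sequences.
import torch
import torch.nn as nn

NUM_TASKS = 4  # Free viewing, Visual search, Saliency, Track


class TaskClassifier(nn.Module):
    """Classifies a fixed-length window of eye/head signals into one of four tasks.

    Input: (batch, time, 4) with per-frame
        [gaze_yaw, gaze_pitch, head_yaw, head_pitch] in degrees (assumed layout).
    """

    def __init__(self, in_channels: int = 4, hidden: int = 64):
        super().__init__()
        # Temporal convolutions extract local motion patterns
        # (e.g. saccade-like bursts, smooth head rotations).
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # GRU aggregates the local patterns over the whole window.
        self.gru = nn.GRU(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_TASKS)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); Conv1d expects (batch, channels, time)
        feats = self.conv(x.transpose(1, 2))      # (batch, 64, time)
        _, h_n = self.gru(feats.transpose(1, 2))  # h_n: (1, batch, hidden)
        return self.head(h_n[-1])                 # task logits


# Usage: a dummy batch of eight 4-second windows sampled at 100 Hz.
window = torch.randn(8, 400, 4)
logits = TaskClassifier()(window)                 # shape: (8, 4)
predicted_task = logits.argmax(dim=1)

The design choice mirrors the abstract's observation: short-term convolution can pick up local cues such as saccade amplitude and head rotation velocity, while the recurrent layer summarizes how eye and head coordinate over the whole window before the final task prediction.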
Pages: 1992-2004
Page count: 13
Related Papers
50 records in total
  • [31] From Virtual Reality to Immersive Analytics in Bioinformatics
    Sommer, Bjoern
    Baaden, Marc
    Krone, Michael
    Woods, Andrew
    JOURNAL OF INTEGRATIVE BIOINFORMATICS, 2018, 15 (02)
  • [32] The Importance of Postural Cues for Determining Eye Height in Immersive Virtual Reality
    Leyrer, Markus
    Linkenauger, Sally A.
    Buelthoff, Heinrich H.
    Mohler, Betty J.
    PLOS ONE, 2015, 10 (05):
  • [33] Prospective on Eye-Tracking-based Studies in Immersive Virtual Reality
    Li, Fan
    Lee, Ching-Hung
    Feng, Shanshan
    Trappey, Amy
    Gilani, Fazal
    PROCEEDINGS OF THE 2021 IEEE 24TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN (CSCWD), 2021, : 861 - 866
  • [34] Immersive control of a quadruped robot with Virtual Reality Eye-wear
    Yousefi, Ali
    Betta, Zoe
    Mottola, Giovanni
    Recchiuto, Carmine Tommaso
    Sgorbissa, Antonio
    2024 33RD IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION, ROMAN 2024, 2024, : 49 - 55
  • [35] Acceptance of immersive head-mounted virtual reality in older adults
    Huygelier, Hanne
    Schraepen, Brenda
    van Ee, Raymond
    Vanden Abeele, Vero
    Gillebert, Celine R.
    SCIENTIFIC REPORTS, 2019, 9 (1)
  • [36] Acceptance of immersive head-mounted virtual reality in older adults
    Hanne Huygelier
    Brenda Schraepen
    Raymond van Ee
    Vero Vanden Abeele
    Céline R. Gillebert
    Scientific Reports, 9
  • [38] Language-driven anticipatory eye movements in virtual reality
    Eichert, Nicole
    Peeters, David
    Hagoort, Peter
    BEHAVIOR RESEARCH METHODS, 2018, 50 (03) : 1102 - 1115
  • [39] Language-driven anticipatory eye movements in virtual reality
    Nicole Eichert
    David Peeters
    Peter Hagoort
    Behavior Research Methods, 2018, 50 : 1102 - 1115
  • [40] Analyzing Eye Movements in Interview Communication with Virtual Reality Agents
    Tian, Fuhui
    Okada, Shogo
    Nitta, Katsumi
    PROCEEDINGS OF THE 7TH INTERNATIONAL CONFERENCE ON HUMAN-AGENT INTERACTION (HAI'19), 2019, : 3 - 10