Fast Gaze-Contingent Optimal Decompositions for Multifocal Displays

Cited by: 46
Authors
Mercier, Olivier [1 ,2 ]
Sulai, Yusufu [2 ]
Mackenzie, Kevin [2 ]
Zannoli, Marina [2 ]
Hillis, James [2 ]
Nowrouzezahrai, Derek [3 ]
Lanman, Douglas [2 ]
Affiliations
[1] Univ Montreal, Montreal, PQ, Canada
[2] Oculus Res, Pittsburgh, PA 15213 USA
[3] McGill Univ, Montreal, PQ, Canada
Source
ACM TRANSACTIONS ON GRAPHICS | 2017, Vol. 36, No. 6
Keywords
computational displays; multifocal displays; multiview rendering; vergence-accommodation conflict; ACCOMMODATION; CALIBRATION;
DOI
10.1145/3130800.3130846
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Classification Code
081202; 0835
Abstract
As head-mounted displays (HMDs) commonly present a single, fixed-focus display plane, a conflict can be created between the vergence and accommodation responses of the viewer. Multifocal HMDs have long been investigated as a potential solution in which multiple image planes span the viewer's accommodation range. Such displays require a scene decomposition algorithm to distribute the depiction of objects across image planes, and previous work has shown that simple decompositions can be achieved in real time. However, recent optimal decompositions further improve image quality, particularly with complex content. Such decompositions are more computationally involved and likely require better alignment of the image planes with the viewer's eyes, which are potential barriers to practical applications. Our goal is to enable interactive optimal decomposition algorithms capable of driving a vergence- and accommodation-tracked multifocal testbed. Ultimately, such a testbed is necessary to establish the requirements for the practical use of multifocal displays, in terms of computational demand and hardware accuracy. To this end, we present an efficient algorithm for optimal decompositions, incorporating insights from vision science. Our method is amenable to GPU implementations and achieves a three-orders-of-magnitude speedup over previous work. We further show that eye tracking can be used for adequate plane alignment with efficient image-based deformations, adjusting for both eye rotation and head movement relative to the display. We also build the first binocular multifocal testbed with integrated eye tracking and accommodation measurement, paving the way to establish practical eye tracking and rendering requirements for this promising class of display. Finally, we report preliminary results from a pilot user study utilizing our testbed, investigating the accommodation response of users to dynamic stimuli presented under optimal decomposition.
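For readers unfamiliar with this class of method, optimal decompositions of the kind described in the abstract are commonly posed as a non-negative least-squares problem over the per-plane images, matching the retinal image formed by the multifocal display to a target retinal image at each accommodation state. The schematic formulation below is an illustrative sketch of that general form only; the symbols x_k, P_{f,k}, and t_f are chosen for exposition and are not the paper's notation.

\min_{x_1, \dots, x_K \ge 0} \; \sum_{f \in F} \Big\| \sum_{k=1}^{K} P_{f,k}\, x_k \; - \; t_f \Big\|_2^2

Here x_k is the image assigned to display plane k, P_{f,k} is a linear operator modeling the defocus blur of plane k as seen when the eye accommodates to focus state f, and t_f is the target retinal image for that focus state. The paper's stated contribution includes making an optimization of this kind fast enough, via a GPU-amenable algorithm, to be driven gaze-contingently in real time.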
Pages: 15
Related Papers
(50 in total; entries [31] to [40] shown)
  • [31] GAZE-CONTINGENT ADAPTATION TO PRISMATIC SPECTACLES
    PICK, HL
    HAY, JC
    AMERICAN JOURNAL OF PSYCHOLOGY, 1966, 79 (03) : 443
  • [32] Gaze-Contingent Rendering in Virtual Reality
    Zhu, Fang
    Lu, Ping
    Li, Pin
    Sheng, Bin
    Mao, Lijuan
    ADVANCES IN COMPUTER GRAPHICS, CGI 2020, 2020, 12221 : 16 - 23
  • [33] Development of a gaze-contingent electroretinogram system
    Aghajari, Sara
    Bex, Peter
    Vera-Diaz, Fuensanta
    Panorgias, Thanasis
    INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE, 2021, 62 (08)
  • [34] Implicit learning of gaze-contingent events
    Beesley, Tom
    Pearson, Daniel
    Le Pelley, Mike
    PSYCHONOMIC BULLETIN & REVIEW, 2015, 22 (03) : 800 - 807
  • [35] Gaze-contingent efficient hologram compression for foveated near-eye holographic displays
    Dong, Zhenxing
    Ling, Yuye
    Xu, Chao
    Li, Yan
    Su, Yikai
    DISPLAYS, 2023, 79
  • [36] Stereoscopic fusion with gaze-contingent blur
    Maiello, G.
    Chessa, M.
    Solari, F.
    Bex, P.
    PERCEPTION, 2013, 42 : 117 - 118
  • [37] Gaze-contingent video compression with targeted gaze containment performance
    Komogortsev, Oleg V.
    JOURNAL OF ELECTRONIC IMAGING, 2009, 18 (03)
  • [38] GAZE-CONTINGENT PRISM ADAPTATION - OPTICAL AND MOTOR FACTORS
    HAY, JC
    PICK, HL
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY, 1966, 72 (05) : 640
  • [39] The Effectiveness of Gaze-Contingent Control in Computer Games
    Orlov, Paul A.
    Apraksin, Nikolay
    PERCEPTION, 2015, 44 (8-9) : 1136 - 1145
  • [40] Look and Learn: A Model of Gaze-Contingent Learning
    Murakami, Max
    Bolhuis, Jantina
    Kolling, Thorsten
    Knopf, Monika
    Triesch, Jochen
    2016 JOINT IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING AND EPIGENETIC ROBOTICS (ICDL-EPIROB), 2016 : 284 - 285