User-Defined Gestures with Physical Props in Virtual Reality

Cited by: 4
Authors
Moran-Ledesma M. [1 ]
Schneider O. [2 ]
Hancock M. [2 ]
Affiliations
[1] Systems Design Engineering, University of Waterloo, Waterloo, ON
[2] Management Sciences, University of Waterloo, Waterloo, ON
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
3d physical props; agreement score; elicitation technique; gestural input; similarity measures; immersive interaction; virtual reality;
DOI
10.1145/3486954
Abstract
When interacting with virtual reality (VR) applications like CAD and open-world games, people may want to use gestures as a means of leveraging their knowledge from the physical world. However, people may prefer physical props over handheld controllers to input gestures in VR. We present an elicitation study where 21 participants chose from 95 props to perform manipulative gestures for 20 CAD-like and open-world game-like referents. When analyzing this data, we found existing methods for elicitation studies were insufficient to describe gestures with props, or to measure agreement with prop selection (i.e., agreement between sets of items). We proceeded by describing gestures as context-free grammars, capturing how different props were used in similar roles in a given gesture. We present gesture and prop agreement scores using a generalized agreement score that we developed to compare multiple selections rather than a single selection. We found that props were selected based on their resemblance to virtual objects and the actions they afforded; that gesture and prop agreement depended on the referent, with some referents leading to similar gesture choices, while others led to similar prop choices; and that a small set of carefully chosen props can support multiple gestures. © 2021 ACM.
Related Papers
50 records total
  • [21] A Framework for User-Defined Body Gestures to Control a Humanoid Robot
    Mohammad Obaid
    Felix Kistler
    Markus Häring
    René Bühling
    Elisabeth André
    International Journal of Social Robotics, 2014, 6 : 383 - 396
  • [22] Face Commands - User-Defined Facial Gestures for Smart Glasses
    Masai, Katsutoshi
    Kunze, Kai
    Sakamoto, Daisuke
    Sugiura, Yuta
    Sugimoto, Maki
    2020 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR 2020), 2020, : 374 - 386
  • [23] Gesture-Based Interaction for Virtual Reality Environments Through User-Defined Commands
    Cespedes-Hernandez, David
    Gonzalez-Calleros, Juan Manuel
    Guerrero-Garcia, Josefina
    Rodriguez-Vizzuett, Liliana
    HUMAN-COMPUTER INTERACTION, HCI-COLLAB 2018, 2019, 847 : 143 - 157
  • [24] Affordance-Based and User-Defined Gestures for Spatial Tangible Interaction
    Gong, Weilun
    Santosa, Stephanie
    Grossman, Tovi
    Glueck, Michael
    Clarke, Daniel
    Lai, Frances
    DESIGNING INTERACTIVE SYSTEMS CONFERENCE, DIS 2023, 2023, : 1500 - 1514
  • [25] Effects of holding postures on user-defined touch gestures for tablet interaction
    Tu, Huawei
    Huang, Qihan
    Zhao, Yanchao
    Gao, Boyu
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2020, 141
  • [26] Investigating user-defined flipping gestures for dual-display phones
    Yang, Zhican
    Yu, Chun
    Chen, Xin
    Luo, Jingjia
    Shi, Yuanchun
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER STUDIES, 2022, 163
  • [27] Exploring User-Defined Gestures as Input for Hearables and Recognizing Ear-Level Gestures with IMUs
    Sato, Yukina
    Amesaka, Takashi
    Yamamoto, Takumi
    Watanabe, Hiroki
    Sugiura, Yuta
    Proceedings of the ACM on Human-Computer Interaction, 2024, 8 (MHCI)
  • [28] User-defined mapping functions and collision detection to improve the user-friendliness of navigation in a virtual reality environment
    Opriessnig, G
    SEVENTH INTERNATIONAL CONFERENCE ON INFORMATION VISUALIZATION, PROCEEDINGS, 2003, : 446 - 451
  • [29] Composable user-defined operators that can express user-defined literals
    Ichikawa, Kazuhiro
    Chiba, Shigeru
    MODULARITY 2014 - Proceedings of the 13th International Conference on Modularity (Formerly AOSD), 2014, : 13 - 23
  • [30] User-defined mid-air gestures for multiscale GIS interface interaction
    Zhou, Xiaozhou
    Bai, Ruidong
    CARTOGRAPHY AND GEOGRAPHIC INFORMATION SCIENCE, 2023, 50 (05) : 481 - 494