Eliciting Multimodal Gesture plus Speech Interactions in a Multi-Object Augmented Reality Environment

Cited by: 4
Authors
Zhou, Xiaoyan [1 ]
Williams, Adam S. [1 ]
Ortega, Francisco R. [1 ]
Affiliations
[1] Colorado State University, Fort Collins, CO, USA
Funding
U.S. National Science Foundation (NSF)
Keywords
elicitation; multimodal interaction; augmented reality; gesture and speech interaction; multi-object AR environment;
DOI
10.1145/3562939.3565637
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
As augmented reality (AR) technology and hardware become more mature and affordable, researchers have been exploring more intuitive and discoverable interaction techniques for immersive environments. This paper investigates multimodal interaction for 3D object manipulation in a multi-object AR environment. To identify user-defined gestures, we conducted an elicitation study with 24 participants and 22 referents using an augmented reality headset. The study yielded 528 proposals and, after binning and ranking all gesture proposals, produced a winning set of 25 gestures. We found that for the same task, the same gesture was preferred for both one- and two-object manipulation, although both hands were used in the two-object scenario. We present the gesture and speech results, along with the differences from similar studies conducted in single-object AR environments. The study also explored the association between speech expressions and gesture strokes during object manipulation, which could improve recognizer efficiency in augmented reality headsets.
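The binning-and-ranking step described in the abstract groups identical proposals per referent and promotes the most frequent one to the winning set; elicitation studies also commonly report the Vatavu-Wobbrock agreement rate per referent. The paper does not include code, so the following Python sketch is illustrative only: the proposal log, gesture names, and function names are assumptions, not the authors' data or implementation.

```python
from collections import Counter

# Hypothetical proposal log: (participant, referent, gesture) tuples.
# In the study there were 24 participants x 22 referents = 528 such rows.
proposals = [
    ("P1", "move", "pinch-drag"),
    ("P2", "move", "pinch-drag"),
    ("P3", "move", "push"),
    ("P1", "rotate", "two-hand-turn"),
    ("P2", "rotate", "two-hand-turn"),
    ("P3", "rotate", "wrist-twist"),
]

def winning_set(proposals):
    """Bin identical proposals per referent, then rank by frequency."""
    bins = {}
    for _participant, referent, gesture in proposals:
        bins.setdefault(referent, Counter())[gesture] += 1
    # The most frequent gesture per referent wins.
    return {ref: counts.most_common(1)[0][0] for ref, counts in bins.items()}

def agreement_rate(counts):
    """Vatavu & Wobbrock (2015) agreement rate for one referent's bins:
    AR = sum(|Pi| * (|Pi| - 1)) / (|P| * (|P| - 1))."""
    n = sum(counts.values())
    if n <= 1:
        return 1.0
    return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))
```

For example, three proposals split 2-to-1 for a referent give an agreement rate of 1/3. The actual study's conflict-resolution and tie-breaking criteria are not reproduced here.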
Pages: 10
Related Papers (50 total)
  • [21] Virtual object manipulation in collaborative augmented reality environment
    Nini, B
    Batouche, MC
    2004 IEEE International Conference on Industrial Technology (ICIT), Vols. 1-3, 2004: 1607-1611
  • [23] Audio Augmented Reality for Human-Object Interactions
    Yang, Jing
    Mattern, Friedemann
    UbiComp/ISWC '19 Adjunct: Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 2019: 408-412
  • [24] Multimodal human-computer interaction for immersive visualization: Integrating speech-gesture recognitions and augmented reality for indoor environments
    Malkawi, AM
    Srinivasan, RS
    Proceedings of the Seventh IASTED International Conference on Computer Graphics and Imaging, 2004: 171-176
  • [25] A method to deliver multi-object content in a ubiquitous environment
    Mori, T
    Katsumoto, M
    Multimedia Computing and Networking 2006, 2006, 6071
  • [26] Gesture-Based Manipulation of Virtual Terrains on an Augmented Reality Environment
    Ribeiro, Allan Amaral
    Oliveira, Douglas C. B.
    Silva, Rodrigo L. S.
    2017 19th Symposium on Virtual and Augmented Reality (SVR), 2017: 1-7
  • [27] Probabilistic Information Matrix Fusion in a Multi-Object Environment
    Yang, Kaipei
    Bar-Shalom, Yaakov
    2022 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2022
  • [28] A Multi-Object Grasp Technique for Placement of Objects in Virtual Reality
    Fernandez, Unai J.
    Elizondo, Sonia
    Iriarte, Naroa
    Morales, Rafael
    Ortiz, Amalia
    Marichal, Sebastian
    Ardaiz, Oscar
    Marzo, Asier
    Applied Sciences-Basel, 2022, 12 (9)
  • [29] Skillab - A Multimodal Augmented Reality Environment for Learning Manual Tasks
    Shahu, Ambika
    Dorfbauer, Sonja
    Wintersberger, Philipp
    Michahelles, Florian
    Human-Computer Interaction - INTERACT 2023, Pt III, 2023, 14144: 588-607
  • [30] [DEMO] G-SIAR: Gesture-Speech Interface for Augmented Reality
    Piumsomboon, Thammathip
    Clark, Adrian
    Billinghurst, Mark
    2014 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) - Science and Technology, 2014: 365-366