Hands and Speech in Space: Multimodal Interaction with Augmented Reality Interfaces

Cited by: 8
Author(s)
Billinghurst, Mark [1 ]
Affiliation(s)
[1] Univ Canterbury, Human Interface Technol Lab New Zealand, Ilam Rd, Christchurch, New Zealand
Keywords
Augmented Reality; Multimodal Interfaces; Speech; Gesture
DOI
10.1145/2522848.2532202
CLC number (Chinese Library Classification)
TP301 [Theory and Methods]
Subject classification code
081202
Abstract
Augmented Reality (AR) is a technology that allows virtual imagery to be seamlessly integrated into the real world. Although first developed in the 1960s, AR has only recently become widely available, through platforms such as the web and mobile phones. However, most AR interfaces offer only very simple interaction, such as touch on phone screens or camera tracking from real images. New depth-sensing and gesture-tracking technologies such as the Microsoft Kinect or Leap Motion have made it easier than ever before to track hands in space. Combined with speech recognition and AR tracking and viewing software, they make it possible to create interfaces that allow users to manipulate 3D graphics in space through a natural combination of speech and gesture. In this paper I will review previous research on multimodal AR interfaces and give an overview of the significant research questions that need to be addressed before speech and gesture interaction can become commonplace.
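As a rough illustration of the kind of speech-and-gesture combination the abstract describes, the sketch below pairs a recognized speech command with the hand-tracking sample closest to it in time, in the style of "put that there" interaction. It is a minimal, hypothetical example, not an API or algorithm from the paper, Kinect, Leap Motion, or any AR SDK; all class and function names are invented for illustration.

```python
# Minimal sketch of time-window fusion between a speech command and tracked
# hand data. All names (SpeechEvent, GestureEvent, fuse) are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SpeechEvent:
    command: str            # e.g. "move", "rotate", "scale"
    timestamp: float        # seconds

@dataclass
class GestureEvent:
    position: Tuple[float, float, float]  # hand position in AR world space
    timestamp: float        # seconds

def fuse(speech: SpeechEvent,
         gestures: List[GestureEvent],
         window: float = 0.5) -> Optional[dict]:
    """Pair a speech command with the gesture sample closest in time.

    Returns a manipulation request only if a gesture falls within `window`
    seconds of the utterance; otherwise the input is treated as ambiguous.
    """
    if not gestures:
        return None
    nearest = min(gestures, key=lambda g: abs(g.timestamp - speech.timestamp))
    if abs(nearest.timestamp - speech.timestamp) > window:
        return None
    return {"action": speech.command, "target_point": nearest.position}

# Example: "move" spoken at t=10.2 s while the hand points at (0.1, 0.3, 0.8)
request = fuse(SpeechEvent("move", 10.2),
               [GestureEvent((0.1, 0.3, 0.8), 10.1),
                GestureEvent((0.4, 0.2, 0.9), 12.0)])
print(request)  # {'action': 'move', 'target_point': (0.1, 0.3, 0.8)}
```

A fixed time window is only one possible fusion strategy; real multimodal AR systems typically weigh recognition confidence and spatial context as well, which is part of the open research space the paper discusses.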
Pages: 379-380
Number of pages: 2