Hands and Speech in Space: Multimodal Interaction with Augmented Reality Interfaces

Cited by: 8
Authors
Billinghurst, Mark [1 ]
Institutions
[1] Univ Canterbury, Human Interface Technol Lab New Zealand, Ilam Rd, Christchurch, New Zealand
Keywords
Augmented Reality; Multimodal Interfaces; Speech; Gesture;
DOI
10.1145/2522848.2532202
Chinese Library Classification: TP301 [Theory, Methods]
Subject Classification Code: 081202
Abstract
Augmented Reality (AR) is technology that allows virtual imagery to be seamlessly integrated into the real world. Although first developed in the 1960s, AR has only recently become widely available, through platforms such as the web and mobile phones. However, most AR interfaces offer only very simple interaction, such as touch on phone screens or camera tracking from real images. New depth-sensing and gesture-tracking technologies such as the Microsoft Kinect or Leap Motion have made it easier than ever before to track hands in space. Combined with speech recognition and AR tracking and viewing software, it is possible to create interfaces that allow users to manipulate 3D graphics in space through a natural combination of speech and gesture. In this paper I review previous research in multimodal AR interfaces and give an overview of the significant research questions that need to be addressed before speech and gesture interaction can become commonplace.
Pages: 379-380
Page count: 2
Related Papers
50 records in total
  • [1] Hands in Space: Gesture Interaction with Augmented-Reality Interfaces
    Billinghurst, Mark
    Piumsomboon, Tham
    Bai, Huidong
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2014, 34 (01) : 77 - 81
  • [2] Multimodal Interaction in Augmented Reality
    Chen, Zhaorui
    Li, Jinzhou
    Hua, Yifan
    Shen, Rui
    Basu, Anup
    2017 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2017, : 206 - 209
  • [3] Multimodal interaction with a wearable augmented reality system
    Kölsch, M
    Bane, R
    Höllerer, T
    Turk, M
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2006, 26 (03) : 62 - 71
  • [4] Multimodal Fusion: Gesture and Speech Input in Augmented Reality Environment
    Ismail, Ajune Wanis
    Sunar, Mohd Shahrizal
    COMPUTATIONAL INTELLIGENCE IN INFORMATION SYSTEMS, 2015, 331 : 245 - 254
  • [5] Using Hand Gesture and Speech in a Multimodal Augmented Reality Environment
    Dias, Miguel Sales
    Bastos, Rafael
    Fernandes, Joao
    Tavares, Joao
    Santos, Pedro
    GESTURE-BASED HUMAN-COMPUTER INTERACTION AND SIMULATION, 2009, 5085 : 175 - +
  • [6] Multimodal Interaction Concepts for Mobile Augmented Reality Applications
    Hurst, Wolfgang
    van Wezel, Casper
    ADVANCES IN MULTIMEDIA MODELING, PT II, 2011, 6524 : 157 - 167
  • [7] Multimodal, Touchless Interaction in Spatial Augmented Reality Environments
    Elepfandt, Monika
    Suenderhauf, Marcelina
    DIGITAL HUMAN MODELING, 2011, 6777 : 263 - 271
  • [8] ARZombie: A Mobile Augmented Reality Game with Multimodal Interaction
    Cordeiro, Diogo
    Correia, Nuno
    Jesus, Rui
    PROCEEDINGS OF THE 2015 7TH INTERNATIONAL CONFERENCE ON INTELLIGENT TECHNOLOGIES FOR INTERACTIVE ENTERTAINMENT, 2015, : 22 - 31
  • [9] Multimodal Interaction Framework for Collaborative Augmented Reality in Education
    Asiri, Dalia Mohammed Eissa
    Allehaibi, Khalid Hamed
    Basori, Ahmad Hoirul
    INTERNATIONAL JOURNAL OF COMPUTER SCIENCE AND NETWORK SECURITY, 2022, 22 (07): : 268 - 282
  • [10] Hands-Free Interaction for Augmented Reality in Vascular Interventions
    Grinshpoon, Alon
    Sadri, Shirin
    Loeb, Gabrielle J.
    Elvezio, Carmine
    Feiner, Steven K.
    25TH IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES (VR), 2018, : 751 - 752