Hands and Speech in Space: Multimodal Interaction with Augmented Reality Interfaces

Cited by: 8
Author
Billinghurst, Mark [1 ]
Affiliation
[1] University of Canterbury, Human Interface Technology Laboratory New Zealand (HIT Lab NZ), Ilam Rd, Christchurch, New Zealand
Keywords
Augmented Reality; Multimodal Interfaces; Speech; Gesture
DOI
10.1145/2522848.2532202
CLC Classification Number
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
Augmented Reality (AR) is technology that allows virtual imagery to be seamlessly integrated into the real world. Although first developed in the 1960s, AR has only recently become widely available, through platforms such as the web and mobile phones. However, most AR interfaces support only very simple interaction, such as touch input on phone screens or camera tracking from real images. New depth-sensing and gesture-tracking technologies such as the Microsoft Kinect or Leap Motion have made it easier than ever before to track hands in space. Combined with speech recognition and AR tracking and viewing software, it is possible to create interfaces that allow users to manipulate 3D graphics in space through a natural combination of speech and gesture. In this paper I will review previous research in multimodal AR interfaces and give an overview of the significant research questions that need to be addressed before speech and gesture interaction can become commonplace.
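To make the interaction pattern in the abstract concrete, here is a minimal sketch of time-window multimodal fusion, the strategy behind "put that there" style interfaces: a recognized speech command is paired with the hand gesture closest to it in time, so the gesture supplies the spatial argument ("where") for the spoken verb ("what to do"). All type names, fields, and the 0.5 s window below are illustrative assumptions, not an API from the paper, the Kinect SDK, or Leap Motion.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical event types; a real system would receive these from a
# speech recognizer and a hand tracker (e.g. Kinect or Leap Motion).
@dataclass
class SpeechEvent:
    command: str      # recognized verb, e.g. "move", "rotate", "scale"
    timestamp: float  # seconds since session start

@dataclass
class GestureEvent:
    kind: str                              # e.g. "point", "grab", "pinch"
    position: tuple[float, float, float]   # hand position in world space
    timestamp: float

FUSION_WINDOW = 0.5  # seconds; speech and gesture must co-occur this closely

def fuse(speech: SpeechEvent, gestures: list[GestureEvent]) -> Optional[dict]:
    """Pair a spoken command with the gesture closest to it in time."""
    nearby = [g for g in gestures
              if abs(g.timestamp - speech.timestamp) <= FUSION_WINDOW]
    if not nearby:
        return None  # no gesture close enough in time; command is ambiguous
    best = min(nearby, key=lambda g: abs(g.timestamp - speech.timestamp))
    return {"action": speech.command, "target": best.position}

# Example: the user says "move" while pointing at a spot in space.
speech = SpeechEvent(command="move", timestamp=10.20)
gestures = [GestureEvent("point", (0.3, 1.1, -0.4), timestamp=10.05),
            GestureEvent("grab", (0.1, 0.9, -0.2), timestamp=8.70)]
print(fuse(speech, gestures))
# {'action': 'move', 'target': (0.3, 1.1, -0.4)}
```

Production systems refine this basic scheme with recognizer confidence scores and grammar constraints on which spoken verbs accept which gesture types.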
Pages: 379 - 380
Number of pages: 2
Related Papers (50 in total)
  • [31] ARtention: A design space for gaze-adaptive user interfaces in augmented reality
    Pfeuffer, Ken
    Abdrabou, Yasmeen
    Esteves, Augusto
    Rivu, Radiah
    Abdelrahman, Yomna
    Meitner, Stefanie
    Saadi, Amr
    Alt, Florian
    COMPUTERS & GRAPHICS-UK, 2021, 95 : 1 - 12
  • [32] Model-based Design of Multimodal Interaction for Augmented Reality Web Applications
    Feuerstack, Sebastian
    de Oliveira, Allan C. M.
    Anjo, Mauro dos Santos
    Araujo, Regina B.
    Pizzolato, Ednaldo B.
    WEB3D 2015, 2015, : 259 - 267
  • [33] Mind the Mix: Exploring the Cognitive Underpinnings of Multimodal Interaction in Augmented Reality Systems
    Lazaro, May Jorella
    Kim, Sungho
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [34] Multimodal augmented reality tangible gaming
    Liarokapis, Fotis
    Macan, Louis
    Malone, Gary
    Rebolledo-Mendez, Genaro
    de Freitas, Sara
    VISUAL COMPUTER, 2009, 25 (12): 1109 - 1120
  • [36] Usability of Cross-Device Interaction Interfaces for Augmented Reality in Physical Tasks
    Zhang, Xiaotian
    He, Weiping
    Billinghurst, Mark
    Liu, Daisong
    Yang, Lingxiao
    Feng, Shuo
    Liu, Yizhe
    INTERNATIONAL JOURNAL OF HUMAN-COMPUTER INTERACTION, 2024, 40 (09) : 2361 - 2379
  • [37] Augmented Reality Interfaces for Additive Manufacturing
    Eiriksson, Eythor R.
    Pedersen, David B.
    Frisvad, Jeppe R.
    Skovmand, Linda
    Heun, Valentin
    Maes, Pattie
    Aanaes, Henrik
    IMAGE ANALYSIS, SCIA 2017, PT I, 2017, 10269 : 515 - 525
  • [38] Rapid Prototyping of Augmented Reality & Virtual Reality Interfaces
    Nebeling, Michael
    CHI EA '19: EXTENDED ABSTRACTS OF THE 2019 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, 2019,
  • [39] Speech-centric multimodal interfaces
    Flanagan, JL
    IEEE SIGNAL PROCESSING MAGAZINE, 2004, 21 (06) : 76 - 81
  • [40] Understanding Gesture and Speech Multimodal Interactions for Manipulation Tasks in Augmented Reality Using Unconstrained Elicitation
    Williams, A. S.
    Ortega, F. R.
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2020, 4 (ISS)