Constraining user response via multimodal dialog interface

Cited by: 3
Authors
Baker K. [1 ]
Mckenzie A. [2 ]
Biermann A. [2 ]
Webelhuth G. [3 ]
Affiliations
[1] Linguistics Department, Ohio State University, 222 Oxley Hall, Columbus, OH 43210-1298
[2] Department of Computer Science, Duke University, Durham, NC 27708-0129
[3] Sem. für Englische Philologie, Georg-August-Univ. Göttingen, 37073 Göttingen
Keywords
Constrain user response; Multimodal dialog interface; Speech recognition
DOI
10.1023/B:IJST.0000037069.82313.57
Abstract
This paper presents the results of an experiment comparing two different designs of an automated dialog interface. We compare a multimodal design utilizing text displays coordinated with spoken prompts to a voice-only version of the same application. Our results show that the text-coordinated version is more efficient in terms of word recognition and number of out-of-grammar responses, and is equal to the voice-only version in terms of user satisfaction. We argue that this type of multimodal dialog interface effectively constrains user response to allow for better speech recognition without increasing cognitive load or compromising the naturalness of the interaction.
Pages: 251-258
Page count: 7
Related articles
50 results
  • [1] ENGINEERING USER MODELS TO ENHANCE MULTIMODAL DIALOG
    CHAPPEL, HR
    WILSON, MD
    CAHOUR, B
    JARVINEN, P
    KAZMAN, R
    SCHNEIDER-HUFSCHMIDT, M
    KAKEHI, K
    COUTAZ, J
    STIEGLER, H
    HARRISON, M
    COCKTON, G
    IFIP TRANSACTIONS A-COMPUTER SCIENCE AND TECHNOLOGY, 1992, 18 : 297 - 315
  • [2] User performance improvement via multimodal interface fusion augmentation
    Plano, S
    Blasch, EP
    FUSION 2003: PROCEEDINGS OF THE SIXTH INTERNATIONAL CONFERENCE OF INFORMATION FUSION, VOLS 1 AND 2, 2003, : 514 - 521
  • [3] Augmented Reality Dialog Interface for Multimodal Teleoperation
    Pereira, Andre
    Carter, Elizabeth J.
    Leite, Iolanda
    Mars, John
    Lehman, Jill Fain
    2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017, : 764 - 771
  • [4] Multimodal user interface for internet
    Dong, S.H.
    Xiao, B.
    Wang, G.P.
    Jisuanji Xuebao/Chinese Journal of Computers, 2000, 23 (12): : 1270 - 1275
  • [5] A novel dialog model for the design of multimodal user interfaces
    Schaefer, R
    Bleul, S
    Mueller, W
    ENGINEERING HUMAN COMPUTER INTERACTION AND INTERACTIVE SYSTEMS, 2005, 3425 : 221 - 223
  • [6] User Attention-guided Multimodal Dialog Systems
    Cui, Chen
    Wang, Wenjie
    Song, Xuemeng
    Huang, Minlie
    Xu, Xin-Shun
    Nie, Liqiang
    PROCEEDINGS OF THE 42ND INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '19), 2019, : 445 - 454
  • [7] Supporting multiple user types with a multimodal dialog agent
    Groble, Michael
    Thompson, Will
    PROCEEDING OF THE 2007 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WORKSHOPS, 2007, : 329 - 332
  • [8] Dialog model clustering for user interface adaptation
    Menkhaus, G
    Fischmeister, S
    WEB ENGINEERING, PROCEEDINGS, 2003, 2722 : 194 - 203
  • [9] A tangible user interface with multimodal feedback
    Kim, Laehyun
    Cho, Hyunchul
    Park, Sehyung
    Han, Manchul
    HUMAN-COMPUTER INTERACTION, PT 3, PROCEEDINGS, 2007, 4552 : 94 - +
  • [10] Multimodal user interface for the communication of the disabled
    Savvas Argyropoulos
    Konstantinos Moustakas
    Alexey A. Karpov
    Oya Aran
    Dimitrios Tzovaras
    Thanos Tsakiris
    Giovanna Varni
    Byungjun Kwon
    Journal on Multimodal User Interfaces, 2008, 2 : 105 - 116