Constraining user response via multimodal dialog interface

Cited by: 3
Authors
Baker K. [1 ]
Mckenzie A. [2 ]
Biermann A. [2 ]
Webelhuth G. [3 ]
Affiliations
[1] Linguistics Department, Ohio State University, 222 Oxley Hall, Columbus, OH 43210-1298
[2] Department of Computer Science, Duke University, Durham, NC 27708-0129
[3] Sem. für Englische Philologie, Georg-August-Univ. Göttingen, 37073 Göttingen
Keywords
Constrain user response; Multimodal dialog interface; Speech recognition;
DOI
10.1023/B:IJST.0000037069.82313.57
Abstract
This paper presents the results of an experiment comparing two different designs of an automated dialog interface. We compare a multimodal design utilizing text displays coordinated with spoken prompts to a voice-only version of the same application. Our results show that the text-coordinated version is more efficient in terms of word recognition and number of out-of-grammar responses, and is equal to the voice-only version in terms of user satisfaction. We argue that this type of multimodal dialog interface effectively constrains user response to allow for better speech recognition without increasing cognitive load or compromising the naturalness of the interaction.
Pages: 251-258
Page count: 7
Related papers
50 items total
  • [41] Construction of Multimodal Dialog System via Knowledge Graph in Travel Domain
    Wan, Jing
    Yuan, Minghui
    Dong, Zhenhao
    Hou, Lei
    Xie, Jiawang
    Zhu, Hongyin
    Wen, Qinghua
    WEB AND BIG DATA, PT IV, APWEB-WAIM 2023, 2024, 14334 : 422 - 437
  • [42] A Spoken Dialog System with Redundant Response to Prevent User Misunderstanding
    Yamaoka, Masaki
    Hara, Sunao
    Abe, Masanobu
    2015 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA), 2015, : 229 - 232
  • [43] Modular multimodal user interface for distributed ambient intelligence architectures
    La Tona, Giuseppe
    Petitti, Antonio
    Lorusso, Adele
    Colella, Roberto
    Milella, Annalisa
    Attolico, Giovanni
    INTERNET TECHNOLOGY LETTERS, 2018, 1 (02):
  • [44] Ensuring a Robust Multimodal Conversational User Interface During Maintenance Work
    Fleiner, Christian
    Riedel, Till
    Beigl, Michael
    Ruoff, Marcel
    MENSCH AND COMPUTER 2021 (MUC 21), 2021, : 79 - 91
  • [45] Research on the key techniques and developing trends of multimodal user interface
    Pu, J.-T.
    Chen, W.-G.
    Wang, H.
    Dong, S.-H.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2001, 38 (06): : 684 - 690
  • [46] A Cascaded Multimodal Natural User Interface to Reduce Driver Distraction
    Kim, Myeongseop
    Seong, Eunjin
    Jwa, Younkyung
    Lee, Jieun
    Kim, Seungjun
    IEEE ACCESS, 2020, 8 : 112969 - 112984
  • [47] Distributed speech processing in MiPad's multimodal user interface
    Deng, L
    Wang, KS
    Acero, A
    Hon, HW
    Droppo, J
    Boulis, C
    Wang, YY
    Jacoby, D
    Mahajan, M
    Chelba, C
    Huang, XD
    IEEE TRANSACTIONS ON SPEECH AND AUDIO PROCESSING, 2002, 10 (08): : 605 - 619
  • [48] Multimodal user interface for traffic incident management in control room
    Choi, E. H. C.
    Taib, R.
    Shi, Y.
    Chen, F.
    IET INTELLIGENT TRANSPORT SYSTEMS, 2007, 1 (01) : 27 - 36
  • [49] Bypassing Bluetooth device discovery using a multimodal user interface
    Engelsma, Jonathan R.
    Ferrans, James C.
    2007 FOURTH ANNUAL INTERNATIONAL CONFERENCE ON MOBILE AND UBIQUITOUS SYSTEMS: NETWORKING & SERVICES, 2007, : 26 - 34
  • [50] Maintenance support - Case study for a multimodal mobile user interface
    Fuchs, G
    Reichart, D
    Schumann, H
    Forbrig, P
    MULTIMEDIA ON MOBILE DEVICES II, 2006, 6074