PrISM-Q&A: Step-Aware Voice Assistant on a Smartwatch Enabled by Multimodal Procedure Tracking and Large Language Models

Times Cited: 0
Authors
Arakawa, Riku [1 ]
Lehman, Jill Fain [1 ]
Goel, Mayank [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Source
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) | 2024, Vol. 8, No. 4
Funding
Andrew Mellon Foundation, USA;
Keywords
context-aware; procedure tracking; task assistance; large language models; question answering;
DOI
10.1145/3699759
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Voice assistants capable of answering user queries during various physical tasks have shown promise in guiding users through complex procedures. However, users often find it challenging to articulate their queries precisely, especially when unfamiliar with the specific terminology required for machine-oriented tasks. We introduce PrISM-Q&A, a novel question-answering (Q&A) interaction termed step-aware Q&A, which enhances the functionality of voice assistants on smartwatches by incorporating Human Activity Recognition (HAR) to provide the system with user context. It continuously monitors user behavior during procedural tasks via the watch's audio and motion sensors and estimates which step the user is performing. When a question is posed, this step estimate is supplied to Large Language Models (LLMs) as part of the context used to generate a response, even for inherently vague questions like "What should I do next with this?" Our studies confirmed that users preferred the convenience of our approach over existing voice assistants. Our real-time assistant represents the first Q&A system that provides contextually situated support during tasks without camera use, paving the way for ubiquitous, intelligent assistants.
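To make the interaction described above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: a HAR component estimates the user's current procedure step from smartwatch audio and motion streams, and that estimate is injected into the LLM prompt so an otherwise vague question can be grounded in context. All names, the step list, and the prompt wording are illustrative assumptions; the placeholder step estimator simply returns a fixed prediction.

```python
# Illustrative sketch of step-aware Q&A prompt construction (assumptions, not the paper's code).
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Step:
    index: int
    description: str


# Hypothetical procedure, e.g., a pour-over coffee task.
PROCEDURE: List[Step] = [
    Step(1, "Grind the coffee beans"),
    Step(2, "Boil water"),
    Step(3, "Pour water over the grounds"),
]


def estimate_current_step(audio_features: Optional[list],
                          motion_features: Optional[list]) -> Step:
    """Placeholder for the multimodal HAR model that tracks the user's
    progress from the watch's audio and IMU streams. Here it just returns
    a fixed, hypothetical prediction."""
    return PROCEDURE[1]


def build_prompt(question: str, current: Step) -> str:
    """Combine the full step list, the estimated current step, and the
    user's question into one context-rich prompt for the LLM."""
    steps_text = "\n".join(f"{s.index}. {s.description}" for s in PROCEDURE)
    return (
        "You are assisting a user through a procedure.\n"
        f"Procedure steps:\n{steps_text}\n"
        f"The user appears to be on step {current.index}: {current.description}.\n"
        f"User question: {question}\n"
        "Answer with respect to the user's current step."
    )


if __name__ == "__main__":
    step = estimate_current_step(audio_features=None, motion_features=None)
    print(build_prompt("What should I do next with this?", step))
```

Under these assumptions, the LLM can resolve the deictic "this" to the object involved in the estimated step (here, the boiling water) rather than requiring the user to name it explicitly.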
Pages: 26