Acoustic-based Alphanumeric Input Interface for Earables

Cited by: 0
Authors
Wang, Yilin [1 ]
Wang, Zi [2 ]
Yang, Jie [1 ]
Institutions
[1] Florida State Univ, Dept Comp Sci, Tallahassee, FL 32306 USA
[2] Augusta Univ, Sch Comp & Cyber Sci, Augusta, GA 30912 USA
Keywords
Earable; Face and Ear Interaction; Gesture Recognition; Acoustic Sensing
DOI
10.1109/ICCCN61486.2024.10637602
CLC Classification
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
As earables gain popularity, there is a growing need for intuitive user interfaces that adapt to diverse daily scenarios. Traditional input methods such as touchscreens and voice control often fall short in environments like movie theatres, where silence and darkness are required, or on busy streets, where visual distraction introduces extra risk. We propose an earable-based system that uses the distinctive acoustic friction generated by fingers for alphanumeric input. Our approach draws on acoustic friction theory to better understand the transformation of 2D handwriting into a 1D acoustic time series, and this theoretical foundation guides our system design and feature extraction. Specifically, we redesign certain characters to enhance their acoustic distinctiveness without compromising users' natural handwriting style, keeping the system user-friendly. Our system combines DenseNet and GRU architectures in a multimodal model, refined through transfer learning to adapt to diverse user behaviors. Tested in real-world scenarios with 10 participants, the system achieves 95% accuracy in recognizing both letters and numbers.
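The abstract's multimodal model pairs DenseNet with a GRU, the latter capturing the temporal structure of the 1-D friction signal. As a hedged illustration of the GRU component only — the paper's actual architecture, feature dimensions, and training code are not given in this record, so every name and size below is an assumption — here is a minimal NumPy GRU cell that folds a sequence of acoustic feature frames into a single embedding:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: its hidden state summarizes a 1-D time series."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        def W(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        # Update gate, reset gate, and candidate-state parameters.
        self.Wz, self.Uz = W(hidden_dim, input_dim), W(hidden_dim, hidden_dim)
        self.Wr, self.Ur = W(hidden_dim, input_dim), W(hidden_dim, hidden_dim)
        self.Wh, self.Uh = W(hidden_dim, input_dim), W(hidden_dim, hidden_dim)
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)               # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)               # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))   # candidate state
        return (1.0 - z) * h + z * h_tilde                   # interpolated update

    def encode(self, sequence):
        """Fold a (T, input_dim) feature sequence into one hidden vector."""
        h = np.zeros(self.hidden_dim)
        for x in sequence:
            h = self.step(x, h)
        return h

# Toy stand-in for per-frame acoustic features: 50 frames, 8 dims each.
frames = np.random.default_rng(1).normal(size=(50, 8))
embedding = GRUCell(input_dim=8, hidden_dim=16).encode(frames)
print(embedding.shape)  # (16,)
```

In a multimodal design along the lines the abstract describes, such a sequence embedding would be concatenated with DenseNet-derived features before a final classification layer; the hyperparameters here are illustrative only.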
Pages: 9