DeepNAVI: A deep learning based smartphone navigation assistant for people with visual impairments

Cited by: 19
Authors
Kuriakose, Bineeth [1 ]
Shrestha, Raju [1 ]
Sandnes, Frode Eika [1 ]
Affiliations
[1] Oslo Metropolitan Univ, Dept Comp Sci, Oslo, Norway
Keywords
Navigation assistant; Deep learning; Blind; Visual impairment; Portable; Smartphone
DOI
10.1016/j.eswa.2022.118720
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Navigation assistance is an active research area, one aim of which is to foster independent living for people with vision impairments. Although many navigation assistants use advanced technologies and methods, we found that they did not explicitly address two essential requirements of a navigation assistant: portability and convenience. It is equally imperative when designing a navigation assistant for the visually impaired that the device be portable and convenient to use without much training. Some navigation assistants do not provide users with detailed information about the types of obstacles that can be detected, which is essential for making informed decisions when navigating in real time. To address these gaps, we propose DeepNAVI, a smartphone-based navigation assistant that leverages deep learning. Besides providing information about the types of obstacles present, our system can also provide information about their position, distance from the user, and motion status, as well as scene information. All this information is offered to users through audio without compromising portability and convenience. With a small model size and rapid inference time, our navigation assistant can be deployed on a portable device such as a smartphone and work seamlessly in a real-time environment. We conducted a pilot test with a user to assess the usefulness and practicality of the system. Our testing results indicate that our system has the potential to be a practical and useful navigation assistant for the visually impaired.
Pages: 16
Related papers
50 records
  • [21] Visual Navigation With Multiple Goals Based on Deep Reinforcement Learning
    Rao, Zhenhuan
    Wu, Yuechen
    Yang, Zifei
    Zhang, Wei
    Lu, Shijian
    Lu, Weizhi
    Zha, ZhengJun
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (12) : 5445 - 5455
  • [22] Visual Recognition Based on Deep Learning for Navigation Mark Classification
    Pan, Mingyang
    Liu, Yisai
    Cao, Jiayi
    Li, Yu
    Li, Chao
    Chen, Chi-Hua
    IEEE ACCESS, 2020, 8 : 32767 - 32775
  • [23] Deep learning based decomposition for visual navigation in industrial platforms
    Djenouri, Youcef
    Hatleskog, Johan
    Hjelmervik, Jon
    Bjorne, Elias
    Utstumo, Trygve
    Mobarhan, Milad
    APPLIED INTELLIGENCE, 2022, 52 (07) : 8101 - 8117
  • [24] A DEEP LEARNING BASED MODEL TO ASSIST BLIND PEOPLE IN THEIR NAVIGATION
    Kumar, Nitin
    Jain, Anuj
    JOURNAL OF INFORMATION TECHNOLOGY EDUCATION-INNOVATIONS IN PRACTICE, 2022, 21 : 95 - 114
  • [25] Deep learning-based visual control assistant for assembly in Industry 4.0
    Zamora-Hernandez, Mauricio-Andres
    Castro-Vargas, John Alejandro
    Azorin-Lopez, Jorge
    Garcia-Rodriguez, Jose
    COMPUTERS IN INDUSTRY, 2021, 131 (131)
  • [26] Challenges in Acceptance of Smartphone-Based Assistive Technologies: Extending the UTAUT Model for People With Visual Impairments
    Theodorou, Paraskevi
    Tsiligkos, Kleomenis
    Meliones, Apostolos
    JOURNAL OF VISUAL IMPAIRMENT & BLINDNESS, 2024, 118 (01) : 18 - 30
  • [27] Door recognition and deep learning algorithm for visual based robot navigation
    Chen, Wei
    Qu, Ting
    Zhou, Yimin
    Weng, Kaijian
    Wang, Gang
    Fu, Guoqiang
    2014 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND BIOMIMETICS IEEE-ROBIO 2014, 2014, : 1793 - 1798
  • [28] Visual Semantic Navigation Based on Deep Learning for Indoor Mobile Robots
    Wang, Li
    Zhao, Lijun
    Huo, Guanglei
    Li, Ruifeng
    Hou, Zhenghua
    Luo, Pan
    Sun, Zhenye
    Wang, Ke
    Yang, Chenguang
    COMPLEXITY, 2018
  • [29] Integrated Online Localization and Navigation for People with Visual Impairments using Smart Phones
    Apostolopoulos, Ilias
    Fallah, Navid
    Folmer, Eelke
    Bekris, Kostas E.
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 1322 - 1329
  • [30] Integrated Online Localization and Navigation for People with Visual Impairments Using Smart Phones
    Apostolopoulos, Ilias
    Fallah, Navid
    Folmer, Eelke
    Bekris, Kostas E.
    ACM TRANSACTIONS ON INTERACTIVE INTELLIGENT SYSTEMS, 2014, 3 (04)