Performance Assessment of Fine-Tuned Barrier Recognition Models in Varying Conditions

Cited by: 0
Authors
Thoma, Marios [1 ,2 ]
Partaourides, Harris [1 ]
Sreedharan, Ieswaria [1 ]
Theodosiou, Zenonas [1 ,3 ]
Michael, Loizos [1 ,2 ]
Lanitis, Andreas [1 ,4 ]
Affiliations
[1] CYENS Ctr Excellence, Nicosia, Cyprus
[2] Open Univ Cyprus, Nicosia, Cyprus
[3] Cyprus Univ Technol, Dept Commun & Internet Studies, Limassol, Cyprus
[4] Cyprus Univ Technol, Dept Multimedia & Graph Arts, Limassol, Cyprus
Keywords
Pedestrian Safety; Egocentric Dataset; Barrier Recognition; Deep Learning;
DOI
10.1007/978-3-031-44240-7_17
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Walking has been widely promoted by various medical institutions as a major contributor to the physical activity that keeps people healthy. However, pedestrian safety remains a critical concern due to barriers present on sidewalks, such as bins, poles, and trees. Although pedestrians are generally cautious, these barriers can pose a significant risk to vulnerable groups, such as the visually impaired and the elderly. To address this issue, accurate and robust computer vision models can be used to detect barriers on pedestrian pathways in real time. In this study, we assess the performance of fine-tuned egocentric barrier recognition models under varying conditions, such as lighting, angle of view, video frame rate, and level of obstruction. In this context, we collected a dataset of different barriers and fine-tuned two representative image recognition models, assessing their performance on a set of videos taken along a predefined route. Our findings provide guidelines for retaining model performance in applications that use barrier recognition models under varying environmental conditions.
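The assessment described in the abstract — measuring a recognition model's accuracy separately under each environmental condition (lighting, angle of view, frame rate, level of obstruction) — amounts to a per-condition aggregation over frame-level predictions. A minimal sketch of that aggregation is shown below; the function name, condition labels, and sample records are illustrative assumptions, not taken from the paper:

```python
from collections import defaultdict

def per_condition_accuracy(records):
    """Aggregate recognition accuracy per environmental condition.

    records: iterable of (condition, predicted_label, true_label) tuples,
    one per video frame. Returns a dict {condition: accuracy}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for condition, pred, true in records:
        total[condition] += 1
        correct[condition] += int(pred == true)
    return {c: correct[c] / total[c] for c in total}

# Hypothetical frame-level predictions from a barrier recognition model:
records = [
    ("daylight", "bin", "bin"),
    ("daylight", "pole", "pole"),
    ("daylight", "tree", "pole"),   # misclassification
    ("low_light", "bin", "tree"),   # misclassification
    ("low_light", "pole", "pole"),
]
print(per_condition_accuracy(records))
```

Reporting accuracy per condition rather than as a single aggregate is what lets a study like this derive guidelines for which conditions degrade model performance.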
Pages: 172-181
Page count: 10
Related Papers
50 items in total
  • [1] Fine-Tuned Face Recognition Models for Sibling Discrimination
    Goel, Rita
    Alamgir, Maida
    Wahab, Haroon
    Mehmood, Irfan
    Ugail, Hassan
    4TH INTERDISCIPLINARY CONFERENCE ON ELECTRICS AND COMPUTER, INTCEC 2024, 2024,
  • [2] Exploring Memorization in Fine-tuned Language Models
    Zeng, Shenglai
    Li, Yaxin
    Ren, Jie
    Liu, Yiding
    Xu, Han
    He, Pengfei
    Xing, Yue
    Wang, Shuaiqiang
    Tang, Jiliang
    Yin, Dawei
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 3917 - 3948
  • [3] Fingerprinting Fine-tuned Language Models in the Wild
    Diwan, Nirav
    Chakravorty, Tanmoy
    Shafiq, Zubair
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 4652 - 4664
  • [5] Exploring the performance and explainability of fine-tuned BERT models for neuroradiology protocol assignment
    Talebi, Salmonn
    Tong, Elizabeth
    Li, Anna
    Yamin, Ghiam
    Zaharchuk, Greg
    Mofrad, Mohammad R. K.
    BMC MEDICAL INFORMATICS AND DECISION MAKING, 2024, 24 (01)
  • [7] On the Importance of Data Size in Probing Fine-tuned Models
    Mehrafarin, Houman
    Rajaee, Sara
    Pilehvar, Mohammad Taher
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2022), 2022, : 228 - 238
  • [8] Fine-tuned CLIP Models are Efficient Video Learners
    Rasheed, Hanoona
    Khattak, Muhammad Uzair
    Maaz, Muhammad
    Khan, Salman
    Khan, Fahad Shahbaz
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 6545 - 6554
  • [9] Fine-Tuned Pre-Trained Model for Script Recognition
    Bisht, Mamta
    Gupta, Richa
    INTERNATIONAL JOURNAL OF MATHEMATICAL ENGINEERING AND MANAGEMENT SCIENCES, 2021, 6 (05) : 1297 - 1314
  • [10] Fostering Judiciary Applications with New Fine-Tuned Models for Legal Named Entity Recognition in Portuguese
    Zanuz, Luciano
    Rigo, Sandro Jose
    COMPUTATIONAL PROCESSING OF THE PORTUGUESE LANGUAGE, PROPOR 2022, 2022, 13208 : 219 - 229