Robust Visual Recognition in Poor Visibility Conditions: A Prior Knowledge-Guided Adversarial Learning Approach

Cited by: 2
Authors
Yang, Jiangang [1 ]
Yang, Jianfei [2 ]
Luo, Luqing [1 ]
Wang, Yun [3 ]
Wang, Shizheng [4 ]
Liu, Jian [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Microelect, Beijing 100029, Peoples R China
[2] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
[3] Guangdong Greater Bay Area Inst Integrated Circuit, Guangzhou 510535, Peoples R China
[4] Chinese Acad Sci, R&D Ctr Internet Things, Wuxi 214200, Peoples R China
Keywords
robust visual recognition; poor visibility conditions; unsupervised domain adaptation; image restoration; IMAGE; NETWORK;
DOI
10.3390/electronics12173711
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Deep learning has achieved remarkable success in numerous computer vision tasks. However, recent research reveals that deep neural networks are vulnerable to natural perturbations caused by poor visibility conditions, which limits their practical application. While several studies have sought to improve model robustness under poor visibility through techniques such as image restoration, data augmentation, and unsupervised domain adaptation, these efforts are largely confined to specific scenarios and do not address the multiple poor visibility conditions encountered in real-world settings. Furthermore, the valuable prior knowledge inherent in poor visibility images is seldom exploited to aid high-level computer vision tasks. In light of these challenges, we propose a novel deep learning paradigm designed to strengthen the robustness of object recognition across diverse poor visibility scenes. Drawing on the prior information shared by diverse poor visibility scenes, we integrate a prior-knowledge-based feature matching module into the proposed learning paradigm, which helps deep models learn more robust generic features at shallow layers. To further enhance the robustness of deep features, we employ an adversarial learning strategy based on mutual information; combined with the feature matching module, it extracts task-specific representations from low visibility scenes in a more robust manner, thereby improving the robustness of object recognition. We evaluate our approach on self-constructed datasets covering diverse poor visibility scenes, including visual blur, fog, rain, snow, and low illuminance. Extensive experiments demonstrate that the proposed method yields significant improvements over existing solutions across various poor visibility conditions.
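Read as a method summary, the abstract points to two ingredients: a feature matching objective that aligns shallow features of degraded images with a prior (clean) reference, and an adversarial objective that discourages the encoder from retaining degradation-specific information (a mutual-information-style criterion). The PyTorch sketch below is a minimal, hypothetical illustration of those two losses only; every name here (ShallowEncoder, DomainDiscriminator, the loss weights) is an assumption for illustration and is not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): prior-guided feature matching on
# shallow features plus an adversarial domain loss that pushes degraded-image
# features to be indistinguishable from clean-image features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowEncoder(nn.Module):
    """Toy stand-in for the early (shallow) layers of a recognition backbone."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)

class DomainDiscriminator(nn.Module):
    """Predicts whether pooled features come from clean or degraded images."""
    def __init__(self, channels=32):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feat):
        pooled = F.adaptive_avg_pool2d(feat, 1).flatten(1)
        return self.fc(pooled)

def feature_matching_loss(feat_degraded, feat_clean):
    """Match shallow features of a degraded image to those of its clean reference."""
    return F.l1_loss(feat_degraded, feat_clean)

def adversarial_losses(disc, feat_degraded, feat_clean):
    """Discriminator separates the two domains; the encoder is trained to fool it,
    roughly limiting how much the features reveal about the degradation."""
    logits = torch.cat([disc(feat_clean), disc(feat_degraded)], dim=0)
    labels = torch.cat([torch.ones(feat_clean.size(0), 1),
                        torch.zeros(feat_degraded.size(0), 1)], dim=0)
    d_loss = F.binary_cross_entropy_with_logits(logits, labels)
    # Encoder objective: make degraded features look "clean" to the discriminator.
    g_loss = F.binary_cross_entropy_with_logits(
        disc(feat_degraded), torch.ones(feat_degraded.size(0), 1))
    return d_loss, g_loss

if __name__ == "__main__":
    enc, disc = ShallowEncoder(), DomainDiscriminator()
    clean = torch.randn(4, 3, 64, 64)
    degraded = clean + 0.3 * torch.randn_like(clean)  # stand-in for fog/blur/noise
    f_c, f_d = enc(clean), enc(degraded)
    match = feature_matching_loss(f_d, f_c)
    d_loss, g_loss = adversarial_losses(disc, f_d, f_c)
    # A real training loop would alternate discriminator and encoder updates
    # (detaching features for the discriminator step); weights are illustrative.
    total_encoder_loss = match + 0.1 * g_loss
    print(match.item(), d_loss.item(), g_loss.item())
```

The sketch only shows how the two losses could be wired together; the paper additionally trains a recognition head on top of these features, which is omitted here.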
Pages: 19
Related Papers
33 records in total
  • [1] Robust Visual Recognition in Poor Visibility Conditions: A Prior Knowledge-Guided Adversarial Learning Approach (Vol 12, 3711, 2023). Yang, Jiangang; Yang, Jianfei; Luo, Luqing; Wang, Yun; Wang, Shizheng; Liu, Jian. ELECTRONICS, 2024, 13 (03)
  • [2] A prior knowledge-guided distributionally robust optimization-based adversarial training strategy for medical image classification. Jiang, Shancheng; Wu, Zehui; Yang, Haiqiong; Xiang, Kun; Ding, Weiping; Chen, Zhen-Song. INFORMATION SCIENCES, 2024, 673
  • [3] A knowledge-guided process planning approach with reinforcement learning. Zhang, Lijun; Wu, Hongjin; Chen, Yelin; Wang, Xuesong; Peng, Yibing. JOURNAL OF ENGINEERING DESIGN, 2024
  • [4] Prior Knowledge-Guided Deep Learning Algorithms for Metantenna Design (Invited). Liu, Peiqin; Chen, Zhi Ning. 2024 IEEE INTERNATIONAL WORKSHOP ON ANTENNA TECHNOLOGY, IWAT, 2024: 11-13
  • [5] Learning Multi-Scale Knowledge-Guided Features for Text-Guided Face Recognition. Hasan, Md Mahedi; Sami, Shoaib Meraj; Nasrabadi, Nasser M.; Dawson, Jeremy. IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2025, 7 (02): 195-209
  • [6] MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning. Li, Zhe; Yang, Laurence T.; Ren, Bocheng; Nie, Xin; Gao, Zhangyang; Tan, Cheng; Li, Stan Z. 2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024: 11704-11714
  • [7] Enhancing zero-shot object detection with external knowledge-guided robust contrast learning. Duan, Lijuan; Liu, Guangyuan; En, Qing; Liu, Zhaoying; Gong, Zhi; Ma, Bian. PATTERN RECOGNITION LETTERS, 2024, 185: 152-159
  • [8] Robust Inverse Framework using Knowledge-guided Self-Supervised Learning: An application to Hydrology. Ghosh, Rahul; Renganathan, Arvind; Tayal, Kshitij; Li, Xiang; Khandelwal, Ankush; Jia, Xiaowei; Duffy, Christopher; Nieber, John; Kumar, Vipin. PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022: 465-474
  • [9] A semi-automatic cardiovascular annotation and quantification toolbox utilizing prior knowledge-guided feature learning. Zhang, Wenzhen; Cao, Yankun; Hu, Xifeng; Mi, Jia; Zhang, Pengfei; Sun, Guanjie; Mukhopadhyay, Subhas Chandra; Li, Yujun; Liu, Zhi. BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2025, 102
  • [10] KG-Unet: a knowledge-guided deep learning approach for seismic facies segmentation. Zhang, Xiang-Ye; Wang, Wan-Li; Hu, Guang-Min; Yao, Xing-Miao. EARTH SCIENCE INFORMATICS, 2024, 17 (03): 1967-1981