Pay "Attention" to Adverse Weather: Weather-aware Attention-based Object Detection

Cited by: 13
Authors
Chaturvedi, Saket S. [1 ]
Zhang, Lan [2 ]
Yuan, Xiaoyong [1 ]
Affiliations
[1] Michigan Technol Univ, Coll Comp, Houghton, MI 49931 USA
[2] Michigan Technol Univ, Dept Elect & Comp Engn, Houghton, MI 49931 USA
Funding
U.S. National Science Foundation (NSF)
Keywords
Object Detection; Adverse Weather; Multimodal Fusion; Attention Neural Network;
DOI
10.1109/ICPR56361.2022.9956149
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Despite recent advances in deep neural networks, object detection in adverse weather remains challenging because some sensors perceive poorly under such conditions. Instead of relying on a single sensor, multimodal fusion is a promising approach that provides redundant detection information from multiple sensors. However, most existing multimodal fusion approaches are ineffective at adjusting the focus among sensors as the detection environment varies under dynamic adverse weather. Moreover, it is critical to observe local and global information simultaneously under complex weather conditions, which most early- or late-stage multimodal fusion works have neglected. In view of these challenges, this paper proposes a Global-Local Attention (GLA) framework that adaptively fuses multimodal sensing streams, i.e., camera, gated, and lidar data, at two fusion stages. Specifically, GLA integrates early-stage fusion via a local attention network and late-stage fusion via a global attention network to handle both local and global information; at the late stage, it automatically allocates higher weights to the modality with better detection features to cope with the specific weather condition adaptively. Experimental results demonstrate the superior performance of the proposed GLA compared with state-of-the-art fusion approaches under various adverse weather conditions, such as light fog, dense fog, and snow.
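The two-stage fusion idea described in the abstract can be illustrated with a minimal sketch: an early-stage local gate applied per location within each modality, followed by a late-stage global softmax that weights whole modalities. All function names, shapes, and weight initializations here are illustrative assumptions, not the paper's actual GLA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def local_gate(feat, w):
    # early-stage local attention (sketch): an element-wise sigmoid gate
    # computed from the modality's own features at each location
    gate = 1.0 / (1.0 + np.exp(-(feat @ w)))      # (locations,)
    return feat * gate[:, None]

def global_fuse(feats, score_w):
    # late-stage global attention (sketch): one scalar score per modality,
    # softmax-normalized, so the modality with stronger detection features
    # receives a higher fusion weight under the current weather condition
    scores = np.array([f.mean(axis=0) @ w for f, w in zip(feats, score_w)])
    weights = softmax(scores)                     # sums to 1 over modalities
    fused = sum(w * f for w, f in zip(weights, feats))
    return fused, weights

# three sensing streams (camera, gated, lidar) as (locations, channels) maps
feats = [rng.normal(size=(16, 8)) for _ in range(3)]
gated = [local_gate(f, rng.normal(size=8)) for f in feats]
fused, weights = global_fuse(gated, [rng.normal(size=8) for _ in range(3)])
print(fused.shape, weights.round(3))
```

In a trained network the gate and score weights would be learned, so a modality degraded by fog or snow would naturally receive a lower global weight; here they are random placeholders to keep the sketch self-contained.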
Pages: 4573-4579
Number of pages: 7
Related Papers
50 records in total
  • [1] Pothole detection in adverse weather: leveraging synthetic images and attention-based object detection methods
    Jakubec M.
    Lieskovska E.
    Bucko B.
    Zabovska K.
    Multimedia Tools and Applications, 2024, 83 (39): 86955-86982
  • [2] An Effective Attention-based CNN Model for Fire Detection in Adverse Weather Conditions
    Yar, Hikmat
    Ullah, Waseem
    Khan, Zulfiqar Ahmad
    Baik, Sung Wook
    ISPRS JOURNAL OF PHOTOGRAMMETRY AND REMOTE SENSING, 2023, 206: 335-346
  • [3] Who Cares about the Weather? Inferring Weather Conditions for Weather-Aware Object Detection in Thermal Images
    Johansen, Anders Skaarup
    Nasrollahi, Kamal
    Escalera, Sergio
    Moeslund, Thomas B.
    APPLIED SCIENCES-BASEL, 2023, 13 (18):
  • [4] Weather-aware object detection method for maritime surveillance systems
    Chen, Mingkang
    Sun, Jingtao
    Aida, Kento
    Takefusa, Atsuko
    FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 151: 111-123
  • [5] MFA-YOLO: Multi-Scale Fusion and Attention-Based Object Detection for Autonomous Driving in Extreme Weather
    Li, Zhongyang
    Fan, Haiju
    ELECTRONICS, 2025, 14 (05):
  • [6] Fog-Aware Adaptive YOLO for Object Detection in Adverse Weather
    Abbasi, Hasan
    Amini, Marzieh
    Yu, F. Richard
    2023 IEEE SENSORS APPLICATIONS SYMPOSIUM, SAS, 2023,
  • [7] Progressive Domain Adaptive Object Detection Based on Self-Attention in Foggy Weather
    Lin, Meng
    Zhou, Gang
    Yang, Yawei
    Shi, Jun
    IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, 2023, 18 (12): 1923-1931
  • [8] DAFA: Diversity-Aware Feature Aggregation for Attention-Based Video Object Detection
    Roh, Si-Dong
    Chung, Ki-Seok
    IEEE ACCESS, 2022, 10: 93453-93463
  • [9] Weather to pay attention to energy efficiency on the housing market
    Fang, Ximeng
    Singhal, Puja
    ECONOMICS LETTERS, 2024, 245
  • [10] Attention-based fusion factor in FPN for object detection
    Li, Yuancheng
    Zhou, Shenglong
    Chen, Hui
    APPLIED INTELLIGENCE, 2022, 52 (13): 15547-15556