A Low-Light Object Detection Method Based on SAM-MSFF Network

Cited by: 0
Authors
Jiang Z.-T. [1]
Li H. [1]
Lei X.-C. [1]
Zhu L.-H. [2]
Shi D.-Q. [1]
Zhai F.-S. [1]
Affiliations
[1] Key Laboratory of Image and Graphic Intelligent Processing in Guangxi, Guilin University of Electronic Technology, Guilin, Guangxi
[2] Nanchang Hangkong University, Nanchang, Jiangxi
Keywords
low-light images; multi-scale feature fusion; multiple receptive field enhancement module; object detection; spatial-aware attention mechanism
DOI
10.12263/DZXB.20220666
Abstract
Existing object detection methods perform poorly on low-light images because of their intrinsic properties such as low contrast, loss of detail, and high noise. To address this problem, a low-light object detection method that combines a spatial-aware attention mechanism with multi-scale feature fusion (SAM-MSFF) is proposed. First, multi-scale features are fused through a multi-scale interactive memory pyramid to enhance the effective information under low-illumination conditions, and memory vectors are set to store sample features and capture the latent correlations between samples. Then, a spatial-aware attention mechanism is introduced to obtain both long-range context and local information of the features in the spatial domain, thereby enhancing object features in low-light images while suppressing interference from background information and noise. Finally, a multiple receptive field enhancement module is used to expand the receptive field of the features; the features with different receptive fields are grouped and re-weighted so that the detection network can adaptively adjust its receptive field size according to the multi-scale input information (see the sketch below). Experimental results on the ExDark dataset show that the mAP (mean Average Precision) of the proposed method reaches 77.04%, which is 2.6%~14.34% higher than that of existing mainstream object detection methods. © 2024 Chinese Institute of Electronics. All rights reserved.
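The "multiple receptive field enhancement" step can be pictured with a short sketch. The PyTorch module below is not the paper's implementation, only an illustrative block in the spirit of selective-kernel attention: parallel dilated 3x3 convolutions produce features with different receptive fields, and a lightweight gate re-weights the branches per channel so the effective receptive field adapts to the input. The class name, branch count, dilation rates, and reduction ratio are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class MultiReceptiveFieldBlock(nn.Module):
    """Hypothetical sketch: parallel dilated branches, adaptively re-weighted."""

    def __init__(self, channels: int, dilations=(1, 2, 4), reduction: int = 4):
        super().__init__()
        # One 3x3 branch per dilation rate, each with a different receptive field.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # Lightweight gate: predicts one weight per branch and channel from
        # globally pooled statistics of the summed branch features.
        hidden = max(channels // reduction, 8)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * len(dilations), 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stack branch outputs: (B, K, C, H, W), K = number of branches.
        feats = torch.stack([branch(x) for branch in self.branches], dim=1)
        b, k, c, h, w = feats.shape
        # Softmax over the branch axis so the block selects its effective
        # receptive field per channel, conditioned on the input.
        weights = self.gate(feats.sum(dim=1)).view(b, k, c, 1, 1).softmax(dim=1)
        return (weights * feats).sum(dim=1) + x  # residual connection


if __name__ == "__main__":
    block = MultiReceptiveFieldBlock(64)
    y = block(torch.randn(2, 64, 32, 32))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

The residual connection keeps the block safe to drop into an existing detection backbone; when the gate learns nothing useful, the block degrades toward an identity mapping rather than harming the baseline features.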
Pages: 81-93
Page count: 12