Improved YOLOv5 infrared tank target detection method under ground background

Cited by: 7
Authors
Liang, Chao [1 ,2 ]
Yan, Zhengang [2 ]
Ren, Meng [2 ]
Wu, Jiangpeng [2 ]
Tian, Liping [2 ]
Guo, Xuan [2 ]
Li, Jie [2 ]
Affiliations
[1] Xidian Univ, Sch Artificial Intelligence, Xian 710071, Peoples R China
[2] Xian Modern Control Technol Res Inst, Xian 710065, Peoples R China
Keywords
OBJECT RECOGNITION;
DOI
10.1038/s41598-023-33552-x
Chinese Library Classification (CLC)
O [Mathematical sciences and chemistry]; P [Astronomy and earth sciences]; Q [Biological sciences]; N [General natural sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
The detection precision of an infrared seeker directly affects the guidance precision of an infrared guidance system. When an infrared imaging seeker detects ground tank targets, detection accuracy suffers from changes in imaging scale, complex ground backgrounds, and inconspicuous infrared target characteristics. To address this, this paper proposes a You Only Look Once, Transform Head Squeeze-and-Excitation (YOLOv5s-THSE) model based on the YOLOv5s model. A multi-head attention mechanism is added to the backbone and neck of the network to extract deeper target features. A Cross Stage Partial, Squeeze-and-Excitation module is added to the neck of the network to suppress the complex background and make the model pay more attention to the target. A small-object detection head is introduced into the head of the network, and the CIoU loss function is used in the model to improve the detection accuracy of small objects and obtain more stable training regression. Through these improvements, the background of the infrared target is suppressed, and the detection ability for infrared tank targets is improved. Experiments on infrared tank target datasets show that the proposed model effectively improves the detection performance of infrared tank targets under ground background compared with existing methods, such as YOLOv5s, YOLOv5s + SE, and YOLOv5s + Convolutional Block Attention Module.
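The CIoU loss cited in the abstract extends plain IoU with two penalties: one for the distance between box centres (normalised by the enclosing box's diagonal) and one for aspect-ratio mismatch. A minimal sketch of the CIoU metric in plain Python, not the paper's implementation (the `ciou` helper and the corner-coordinate box format are illustrative assumptions):

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two boxes given as (x1, y1, x2, y2).

    CIoU = IoU - rho^2 / c^2 - alpha * v, where rho is the centre
    distance, c the diagonal of the smallest enclosing box, and v an
    aspect-ratio consistency term. Illustrative sketch only.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union areas.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Squared distance between the two box centres.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0

    # Squared diagonal of the smallest box enclosing both.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2

    # Aspect-ratio consistency term and its trade-off weight alpha.
    v = (4.0 / math.pi ** 2) * (
        math.atan((bx2 - bx1) / (by2 - by1))
        - math.atan((ax2 - ax1) / (ay2 - ay1))
    ) ** 2
    alpha = v / (1.0 - iou + v) if iou < 1.0 else 0.0

    return iou - rho2 / c2 - alpha * v

# Identical boxes score 1.0; a shifted box is penalised below its plain IoU.
print(ciou((0, 0, 2, 2), (0, 0, 2, 2)))   # 1.0
print(ciou((0, 0, 2, 2), (1, 0, 3, 2)))   # IoU is 1/3; CIoU is lower
```

Because the centre-distance penalty stays informative even when boxes do not overlap at all (plain IoU is flat at zero there), CIoU-style losses tend to give more stable regression gradients for small targets, which matches the abstract's motivation for adopting it.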
Pages: 15
Related Papers
50 records in total
  • [1] Improved YOLOv5 infrared tank target detection method under ground background
    Chao Liang
    Zhengang Yan
    Meng Ren
    Jiangpeng Wu
    Liping Tian
    Xuan Guo
    Jie Li
    Scientific Reports, 13
  • [2] A Camouflaged Target Detection Method with Improved YOLOv5 Algorithm
    Peng, Ruihui
    Lai, Jie
    Sun, Dianxing
    Li, Mang
    Yan, Ruyu
    Li, Xue
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2024, 46 (08): : 3324 - 3333
  • [3] An infrared vehicle detection method based on improved YOLOv5
    Zhang X.
    Zhao H.
    Liu W.
    Zhao Y.
    Guan S.
    Hongwai yu Jiguang Gongcheng/Infrared and Laser Engineering, 2023, 52 (08):
  • [4] Road traffic target detection method based on improved YOLOv5
    Zhou, Huichun
    Xue, Yuming
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024, : 1124 - 1128
  • [5] Small target tea bud detection based on improved YOLOv5 in complex background
    Wang, Mengjie
    Li, Yang
    Meng, Hewei
    Chen, Zhiwei
    Gui, Zhiyong
    Li, Yaping
    Dong, Chunwang
    FRONTIERS IN PLANT SCIENCE, 2024, 15
  • [6] Hand target detection based on improved YOLOv5
    Xu Z.
    Meng J.
    Fang J.
    International Journal of Wireless and Mobile Computing, 2023, 25 (04) : 353 - 361
  • [7] UAV Target Detection Algorithm with Improved Yolov5
    Chen, Fankai
    Li, Shixin
    Computer Engineering and Applications, 2023, 59 (18): : 218 - 225
  • [8] Improved YOLOv5 Method for Fall Detection
    Peng, Jun
    He, Yuanmin
    Jin, Shangzhu
    Dai, Haojun
    Peng, Fei
    Zhang, Yuhao
    2022 IEEE 17TH CONFERENCE ON INDUSTRIAL ELECTRONICS AND APPLICATIONS (ICIEA), 2022, : 504 - 509
  • [9] An improved target detection method based on YOLOv5 in natural orchard environments
    Zhang, Jiachuang
    Tian, Mimi
    Yang, Zengrong
    Li, Junhui
    Zhao, Longlian
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2024, 219
  • [10] Improved Light-Weight Target Detection Method Based on YOLOv5
    Shi, Tao
    Zhu, Wenxu
    Su, Yanjie
    IEEE ACCESS, 2023, 11 : 38604 - 38613