Strawberry Defect Identification Using Deep Learning Infrared-Visible Image Fusion

Times Cited: 5
Authors
Lu, Yuze [1 ]
Gong, Mali [1 ]
Li, Jing [2 ]
Ma, Jianshe [3 ]
Affiliations
[1] Tsinghua Univ, Key Lab Photon Control Technol, Minist Educ, Beijing 100083, Peoples R China
[2] Yunnan Agr Univ, Int Joint Res Ctr Smart Agr & Water Secur Yunnan P, Kunming 650201, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Div Adv Mfg, Shenzhen 518055, Peoples R China
Source
AGRONOMY-BASEL | 2023, Vol. 13, Issue 09
Keywords
fruit feature detection; image fusion; VGG-19; infrared image; RGB image; RIPENESS; APPLES; PERFORMANCE; NETWORK; BRUISES; DAMAGE; COLOR; TIME;
DOI
10.3390/agronomy13092217
CLC (Chinese Library Classification) Number
S3 [Agronomy (crop science)];
Discipline Classification Code
0901
Abstract
Detecting multiple defect types and ripeness stages in strawberries is highly challenging because of color diversity and visual similarity. Images from hyperspectral near-infrared (NIR) information sources are also limited by their low spatial resolution. In this study, a fusion method combining high-resolution RGB images (spatial resolution of 2048x1536 pixels) with NIR images (700-1100 nm wavelength range, 146 bands, spatial resolution of 696x700 pixels) was proposed to improve the detection of defects and features in strawberries. The fusion method was based on a pretrained VGG-19 model. The high-frequency parts of the original RGB and NIR image pairs were filtered out and fed into the pretrained VGG-19 simultaneously. High-frequency features were extracted at the ReLU layers; the l1-norm was used to fuse the multiple feature maps into a single feature map, and area pixel averaging was introduced to suppress the effect of extreme pixels. Finally, the high- and low-frequency parts of the RGB and NIR images were summed into one image according to their information weights. In the validation section, the detection dataset comprised 4000 RGB images and 4000 NIR images after augmentation (training-to-testing set ratio of 4:1) from 240 strawberry samples labeled as mud-contaminated, bruised, both defects, defect-free, ripe, half-ripe, and unripe. The detection neural network YOLOv3-tiny was run on RGB-only, NIR-only, and fused image inputs; the proposed method achieved the highest mean average precision, 87.18%. Finally, the effects of different RGB and NIR weights on the detection results were also studied. This research demonstrates that the proposed fusion method can greatly improve defect and feature detection for strawberry samples.
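The l1-norm fusion rule with area pixel averaging described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names are invented for this example, and the normalized-weight combination of the two high-frequency images is an assumption about how the activity maps are turned into fusion weights.

```python
import numpy as np

def l1_activity_map(feature_maps, window=3):
    """l1-norm across channels gives a per-pixel activity map; a box
    (area) average over a small window smooths out extreme pixels."""
    act = np.abs(feature_maps).sum(axis=0)  # (C, H, W) -> (H, W)
    pad = window // 2
    padded = np.pad(act, pad, mode="edge")
    out = np.zeros_like(act)
    h, w = act.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + window, j:j + window].mean()
    return out

def fuse_high_freq(hf_rgb, hf_nir, feats_rgb, feats_nir, window=3):
    """Weight each high-frequency detail image by its smoothed activity
    map, normalized so the two weights sum to one at every pixel."""
    a = l1_activity_map(feats_rgb, window)
    b = l1_activity_map(feats_nir, window)
    w_rgb = a / (a + b + 1e-12)  # epsilon avoids division by zero
    return w_rgb * hf_rgb + (1.0 - w_rgb) * hf_nir
```

Under this rule, wherever the NIR feature maps are more active than the RGB ones, the fused high-frequency image leans toward the NIR detail, and vice versa; the area averaging keeps a single extreme activation from dominating its neighborhood.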
Pages: 19
Related Papers
50 records in total
  • [21] Infrared-Visible Synthetic Data from Game Engine for Image Fusion Improvement
    Gu, Xinjie
    Liu, Gang
    Zhang, Xiangbo
    Tang, Lili
    Zhou, Xihong
    Qiu, Weifang
    IEEE TRANSACTIONS ON GAMES, 2024, 16 (02) : 291 - 302
  • [22] FIRe-GAN: a novel deep learning-based infrared-visible fusion method for wildfire imagery
    Ciprian-Sanchez, J. F.
    Ochoa-Ruiz, G.
    Gonzalez-Mendoza, M.
    Rossi, L.
    NEURAL COMPUTING & APPLICATIONS, 2023, 35 (25): : 18201 - 18213
  • [23] Visible and infrared image fusion using NSST and deep Boltzmann machine
    Wu, Wei
    Qiu, Zongming
    Zhao, Min
    Huang, Qiuhong
    Lei, Yang
    OPTIK, 2018, 157 : 334 - 342
  • [25] Explainable analysis of infrared and visible light image fusion based on deep learning
    Yuan, Bo
    Sun, Hongyu
    Guo, Yinjing
    Liu, Qiang
    Zhan, Xinghao
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [26] Infrared and Visible Image Fusion: A Region-Based Deep Learning Method
    Xie, Chunyu
    Li, Xinde
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2019, PT V, 2019, 11744 : 604 - 615
  • [28] A Deep Learning Framework for Infrared and Visible Image Fusion Without Strict Registration
    Li, Huafeng
    Liu, Junyu
    Zhang, Yafei
    Liu, Yu
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (05) : 1625 - 1644
  • [29] Denoiser Learning for Infrared and Visible Image Fusion
    Liu, Jinyang
    Li, Shutao
    Tan, Lishan
    Dian, Renwei
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024,
  • [30] An interactively reinforced paradigm for joint infrared-visible image fusion and saliency object detection
    Wang, Di
    Liu, Jinyuan
    Liu, Risheng
    Fan, Xin
    INFORMATION FUSION, 2023, 98