Strawberry Defect Identification Using Deep Learning Infrared-Visible Image Fusion

Cited by: 5
Authors
Lu, Yuze [1 ]
Gong, Mali [1 ]
Li, Jing [2 ]
Ma, Jianshe [3 ]
Affiliations
[1] Tsinghua Univ, Key Lab Photon Control Technol, Minist Educ, Beijing 100083, Peoples R China
[2] Yunnan Agr Univ, Int Joint Res Ctr Smart Agr & Water Secur Yunnan P, Kunming 650201, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Div Adv Mfg, Shenzhen 518055, Peoples R China
Source
AGRONOMY-BASEL | 2023, Vol. 13, Issue 9
Keywords
fruit feature detection; image fusion; VGG-19; infrared image; RGB image; RIPENESS; APPLES; PERFORMANCE; NETWORK; BRUISES; DAMAGE; COLOR; TIME;
DOI
10.3390/agronomy13092217
Chinese Library Classification
S3 [Agronomy];
Discipline code
0901;
Abstract
Feature detection of multiple strawberry defect types and ripeness stages faces major challenges because of color diversity and visual similarity. Images from hyperspectral near-infrared (NIR) information sources are also limited by their low spatial resolution. In this study, a fusion method for accurate RGB images (with a spatial resolution of 2048x1536 pixels) and NIR images (700-1100 nm in wavelength, covering 146 bands, with a spatial resolution of 696x700 pixels) was proposed to improve the detection of defects and features in strawberries. The fusion method was based on a pretrained VGG-19 model. The high-frequency parts of the original RGB and NIR image pairs were filtered and fed into the pretrained VGG-19 simultaneously. The high-frequency features were extracted at the ReLU layers; the l1-norm was used to fuse multiple feature maps into one feature map, and area pixel averaging was introduced to avoid the effect of extreme pixels. Finally, the high- and low-frequency parts of the RGB and NIR images were summed into one image according to their information weights. In the validation section, the detection dataset comprised an expanded set of 4000 RGB images and 4000 NIR images (with a training-to-testing ratio of 4:1) from 240 strawberry samples labeled as mud-contaminated, bruised, both defects, defect-free, ripe, half-ripe, and unripe. The detection neural network YOLOv3-tiny operated on RGB-only, NIR-only, and fused image input modes, and the proposed method achieved the highest mean average precision of 87.18%. The effects of different RGB and NIR weights on the detection results were also studied. This research demonstrated that the proposed fusion method can greatly improve defect and feature detection for strawberry samples.
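The abstract describes an l1-norm rule that collapses multi-channel VGG-19 feature maps into per-pixel fusion weights, with area pixel averaging to suppress extreme pixels. The following is a minimal illustrative sketch of that rule under our own assumptions (function and variable names are ours, not the authors'; random arrays stand in for real VGG-19 ReLU activations):

```python
import numpy as np

def l1_fuse_weights(feat_a, feat_b, window=1):
    """Derive per-pixel fusion weights from two (C, H, W) feature stacks.

    Activity level = l1-norm across channels; averaging over a
    (2*window+1)^2 neighbourhood ("area pixel averaging") damps the
    influence of extreme single pixels. Sketch only, not the paper's code.
    """
    def activity(feat):
        a = np.abs(feat).sum(axis=0)             # l1-norm over channels -> (H, W)
        padded = np.pad(a, window, mode="edge")  # replicate borders for averaging
        out = np.zeros_like(a)
        k = 2 * window + 1
        for dy in range(k):                      # box filter over the neighbourhood
            for dx in range(k):
                out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    a1, a2 = activity(feat_a), activity(feat_b)
    w1 = a1 / (a1 + a2 + 1e-12)                  # normalized weight for source A
    return w1, 1.0 - w1                          # weights sum to 1 per pixel

# Stand-in feature stacks for the RGB and NIR high-frequency branches:
rgb_feat = np.random.rand(64, 32, 32)
nir_feat = np.random.rand(64, 32, 32)
w_rgb, w_nir = l1_fuse_weights(rgb_feat, nir_feat)
```

In the pipeline the abstract outlines, these weight maps would then blend the high-frequency detail layers of the RGB and NIR inputs before being recombined with the low-frequency parts.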
Pages: 19
Related Papers
50 records in total
  • [31] DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion
    Zhao, Zixiang
    Xu, Shuang
    Zhang, Chunxia
    Liu, Junmin
    Zhang, Jiangshe
    Li, Pengfei
    PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 970 - 976
  • [32] Infrared-Visible Image Fusion through Feature-Based Decomposition and Domain Normalization
    Chen, Weiyi
    Miao, Lingjuan
    Wang, Yuhao
    Zhou, Zhiqiang
    Qiao, Yajun
    REMOTE SENSING, 2024, 16 (06)
  • [33] Weber-aware weighted mutual information evaluation for infrared-visible image fusion
    Luo, Xiaoyan
    Wang, Shining
    Yuan, Ding
    JOURNAL OF APPLIED REMOTE SENSING, 2016, 10
  • [34] A Novel Teacher-Student Framework With Degradation Model for Infrared-Visible Image Fusion
    Xue, Weimin
    Liu, Yisha
    Wang, Fei
    He, Guojian
    Zhuang, Yan
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73 : 1 - 12
  • [35] Infrared and Visible Image Fusion Method by Using Hybrid Representation Learning
    He, Guiqing
    Ji, Jiaqi
    Dong, Dandan
    Wang, Jun
    Fan, Jianping
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2019, 16 (11) : 1796 - 1800
  • [36] A DT-CWT-based infrared-visible image fusion method for smart city
    Qi G.
    Zheng M.
    Zhu Z.
    Yuan R.
INTERNATIONAL JOURNAL OF SIMULATION AND PROCESS MODELLING, 2019, 14 (06) : 559 - 570
  • [37] Heterogeneous Knowledge Distillation for Simultaneous Infrared-Visible Image Fusion and Super-Resolution
    Xiao, Wanxin
    Zhang, Yafei
    Wang, Hongbin
    Li, Fan
    Jin, Hua
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [38] Cross-similarity guided contrastive learning for infrared-visible image-to-image translation
    Yu, Pan
    Zhao, Wei
    Huang, Yan
    Wang, Guoyou
PROCEEDINGS OF SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING, 2024, 13180
  • [39] An Improved Infrared and Visible Image Fusion Using an Adaptive Contrast Enhancement Method and Deep Learning Network with Transfer Learning
    Bhutto, Jameel Ahmed
    Tian, Lianfang
    Du, Qiliang
    Sun, Zhengzheng
    Yu, Lubin
    Soomro, Toufique Ahmed
    REMOTE SENSING, 2022, 14 (04)
  • [40] Infrared and Visible Image Fusion Using a Deep Unsupervised Framework With Perceptual Loss
    Xu, Dongdong
    Wang, Yongcheng
    Zhang, Xin
    Zhang, Ning
    Yu, Sibo
    IEEE ACCESS, 2020, 8 : 206445 - 206458