Strawberry Defect Identification Using Deep Learning Infrared-Visible Image Fusion

Cited by: 5
Authors
Lu, Yuze [1 ]
Gong, Mali [1 ]
Li, Jing [2 ]
Ma, Jianshe [3 ]
Affiliations
[1] Tsinghua Univ, Key Lab Photon Control Technol, Minist Educ, Beijing 100083, Peoples R China
[2] Yunnan Agr Univ, Int Joint Res Ctr Smart Agr & Water Secur Yunnan P, Kunming 650201, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Div Adv Mfg, Shenzhen 518055, Peoples R China
Source
AGRONOMY-BASEL | 2023, Vol. 13, Issue 09
Keywords
fruit feature detection; image fusion; VGG-19; infrared image; RGB image; RIPENESS; APPLES; PERFORMANCE; NETWORK; BRUISES; DAMAGE; COLOR; TIME;
DOI
10.3390/agronomy13092217
Chinese Library Classification (CLC)
S3 [Agronomy];
Discipline Code
0901
Abstract
Feature detection of multi-type strawberry defects and ripeness stages poses major challenges because of color diversity and visual similarity. Images from hyperspectral near-infrared (NIR) information sources are also limited by their low spatial resolution. In this study, a fusion method combining accurate RGB images (with a spatial resolution of 2048x1536 pixels) and NIR images (spanning 700-1100 nm in wavelength, covering 146 bands, with a spatial resolution of 696x700 pixels) was proposed to improve the detection of defects and features in strawberries. The fusion method was based on a pretrained VGG-19 model. The high-frequency parts of the original RGB and NIR image pairs were filtered out and fed into the pretrained VGG-19 simultaneously. High-frequency features were extracted from the ReLU layers; the l1-norm was used to fuse multiple feature maps into one feature map, and area pixel averaging was introduced to avoid the effect of extreme pixels. At the end, the high- and low-frequency parts of the RGB and NIR images were summed into one image according to their information weights. In the validation section, the detection dataset included 4000 augmented RGB images and 4000 NIR images (with a training-to-testing set ratio of 4:1) from 240 strawberry samples labeled as mud contaminated, bruised, both defects, defect-free, ripe, half-ripe, and unripe. The detection neural network YOLOv3-tiny operated on RGB-only, NIR-only, and fused image input modes, and the proposed method achieved the highest mean average precision of 87.18%. Finally, the effects of different RGB and NIR weights on the detection results were also studied. This research demonstrates that the proposed fusion method can greatly improve defect and feature detection for strawberry samples.
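The fusion rule summarized in the abstract (l1-norm activity maps computed from deep feature maps, area pixel averaging to suppress extreme pixels, then a weighted recombination of high- and low-frequency parts) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature maps are passed in as plain arrays (in the paper they come from ReLU layers of a pretrained VGG-19), the RGB input is assumed already converted to a single-channel image, and the box-filter size, averaging window, and equal low-frequency weight are hypothetical choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def fuse_pair(rgb_gray, nir, feats_rgb, feats_nir, win=3, alpha=0.5):
    """Fuse one registered RGB (grayscale) / NIR image pair.

    feats_rgb, feats_nir: (C, H, W) feature maps for each source
    (in the paper, taken from a pretrained VGG-19; here, supplied by
    the caller).  `win` and `alpha` are assumed hyperparameters.
    """
    # 1) Split each source into low- and high-frequency parts
    #    (box filter as a simple stand-in for the paper's filtering step).
    low_r, low_n = uniform_filter(rgb_gray, 15), uniform_filter(nir, 15)
    high_r, high_n = rgb_gray - low_r, nir - low_n

    # 2) l1-norm across feature channels -> one activity map per source.
    act_r = np.abs(feats_rgb).sum(axis=0)
    act_n = np.abs(feats_nir).sum(axis=0)

    # 3) Area pixel averaging to avoid the effect of extreme pixels.
    act_r = uniform_filter(act_r, win)
    act_n = uniform_filter(act_n, win)

    # Upsample the activity maps if the feature extractor downsampled.
    if act_r.shape != rgb_gray.shape:
        act_r = zoom(act_r, np.array(rgb_gray.shape) / np.array(act_r.shape), order=1)
        act_n = zoom(act_n, np.array(nir.shape) / np.array(act_n.shape), order=1)

    # 4) Per-pixel information weights from the activity maps.
    w_r = act_r / (act_r + act_n + 1e-12)

    # 5) Weighted sums of the high- and low-frequency parts.
    fused_high = w_r * high_r + (1.0 - w_r) * high_n
    fused_low = alpha * low_r + (1.0 - alpha) * low_n
    return fused_low + fused_high
```

Note the sanity property of this rule: when both sources and both feature stacks are identical, the weights are uniform and the fused image reproduces the input exactly, so the fusion step is lossless for agreeing sources.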
Pages: 19
Related Papers
50 records in total
  • [1] A Contrastive Learning Approach for Infrared-Visible Image Fusion
    Gupta, Ashish Kumar
    Barnwal, Meghna
    Mishra, Deepak
    PATTERN RECOGNITION AND MACHINE INTELLIGENCE, PREMI 2023, 2023, 14301 : 199 - 208
  • [2] Infrared-visible Image Fusion Using Accelerated Convergent Convolutional Dictionary Learning
    Zhang, Chengfang
    Feng, Ziliang
    ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2022, 47 (08) : 10295 - 10306
  • [4] Visible and Infrared Image Fusion Using Deep Learning
    Zhang, Xingchen
    Demiris, Yiannis
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (08) : 10535 - 10554
  • [5] An automatic building façade deterioration detection system using infrared-visible image fusion and deep learning
    Wang, Pujin
    Xiao, Jianzhuang
    Qiang, Xingxing
    Xiao, Rongwei
    Liu, Yi
    Sun, Chang
    Hu, Jianhui
    Liu, Shijie
    JOURNAL OF BUILDING ENGINEERING, 2024, 95
  • [6] Infrared and Visible Image Fusion using a Deep Learning Framework
    Li, Hui
    Wu, Xiao-Jun
    Kittler, Josef
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2705 - 2710
  • [7] PFCFuse: A Poolformer and CNN Fusion Network for Infrared-Visible Image Fusion
    Hu, Xinyu
    Liu, Yang
    Yang, Feng
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2024, 73
  • [8] Infrared and Visible Image Fusion Based on NSCT and Deep Learning
    Feng, Xin
    JOURNAL OF INFORMATION PROCESSING SYSTEMS, 2018, 14 (06): : 1405 - 1419
  • [9] Infrared-Visible Image Fusion Based on Convolutional Neural Networks (CNN)
    Ren, Xianyi
    Meng, Fanyang
    Hu, Tao
    Liu, Zhijun
    Wang, Changwei
    INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING, 2018, 11266 : 301 - 307
  • [10] Infrared-Visible Image Fusion Based on Semantic Guidance and Visual Perception
    Chen, Xiaoyu
    Teng, Zhijie
    Liu, Yingqi
    Lu, Jun
    Bai, Lianfa
    Han, Jing
    ENTROPY, 2022, 24 (10)