Strawberry Defect Identification Using Deep Learning Infrared-Visible Image Fusion

Cited by: 5
Authors
Lu, Yuze [1 ]
Gong, Mali [1 ]
Li, Jing [2 ]
Ma, Jianshe [3 ]
Affiliations
[1] Tsinghua Univ, Key Lab Photon Control Technol, Minist Educ, Beijing 100083, Peoples R China
[2] Yunnan Agr Univ, Int Joint Res Ctr Smart Agr & Water Secur Yunnan P, Kunming 650201, Peoples R China
[3] Tsinghua Univ, Tsinghua Shenzhen Int Grad Sch, Div Adv Mfg, Shenzhen 518055, Peoples R China
Source
AGRONOMY-BASEL, 2023, Vol. 13, Iss. 9
Keywords
fruit feature detection; image fusion; VGG-19; infrared image; RGB image; RIPENESS; APPLES; PERFORMANCE; NETWORK; BRUISES; DAMAGE; COLOR; TIME;
DOI
10.3390/agronomy13092217
Chinese Library Classification
S3 [Agronomy (Agricultural Science)];
Discipline Code
0901
Abstract
Feature detection of multi-type strawberry defects and ripeness stages faces major challenges because of color diversity and visual similarity. Hyperspectral near-infrared (NIR) image sources are also limited by their low spatial resolution. In this study, a fusion method combining RGB images (spatial resolution of 2048x1536 pixels) and NIR images (700-1100 nm wavelength range, covering 146 bands, with a spatial resolution of 696x700 pixels) was proposed to improve the detection of defects and features in strawberries. The fusion method was based on a pretrained VGG-19 model. The high-frequency parts of the original RGB and NIR image pairs were filtered out and fed into the pretrained VGG-19 simultaneously. High-frequency features were extracted from the ReLU layers; the l1-norm was used to fuse multiple feature maps into one feature map, and area pixel averaging was introduced to avoid the effect of extreme pixels. Finally, the high- and low-frequency parts of the RGB and NIR images were summed into one image according to their information weights. For validation, the detection dataset comprised an expanded set of 4000 RGB images and 4000 NIR images (training-to-testing ratio of 4:1) from 240 strawberry samples labeled as mud-contaminated, bruised, both defects, defect-free, ripe, half-ripe, and unripe. The detection neural network YOLOv3-tiny was run on RGB-only, NIR-only, and fused image inputs, with the proposed fusion method achieving the highest mean average precision of 87.18%. The effects of different RGB and NIR weights on the detection results were also studied. This research demonstrates that the proposed fusion method can greatly improve defect and feature detection for strawberry samples.
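The frequency decomposition, l1-norm activity maps, and area pixel averaging described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration with hypothetical array shapes and a simple box filter for the low-frequency split; the paper's actual VGG-19 feature extraction, filter choices, and weighting scheme may differ.

```python
import numpy as np

def decompose(img, k=15):
    """Split a 2-D image into low- and high-frequency parts.

    A box filter of size k x k gives the low-frequency part; the
    high-frequency part is the residual (img = low + high).
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low

def l1_activity(features, r=1):
    """Collapse (C, H, W) feature maps to one activity map.

    The l1-norm across channels fuses the feature maps into one map,
    then averaging over a (2r+1)^2 window softens extreme pixels, as
    the abstract's "area pixel averaging" step does.
    """
    act = np.abs(features).sum(axis=0)
    padded = np.pad(act, r, mode="edge")
    k = 2 * r + 1
    out = np.zeros_like(act)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + act.shape[0], dx:dx + act.shape[1]]
    return out / (k * k)

def fuse_high_freq(rgb_hf, nir_hf, feat_rgb, feat_nir):
    """Fuse the two high-frequency parts by per-pixel activity weights."""
    a_rgb = l1_activity(feat_rgb)
    a_nir = l1_activity(feat_nir)
    w_rgb = a_rgb / (a_rgb + a_nir + 1e-12)  # weights in [0, 1]
    return w_rgb * rgb_hf + (1.0 - w_rgb) * nir_hf
```

Because the weights lie in [0, 1] and sum to one per pixel, the fused high-frequency image is a convex combination of the two inputs; the activity maps here would come from VGG-19 ReLU features in the paper's pipeline, upsampled to the image resolution.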
Pages: 19
Related Papers
(50 records)
  • [41] A deep learning based relative clarity classification method for infrared and visible image fusion
    Abera, Deboch Eyob
    Qi, Jin
    Cheng, Jian
    INFRARED PHYSICS & TECHNOLOGY, 2024, 140
  • [42] Infrared and Visible Image Fusion: Statistical Analysis, Deep Learning Approaches and Future Prospects
    Wu Yifei
    Yang Rui
    Lu Qishen
    Tang Yuting
    Zhang Chengmin
    Liu Shuaihui
    LASER & OPTOELECTRONICS PROGRESS, 2024, 61 (14)
  • [43] A perceptual framework for infrared-visible image fusion based on multiscale structure decomposition and biological vision
    Zhou, Zhiqiang
    Fei, Erfang
    Miao, Lingjuan
    Yang, Rao
    INFORMATION FUSION, 2023, 93 : 174 - 191
  • [44] Infrared-Visible Heterogenous Image Matching Based on Intra-Class Transfer Learning
    Mao Y.
    He Z.
    Ma Z.
    Bi R.
    Wang Z.
    Hsi-An Chiao Tung Ta Hsueh/Journal of Xi'an Jiaotong University, 2020, 54 (01): : 49 - 55
  • [45] Infrared and visible image fusion based on deep Boltzmann model
    Feng Xin
    Li Chuan
    Hu Kai-Qun
    ACTA PHYSICA SINICA, 2014, 63 (18)
  • [46] Physics driven deep Retinex fusion for adaptive infrared and visible image fusion
    Gu, Yuanjie
    Xiao, Zhibo
    Guan, Yinghan
    Dai, Haoran
    Liu, Cheng
    Xue, Liang
    Wang, Shouyu
    OPTICAL ENGINEERING, 2023, 62 (08) : 83101
  • [47] Infrared-Visible Light Image Fusion Method Based on Weighted Salience Detection and Visual Information Preservation
    Liu, Yibo
    Ke, Ting
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT VI, ICIC 2024, 2024, 14867 : 159 - 168
  • [48] Early Forest Fire Detection With UAV Image Fusion: A Novel Deep Learning Method Using Visible and Infrared Sensors
    Niu, Kunlong
    Wang, Chongyang
    Xu, Jianhui
    Liang, Jianrong
    Zhou, Xia
    Wen, Kaixiang
    Lu, Minjian
    Yang, Chuanxun
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2025, 18 : 6617 - 6629
  • [49] CMFA_Net: A cross-modal feature aggregation network for infrared-visible image fusion
    Ding, Zhaisheng
    Li, Haiyan
    Zhou, Dongming
    Li, Hongsong
    Liu, Yanyu
    Hou, Ruichao
    INFRARED PHYSICS & TECHNOLOGY, 2021, 118
  • [50] Infrared-Visible Image Fusion Using Dual-Branch Auto-Encoder With Invertible High-Frequency Encoding
    Liu, Honglin
    Mao, Qirong
    Dong, Ming
    Zhan, Yongzhao
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2675 - 2688