Deep-learning-ready RGB-depth images of seedling development

Times Cited: 0
Authors
Mercier, Felix [1 ]
Couasnet, Geoffroy [1 ]
El Ghaziri, Angelina [2 ,3 ]
Bouhlel, Nizar [2 ,3 ]
Sarniguet, Alain [1 ,3 ,4 ]
Marchi, Muriel [1 ,3 ,4 ]
Barret, Matthieu [1 ,3 ,4 ]
Rousseau, David [1 ,4 ]
Affiliations
[1] Univ Angers, 40 Rue Rennes, F-49000 Angers, France
[2] Inst Agro, 2 Rue Andre Notre, F-49000 Angers, France
[3] Inst Rech Hort & Semences IRHS, UMR1345, F-49071 Beaucouze, France
[4] INRAE, 42 Rue Georges Morel, F-49071 Beaucouze, France
Keywords
RGB-depth; Seedling kinetics; Deep learning; Data set
DOI
10.1186/s13007-025-01334-3
CLC Number
Q5 [Biochemistry]
Subject Classification Codes
071010; 081704
Abstract
In the era of machine-learning-driven plant imaging, the production of annotated datasets is an important contribution. In this data paper, a unique annotated dataset of seedling emergence kinetics is presented. It comprises almost 70,000 RGB-depth frames and more than 700,000 plant annotations. The dataset is shown to be valuable for training deep learning models and for high-throughput phenotyping by imaging. The ability of models trained on the delivered dataset to generalize to several species and to outperform the state of the art is demonstrated. We also discuss how this dataset raises new questions in plant phenotyping.
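As an illustration only, the following minimal Python sketch shows how paired RGB and depth frames with per-frame plant annotations could be loaded for model training. The record above does not specify the dataset's file organisation, so the folder names, file formats, and annotation schema used here are assumptions, not the published layout.

    # Minimal sketch of loading one RGB-depth frame and its plant annotations.
    # Directory layout, file names, and annotation schema are assumptions for
    # illustration; the actual dataset organisation is described in the paper.
    import json
    from pathlib import Path

    import numpy as np
    from PIL import Image

    DATASET_ROOT = Path("seedling_dataset")  # hypothetical root folder

    def load_frame(frame_id: str):
        """Return (rgb, depth, annotations) for one frame of the time series."""
        rgb = np.asarray(Image.open(DATASET_ROOT / "rgb" / f"{frame_id}.png"))
        # Depth is assumed here to be a single-channel 16-bit image.
        depth = np.asarray(Image.open(DATASET_ROOT / "depth" / f"{frame_id}.png"))
        # Annotations are assumed to be per-frame JSON, one bounding box per plant.
        with open(DATASET_ROOT / "annotations" / f"{frame_id}.json") as f:
            annotations = json.load(f)
        return rgb, depth, annotations

    if __name__ == "__main__":
        rgb, depth, annotations = load_frame("plate01_t0000")
        print(rgb.shape, depth.shape, len(annotations.get("plants", [])))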
Pages: 14
Related Papers (50 records)
  • [1] Enhancing the Tracking of Seedling Growth Using RGB-Depth Fusion and Deep Learning
    Garbouge, Hadhami
    Rasti, Pejman
    Rousseau, David
    SENSORS, 2021, 21 (24)
  • [2] Precision mapping through an RGB-Depth camera and deep learning
    Petrakis, Georgios
    Partsinevelos, Panagiotis
    25TH AGILE CONFERENCE ON GEOGRAPHIC INFORMATION SCIENCE: ARTIFICIAL INTELLIGENCE IN THE SERVICE OF GEOSPATIAL TECHNOLOGIES, 2022, 3
  • [3] Study on Stairs Detection using RGB-Depth Images
    Murakami, Soichiro
    Shimakawa, Manabu
    Kiyota, Kimiyasu
    Kato, Takashi
    2014 JOINT 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (SCIS) AND 15TH INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (ISIS), 2014, : 1186 - 1191
  • [4] Development of integral photography image with RGB-Depth camera
    Yano, Sumio
    Lee, Hyoung
    Park, Min-Chul
    Son, Jung Young
    FOURTEENTH INTERNATIONAL CONFERENCE ON CORRELATION OPTICS, 2020, 11369
  • [5] Facial Expression Recognition via Joint Deep Learning of RGB-Depth Map Latent Representations
    Oyedotun, Oyebade K.
    Demisse, Girum
    Shabayek, Abd El Rahman
    Aouada, Djamila
    Ottersten, Bjorn
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017), 2017, : 3161 - 3168
  • [6] Segmentation of body parts of cows in RGB-depth images based on template matching
    Jia, Nan
    Kootstra, Gert
    Koerkamp, Peter Groot
    Shi, Zhengxiang
    Du, Songhuai
    COMPUTERS AND ELECTRONICS IN AGRICULTURE, 2021, 180
  • [7] Cardboard packaging minimization: Autonomous box selection by RGB-depth vision system and complementary deep learning classification
    Kwon, Jimmy
    Kwon, Sungmin
    INSTRUMENTATION SCIENCE & TECHNOLOGY, 2024,
  • [8] A Deep Multi-Modal Learning Method and a New RGB-Depth Data Set for Building Roof Extraction
    Khoshboresh-Masouleh, Mehdi
    Shah-Hosseini, Reza
    PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING, 2021, 87 (10): : 759 - 766
  • [9] Combining RGB and Depth images for Indoor Scene Classification using Deep Learning
    Pujar, Karthik
    Chickerur, Satyadhyan
    Patil, Mahesh S.
    2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND COMPUTING RESEARCH (ICCIC), 2017, : 466 - 473