On Domain-Specific Pre-Training for Effective Semantic Perception in Agricultural Robotics

Cited by: 1
|
Authors
Roggiolani, Gianmarco [1]
Magistri, Federico [1]
Guadagnino, Tiziano [1]
Weyler, Jan [1]
Grisetti, Giorgio [4]
Stachniss, Cyrill [1,2,3]
Behley, Jens [1]
Affiliations
[1] Univ Bonn, Bonn, Germany
[2] Univ Oxford, Dept Engn Sci, Oxford, England
[3] Lamarr Inst Machine Learning & Artificial Intelli, Bonn, Germany
[4] Univ Roma La Sapienza, Rome, Italy
Keywords
DOI
10.1109/ICRA48891.2023.10160624
CLC number
TP [Automation technology, computer technology]
Subject classification
0812
Abstract
Agricultural robots have the potential to enable more efficient and sustainable agricultural production of food, feed, and fiber. Perception of crops and weeds is a central component of agricultural robots that aim to monitor fields and assess the plants as well as their growth stage in an automatic manner. Semantic perception mostly relies on deep learning using supervised approaches, which require time and qualified workers to label fairly large amounts of data. In this paper, we look into the problem of reducing the amount of labels without compromising the final segmentation performance. For robots operating in the field, pre-training networks in a supervised way is already a popular method to reduce the number of required labeled images. We investigate the possibility of pre-training in a self-supervised fashion using data from the target domain. To better exploit this data, we propose a set of domain-specific augmentation strategies. We evaluate our pre-training on semantic segmentation and leaf instance segmentation, two important tasks in our domain. The experimental results suggest that pre-training with domain-specific data paired with our data augmentation strategy leads to superior performance compared to commonly used pre-trainings. Furthermore, the pre-trained networks obtain performance similar to fully supervised training while using less labeled data.
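The abstract does not detail the proposed augmentation strategies. As a rough illustration only, the sketch below shows what domain-specific augmentations for top-down field imagery could look like (90-degree rotations and flips are natural for nadir crop views, which have no canonical orientation, and brightness jitter mimics varying field illumination). The function name, parameters, and chosen transforms are assumptions for illustration, not the authors' method.

```python
import numpy as np

def augment_field_image(img, rng, crop=96):
    """Toy augmentation for a top-down field image.

    img: H x W x 3 float array with values in [0, 1].
    Returns one randomly augmented view of size crop x crop x 3.
    """
    # Random 90-degree rotation: nadir field images have no "up".
    img = np.rot90(img, k=rng.integers(4), axes=(0, 1))
    # Random horizontal flip.
    if rng.random() < 0.5:
        img = img[:, ::-1]
    # Random spatial crop.
    h, w = img.shape[:2]
    y = rng.integers(h - crop + 1)
    x = rng.integers(w - crop + 1)
    img = img[y:y + crop, x:x + crop]
    # Brightness jitter, clipped back into [0, 1].
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)
    return img

# Two differently augmented views of the same image, as a
# self-supervised (e.g., contrastive) pre-training loop would use.
rng = np.random.default_rng(0)
image = np.full((128, 128, 3), 0.5)
view_a = augment_field_image(image, rng)
view_b = augment_field_image(image, rng)
```

In a contrastive setup, `view_a` and `view_b` would be fed through the encoder and pulled together in embedding space before the encoder is fine-tuned on the downstream segmentation tasks.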
Pages: 11786 - 11793
Page count: 8