Detection Method of Corn Weed Based on Mask R-CNN

Cited by: 0
Authors
Jiang H. [1 ]
Zhang C. [1 ]
Zhang Z. [2 ]
Mao W. [3 ]
Wang D. [4 ]
Wang D. [4 ]
Affiliations
[1] College of Information Science and Engineering, Shandong Agricultural University, Tai'an
[2] College of Electron and Electricity Engineering, Baoji University of Arts and Sciences, Baoji
[3] Chinese Academy of Agricultural Mechanization Sciences, Beijing
[4] College of Agronomy, Shandong Agricultural University, Tai'an
[5] College of Mechanical and Electrical Engineering, Qingdao Agricultural University, Qingdao
Keywords
Feature extraction; Image segmentation; Residual neural network; Variable spraying pesticide; Weeds;
DOI
10.6041/j.issn.1000-1298.2020.06.023
Abstract
Accurate detection and identification of weeds is a prerequisite for weed control. To address the low accuracy of weed segmentation in complex field environments, an intelligent weed detection and segmentation method based on Mask R-CNN was proposed. A ResNet-101 backbone was used to extract feature maps carrying both the semantic and the spatial information of weeds. A region proposal network (RPN) classified the feature maps and regressed the pre-selection boxes, and the candidate regions were then screened by a non-maximum suppression (NMS) algorithm. RoIAlign eliminated the boundary misalignment caused by quantization and converted each region of interest (RoI) into a fixed-size feature map. The output module computed classification, regression, and segmentation losses for each RoI and, after training, predicted the category, location, and contour of each candidate region, thereby realizing weed detection and contour segmentation. At an intersection over union (IoU) threshold of 0.5, the mean average precision (mAP) was 0.853, better than that of SharpMask (0.816) and DeepMask (0.795); the per-sample processing times of the three methods were 280 ms, 256 ms, and 248 ms, respectively. The results showed that the method could quickly and accurately detect and segment the category, location, and contour of weeds, outperforming SharpMask and DeepMask. Under the complex field background, the mAP at IoU 0.5 was 0.785 with a per-sample time of 285 ms, indicating that the method can operate in the field under complex backgrounds and meet the real-time control requirements of variable-rate pesticide spraying. In the field variable-rate spraying test, the weed identification accuracy was 91%, the accuracy of identifying weeds and spraying them precisely was 85%, the droplet deposition density was 55 droplets per square centimeter, and the average processing time of the device was 0.98 s, meeting the control standard for variable-rate pesticide spraying. © 2020, Chinese Society of Agricultural Machinery. All rights reserved.
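The candidate-screening step described in the abstract rests on two operations, IoU computation and greedy NMS. A minimal plain-Python sketch (not the paper's implementation; boxes are assumed to be `(x1, y1, x2, y2)` corner tuples and the 0.5 threshold mirrors the IoU setting reported above):

```python
def iou(a, b):
    # Intersection over union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    # Greedy non-maximum suppression: repeatedly keep the highest-scoring
    # box and discard remaining boxes that overlap it by more than `thresh`.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep  # indices of the surviving pre-selection boxes
```

For example, two heavily overlapping weed proposals collapse to the higher-scoring one, while a distant proposal survives.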
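RoIAlign avoids the quantization error of RoIPool by sampling the feature map at fractional coordinates via bilinear interpolation. A minimal sketch of that sampling step (illustrative only; real RoIAlign averages several such samples per output bin, and the feature map here is just a nested list):

```python
import math

def bilinear(feat, y, x):
    # Sample a 2-D feature map at a fractional (y, x) coordinate by
    # blending the four surrounding cells, weighted by proximity.
    y0, x0 = int(math.floor(y)), int(math.floor(x))
    y1 = min(y0 + 1, len(feat) - 1)
    x1 = min(x0 + 1, len(feat[0]) - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0][x0] * (1 - dy) * (1 - dx)
            + feat[y0][x1] * (1 - dy) * dx
            + feat[y1][x0] * dy * (1 - dx)
            + feat[y1][x1] * dy * dx)
```

Because the coordinate is never rounded to an integer cell, an RoI boundary that falls between cells contributes a correctly weighted mixture of its neighbors instead of snapping to one of them.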
Pages: 220-228, 247