Detection of facial gestures of group pigs based on improved Tiny-YOLO

Cited by: 0
Authors
Yan H. [1 ]
Liu Z. [1 ]
Cui Q. [2 ]
Hu Z. [1 ]
Li Y. [1 ]
Institutions
[1] College of Information Science and Engineering, Shanxi Agricultural University, Taigu
[2] College of Engineering, Shanxi Agricultural University, Taigu
Keywords
Channel attention; Image processing; Models; Object detection; Spatial attention; Tiny-YOLO
DOI
10.11975/j.issn.1002-6819.2019.18.021
Abstract
The face of a pig contains rich biometric information, and detection of its facial gestures can provide a basis for individual identification and behavior analysis. However, in group-housed breeding scenes, factors such as variable pig-house lighting and adhesion between pigs pose great challenges to pig face detection. In this paper, group-housed pigs in a real breeding scene are taken as the research object, with video frames as the data source, and a new detection algorithm named DAT-YOLO is proposed based on the attention mechanism and the Tiny-YOLO model, in which channel attention and spatial attention information are introduced into the feature extraction process. High-order features guide low-order features in acquiring channel attention, and low-order features in turn guide high-order features in screening spatial attention; the feature extraction ability and detection accuracy of the model are thereby improved without a significant increase in model parameters. From videos of 5 groups of healthy group-housed pigs aged 20 days to 3.5 months (35 pigs in total), 504 images containing 3 712 face regions were extracted. To obtain the model input data set, the captured frames were pre-processed in two steps: pixel-value padding and scaling. The model output is divided into six classes: horizontal face, horizontal side-face, bow face, bow side-face, rise face and rise side-face. The results show that, on the test set, the average precision (AP) reaches 85.54%, 79.30%, 89.61%, 76.12%, 79.37% and 84.35% for the horizontal face, horizontal side-face, bow face, bow side-face, rise face and rise side-face respectively, and the mean average precision (mAP) is 8.39%, 4.66% and 2.95% higher than that of the original Tiny-YOLO model, the CAT-YOLO model (channel attention only) and the SAT-YOLO model (spatial attention only), respectively. To further verify the transferability of the attention mechanism to other models, the two kinds of attention information were introduced into YOLOV3 under the same experimental conditions to construct the corresponding attention sub-models. Experiments show that, compared with the YOLOV3-based sub-models, the Tiny-YOLO-based sub-models improve mAP by 0.46% to 1.92%. Both the Tiny-YOLO and YOLOV3 series models obtain different degrees of performance improvement after attention information is added, indicating that the attention mechanism is beneficial to accurate and effective facial gesture detection for different groups of pigs. In this study, the data are also pseudo-balanced from the perspective of the loss function to mitigate the imbalance caused by the different numbers of samples per facial gesture category, and the reasons for the differences in detection accuracy among gesture categories are explored. The study can provide a reference for subsequent individual identification and behavior analysis of pigs. © 2019, Editorial Department of the Transactions of the Chinese Society of Agricultural Engineering. All rights reserved.
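The guided attention idea summarized above (high-order features producing channel attention for low-order features, and low-order features producing spatial attention for high-order features) can be illustrated with a minimal PyTorch sketch. The module name, layer shapes, pooling choices and the 7×7 spatial convolution below are assumptions in the spirit of SE/CBAM-style attention, not the paper's actual DAT-YOLO implementation.

```python
# Minimal sketch of dual guided attention, assuming PyTorch and
# SE/CBAM-style building blocks (not taken from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualGuidedAttention(nn.Module):
    """High-level features yield channel attention applied to low-level
    features; low-level features yield spatial attention applied to the
    (upsampled) high-level features."""

    def __init__(self, low_channels: int, high_channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention for the low-level map, derived from globally
        # pooled high-level features.
        self.channel_fc = nn.Sequential(
            nn.Linear(high_channels, high_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(high_channels // reduction, low_channels),
            nn.Sigmoid(),
        )
        # Spatial attention for the high-level map, derived from the
        # low-level map's per-pixel channel statistics.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor):
        # Channel attention: high-order features guide low-order features.
        b, c_high, _, _ = high.shape
        pooled_high = F.adaptive_avg_pool2d(high, 1).view(b, c_high)
        channel_weights = self.channel_fc(pooled_high)            # (b, c_low)
        low_att = low * channel_weights.unsqueeze(-1).unsqueeze(-1)

        # Spatial attention: low-order features guide high-order features.
        avg_map = low.mean(dim=1, keepdim=True)                   # (b, 1, H, W)
        max_map, _ = low.max(dim=1, keepdim=True)                 # (b, 1, H, W)
        spatial_weights = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        # Resize high-level features to the low-level resolution before weighting.
        high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        high_att = high_up * spatial_weights

        return low_att, high_att


if __name__ == "__main__":
    low = torch.randn(2, 128, 52, 52)    # low-order (shallow) feature map
    high = torch.randn(2, 256, 26, 26)   # high-order (deep) feature map
    att = DualGuidedAttention(low_channels=128, high_channels=256)
    low_att, high_att = att(low, high)
    print(low_att.shape, high_att.shape)  # (2, 128, 52, 52) and (2, 256, 52, 52)
```

In this sketch the two attended maps could then be fused (e.g. concatenated) before the detection head; how DAT-YOLO actually combines them is not specified in the abstract.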
Pages: 169-179
Page count: 10