Pain assessment in horses using automatic facial expression recognition through deep learning-based modeling

Cited by: 30
Authors
Lencioni, Gabriel Carreira [1 ]
de Sousa, Rafael Vieira [2 ]
de Souza Sardinha, Edson Jose [2 ]
Correa, Rodrigo Romero [3 ]
Zanella, Adroaldo Jose [1 ]
Affiliations
[1] Univ Sao Paulo, Sch Vet Med & Anim Sci FMVZ, Dept Prevent Vet Med & Anim Hlth, Sao Paulo, SP, Brazil
[2] Univ Sao Paulo, Fac Anim Sci & Food Engn FZEA, Dept Biosyst Engn, Pirassununga, SP, Brazil
[3] Univ Sao Paulo, Sch Vet Med & Anim Sci FMVZ, Dept Surg, Sao Paulo, SP, Brazil
Source
PLOS ONE | 2021, Vol. 16, No. 10
DOI
10.1371/journal.pone.0258672
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
The aim of this study was to develop and evaluate a machine vision algorithm to assess the pain level in horses, using an automatic computational classifier based on the Horse Grimace Scale (HGS) and trained with machine learning methods. The use of the Horse Grimace Scale depends on a human observer, who is rarely available to evaluate the animal for long periods and must also be well trained to apply the scoring system correctly. In addition, even with adequate training, the presence of an unknown person near an animal in pain can cause behavioral changes, making the evaluation more complex. As a possible solution, an automatic video-imaging system could monitor pain responses in horses more accurately and in real time, allowing earlier diagnosis and more efficient treatment of the affected animals. This study is based on the assessment of facial expressions of seven horses that underwent castration, captured by a video system positioned on top of the feeder station at four distinct timepoints daily, for two days before and four days after surgical castration. A labeling process was applied to build a facial-image pain database, and machine learning methods were used to train the computational pain classifier. The machine vision algorithm was developed by training a convolutional neural network (CNN), which achieved an overall accuracy of 75.8% when classifying pain into three levels: not present, moderately present, and obviously present. When classifying into two categories (pain not present and pain present), the overall accuracy reached 88.3%. Although improvements are still needed before the system can be used in a daily routine, the model appears promising and capable of automatically measuring pain from facial expressions in images of horses extracted from video.
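The abstract describes frame-level pain classification with a CNN but does not specify the network architecture or training setup. The following is a minimal illustrative sketch of a three-level image classifier in PyTorch; the ResNet-18 backbone, 224x224 input size, optimizer, and learning rate are assumptions made here for demonstration, not the authors' published configuration.

```python
# Minimal sketch of a three-level facial pain classifier (assumed setup,
# not the paper's actual architecture or hyperparameters).
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

PAIN_LEVELS = ["not_present", "moderately_present", "obviously_present"]

# Frame preprocessing: resize video frames and normalize with ImageNet
# statistics (an assumption; the paper's preprocessing is not given).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_classifier(num_classes: int = len(PAIN_LEVELS)) -> nn.Module:
    """Return a CNN with a replacement head for pain-level classification."""
    model = models.resnet18(weights=None)  # pretrained weights could be used instead
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_classifier()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One dummy training step on blank frames, standing in for labeled
    # facial images extracted from the feeder-station video.
    frames = [Image.new("RGB", (640, 480)) for _ in range(8)]
    images = torch.stack([preprocess(f) for f in frames])
    labels = torch.randint(0, len(PAIN_LEVELS), (8,))

    optimizer.zero_grad()
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"dummy loss: {loss.item():.3f}")
```

The three output classes mirror the paper's pain levels (not present, moderately present, obviously present); the binary variant reported in the abstract would simply use a two-class head instead.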
Pages: 12