Convolutional Neural Network-Based Automated System for Dog Tracking and Emotion Recognition in Video Surveillance

Cited: 4
Authors
Chen, Huan-Yu [1 ]
Lin, Chuen-Horng [1 ]
Lai, Jyun-Wei [1 ]
Chan, Yung-Kuan [2 ]
Affiliations
[1] Natl Taichung Univ Sci & Technol, Dept Comp Sci & Informat Engn, 129,Sec 3,Sanmin Rd, Taichung 404, Taiwan
[2] Natl Chung Hsing Univ, Dept Management Informat Syst, 145 Xingda Rd, Taichung 402, Taiwan
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 7
Keywords
convolutional neural networks; dog detection; dog tracking; dog emotion recognition; long short-term memory;
DOI
10.3390/app13074596
CLC number
O6 [Chemistry];
Discipline code
0703 ;
Abstract
This paper proposes a multi-convolutional neural network (CNN)-based system for detecting, tracking, and recognizing the emotions of dogs in surveillance videos. The system detects dogs in each frame of a video, tracks them across frames, and recognizes their emotions. Dog detection uses a YOLOv3 model. The dogs are tracked in real time with a deep association metric model (DeepDogTrack), which combines a Kalman filter with a CNN. The dogs' emotional behaviors are then categorized into three types, namely angry (or aggressive), happy (or excited), and neutral (or general) behaviors, on the basis of manual judgments made by veterinary experts and dog breeders. The system extracts sub-images of the dogs from the videos, determines whether these images are sufficient for emotion recognition, and uses the long short-term deep features of dog memory networks (LDFDMN) model to identify each dog's emotions. Dog detection experiments were conducted on two image datasets to verify the model's effectiveness; the detection accuracy rates were 97.59% and 94.62%, respectively. Detection errors occurred when the dog's facial features were obscured, the dog was of an unusual breed, the dog's body was partially covered, or the dog region was incomplete. Dog-tracking experiments were conducted on three video datasets, each containing one or more dogs. The highest tracking accuracy rate (93.02%) was achieved when only one dog was in the video; the highest rate for a video containing multiple dogs was 86.45%. Tracking errors occurred when the area covered by a dog's body changed as the dog entered or left the frame, resulting in tracking loss. Dog emotion recognition experiments were conducted on two video datasets; the recognition accuracy rates were 81.73% and 76.02%, respectively.
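The real-time tracker described above pairs a Kalman filter with CNN appearance features. A minimal sketch of the motion-prediction half follows; the class name, constant-velocity state layout, and noise levels are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

class KalmanTrack:
    """Constant-velocity Kalman filter for one bounding-box centre.

    State x = [px, py, vx, vy]; only the position is observed, the
    velocity is inferred. A DeepSORT-style tracker would combine the
    predicted positions with CNN appearance features for association.
    """

    def __init__(self, px, py):
        self.x = np.array([px, py, 0.0, 0.0])
        self.P = np.eye(4) * 10.0                 # state covariance
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0         # dt = 1 frame
        self.H = np.eye(2, 4)                     # observe position only
        self.Q = np.eye(4) * 1e-2                 # process noise (assumed)
        self.R = np.eye(2) * 1.0                  # measurement noise (assumed)

    def predict(self):
        """Advance the state one frame; return the predicted centre."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a detected centre z = [px, py]."""
        y = np.asarray(z, float) - self.H @ self.x          # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

For a dog moving at a steady 2 px/frame, the filter quickly learns the velocity, so `predict()` anticipates where the detector should find the dog in the next frame even if one detection is missed.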
Recognition errors occurred when removing the image background left the dog region unclear, causing the wrong emotion to be recognized. Of the three emotions, anger was the most prominently represented; the recognition rates for angry emotions were therefore higher than those for happy or neutral emotions. Emotion recognition errors also occurred when the dog's movements were too subtle or too fast, the image was blurred, the shooting angle was suboptimal, or the video resolution was too low. Nevertheless, the experiments revealed that the proposed system can correctly recognize the emotions of dogs in videos. The accuracy of the system can be increased substantially by using more images and videos to train the detection, tracking, and emotion recognition models. The system could then be applied in real-world settings to assist in the early identification of dogs likely to exhibit aggressive behavior.
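The LDFDMN model assigns one of the three emotion classes to a sequence of dog sub-images. A deliberately simplified stand-in for that temporal step is shown below: it averages hypothetical per-frame class scores over time and takes the argmax, whereas the paper's LSTM-based model learns the temporal fusion instead. Function and variable names are illustrative only:

```python
import numpy as np

# The paper's three emotion classes, in a fixed (assumed) order.
EMOTIONS = ("angry", "happy", "neutral")

def video_emotion(frame_scores):
    """Fuse per-frame class scores into one video-level label.

    frame_scores: iterable of [angry, happy, neutral] score triples,
    one per frame (e.g. softmax outputs of a per-frame CNN). Averaging
    over time is a crude substitute for an LSTM over deep features.
    """
    mean = np.mean(np.asarray(frame_scores, float), axis=0)
    return EMOTIONS[int(np.argmax(mean))]
```

A single blurred or mistimed frame then cannot flip the video-level label on its own, which is one reason temporal models outperform per-frame classification on the error cases listed above.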
Pages: 29