A comparison on visual prediction models for MAMO (multi activity-multi object) recognition using deep learning

Cited by: 0
Authors
Budi Padmaja
Madhu Bala Myneni
Epili Krishna Rao Patro
Affiliations
[1] Institute of Aeronautical Engineering, Department of Computer Science and Engineering
Keywords
Multi-activity; Human activity recognition; Computer vision; YOLO; Video sequences
DOI: not available
Abstract
Multi activity-multi object (MAMO) recognition is a challenging task for visual systems that monitor, recognize and raise alerts in public places such as universities, hospitals and airports. Both academic and commercial researchers are working towards automatic tracking of human activities in intelligent video surveillance using deep learning frameworks; this capability is required in many real-time applications, for example detecting unusual or suspicious behaviour in crime events. The primary purpose of this paper is to predict multi-class activities of individuals as well as groups from video sequences using the state-of-the-art object detector You Only Look Once (YOLOv3). By making optimal use of the geographical information of the cameras together with the YOLO object detection framework, a deep landmark model recognizes simple to complex human actions on grayscale to RGB image frames of video sequences. The model is tested and compared against various benchmark datasets and found to be the most precise of the evaluated models for detecting human activities in video streams. Analysis of the experimental results shows that the proposed method achieves superior performance and high accuracy.
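The pipeline the abstract describes (per-frame detections from a YOLOv3-style detector, aggregated over a video sequence into multi-object, multi-activity labels) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the detector here is a stub returning hard-coded detections, and all class names and the confidence threshold are hypothetical.

```python
from collections import Counter

def detect_stub(frame_id):
    """Hypothetical per-frame detections: (label, confidence) pairs that a
    YOLOv3-style detector might emit for each frame of a video sequence.
    In the real system these would come from running the trained network."""
    sample = {
        0: [("person_walking", 0.91), ("car", 0.88)],
        1: [("person_walking", 0.85), ("person_running", 0.62)],
        2: [("person_running", 0.79), ("car", 0.90)],
    }
    return sample.get(frame_id, [])

def aggregate_activities(num_frames, conf_threshold=0.5):
    """Aggregate per-frame detections into sequence-level activity counts,
    discarding detections below the confidence threshold."""
    counts = Counter()
    for frame_id in range(num_frames):
        for label, conf in detect_stub(frame_id):
            if conf >= conf_threshold:
                counts[label] += 1
    return counts

if __name__ == "__main__":
    print(aggregate_activities(3).most_common())
```

A real deployment would replace `detect_stub` with a forward pass of the trained network over each decoded video frame; the aggregation step is what turns frame-level object detections into sequence-level multi-activity labels.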
Related papers
50 results in total
  • [1] A comparison on visual prediction models for MAMO (multi activity-multi object) recognition using deep learning
    Padmaja, Budi
    Myneni, Madhu Bala
    Patro, Epili Krishna Rao
    JOURNAL OF BIG DATA, 2020, 7 (01)
  • [2] Fusion of tactile and visual information in deep learning models for object recognition
    Babadian, Reza Pebdani
    Faez, Karim
    Amiri, Mahmood
    Falotico, Egidio
    INFORMATION FUSION, 2023, 92 : 313 - 325
  • [3] Multi-Modal ISAR Object Recognition using Adaptive Deep Relation Learning
    Xue, Bin
    Tong, Ningning
    2019 INTERNATIONAL CONFERENCE ON WIRELESS COMMUNICATIONS, SIGNAL PROCESSING AND NETWORKING (WISPNET 2019): ADVANCING WIRELESS AND MOBILE COMMUNICATIONS TECHNOLOGIES FOR 2020 INFORMATION SOCIETY, 2019, : 48 - 53
  • [4] Multi-Class Object Classification using Deep Learning Models in Automotive Object Detection Scenarios
    Soumya, A.
    Cenkeramaddi, Linga Reddy
    Vishnu, Chalavadi
    Mohan, Krishna C.
    SIXTEENTH INTERNATIONAL CONFERENCE ON MACHINE VISION, ICMV 2023, 2024, 13072
  • [5] Multi-Label Human Activity Recognition on Image Using Deep Learning
    Nikolaev, Pavel
    PROCEEDINGS OF THE 7TH SCIENTIFIC CONFERENCE ON INFORMATION TECHNOLOGIES FOR INTELLIGENT DECISION MAKING SUPPORT (ITIDS 2019), 2019, 166 : 141 - 145
  • [6] A Deep Learning Framework Using Convolutional Neural Network for Multi-class Object Recognition
    Hayat, Shaukat
    She Kun
    Zuo Tengtao
    Yue Yu
    Tu, Tianyi
    Du, Yantong
    2018 IEEE 3RD INTERNATIONAL CONFERENCE ON IMAGE, VISION AND COMPUTING (ICIVC), 2018, : 194 - 198
  • [7] Unsupervised Learning of Visual Object Recognition Models
    Navarrete, Dulce J.
    Morales, Eduardo F.
    Enrique Sucar, Luis
    ADVANCES IN ARTIFICIAL INTELLIGENCE - IBERAMIA 2012, 2012, 7637 : 511 - 520
  • [8] A Sustainable Deep Learning Framework for Object Recognition Using Multi-Layers Deep Features Fusion and Selection
    Rashid, Muhammad
    Khan, Muhammad Attique
    Alhaisoni, Majed
    Wang, Shui-Hua
    Naqvi, Syed Rameez
    Rehman, Amjad
    Saba, Tanzila
    SUSTAINABILITY, 2020, 12 (12)
  • [9] Multi-Sensors System and Deep Learning Models for Object Tracking
    El Natour, Ghina
    Bresson, Guillaume
    Trichet, Remi
    SENSORS, 2023, 23 (18)
  • [10] Infrared Multi-Object Detection Using Deep Learning
    Aboalia, Hossam
    Hussein, Sherif
    Mahmoud, Alaaeldin
    2024 14TH INTERNATIONAL CONFERENCE ON ELECTRICAL ENGINEERING, ICEENG 2024, 2024, : 175 - 177