A methodology for image annotation of human actions in videos

Cited by: 0
Authors
Moomina Waheed
Shahid Hussain
Arif Ali Khan
Mansoor Ahmed
Bashir Ahmad
Affiliations
[1] COMSATS University, College of CS&T
[2] Nanjing University of Aeronautics and Astronautics
[3] Department of Computer Science
[4] Qurtuba University of Science & Information Technology
Keywords
Image annotation; SIFT; Clustering; Semantic analysis; Image labeling; Action recognition
DOI: not available
Abstract
In the context of video-based image classification, image annotation plays a vital role in improving classification decisions based on image semantics. Several annotation methods, such as manual and semi-supervised approaches, have been introduced; however, the need for formal specification, high cost, high error rates, and long computation times remain major obstacles. To overcome these issues, we propose a new image annotation technique consisting of three tiers: frame extraction, interest point generation, and clustering. The aim of the proposed technique is to automate label generation for video frames. Moreover, an evaluation model is used to assess the effectiveness of the proposed technique. The promising results indicate its effectiveness (77% in terms of Adjusted Rand Index) in the context of label generation for video frames. Finally, a comparative analysis is made between the proposed methodology and existing techniques.
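The reported effectiveness figure uses the Adjusted Rand Index (ARI), which measures agreement between the cluster-generated labels and ground-truth labels, corrected for chance. As a minimal illustration of how such a score is computed (the paper's own evaluation model is not reproduced here), the standard contingency-table ARI formula in pure Python:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index between two labelings of the same items.

    Standard contingency-table formula: 1.0 means identical clusterings
    (up to label permutation); values near 0 mean chance-level agreement.
    """
    n = len(labels_true)
    # Contingency counts: how many items share each (true, predicted) pair.
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level pair agreement
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:               # degenerate: a single cluster
        return 1.0
    return (sum_ij - expected) / (max_index - expected)

# A perfect relabeling still scores 1.0, since ARI ignores label names.
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # → 1.0
```

A score of 0.77, as reported in the abstract, would thus indicate substantially better-than-chance agreement between the automatically generated frame labels and the reference annotation.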
Pages: 24347 - 24365 (18 pages)
Related papers (50 total)
  • [41] Automatic group activity annotation for mobile videos
    Zhao, Chaoyang
    Wang, Jinqiao
    Li, Jianqiang
    Lu, Hanqing
    MULTIMEDIA SYSTEMS, 2017, 23 (06) : 667 - 677
  • [42] Use of context in automatic annotation of sports videos
    Kolonias, I
    Christmas, W
    Kittler, J
    PROGRESS IN PATTERN RECOGNITION, IMAGE ANALYSIS AND APPLICATIONS, 2004, 3287 : 1 - 12
  • [43] Recognizing Human Actions From Noisy Videos via Multiple Instance Learning
    Sener, Fadime
    Samet, Nermin
    Duygulu, Pinar
    Ikizler-Cinbis, Nazli
    2013 21ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE (SIU), 2013,
  • [44] Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos
    Heilbron, Fabian Caba
    Niebles, Juan Carlos
    Ghanem, Bernard
    2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 1914 - 1923
  • [45] Learning to Detect, Associate, and Recognize Human Actions and Surrounding Scenes in Untrimmed Videos
    Park, Jungin
    Jeon, Sangryul
    Kim, Seungryong
    Lee, Jiyoung
    Kim, Sunok
    Sohn, Kwanghoon
    PROCEEDINGS OF THE 1ST WORKSHOP AND CHALLENGE ON COMPREHENSIVE VIDEO UNDERSTANDING IN THE WILD (COVIEW'18), 2018, : 21 - 26
  • [46] Classification of Human Actions in Videos with a Large-Scale Photonic Reservoir Computer
    Antonik, Piotr
    Marsal, Nicolas
    Brunner, Daniel
    Rontani, Damien
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2019: WORKSHOP AND SPECIAL SESSIONS, 2019, 11731 : 156 - 160
  • [47] A Methodology for low-cost Image Annotation based on Conceptual Modeling: a Biological Example
    Da Costa, Arnaud
    Savonnet, Marinette
    Leclercq, Eric
    Terrasse, Marie-Noelle
    SITIS 2007: PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON SIGNAL IMAGE TECHNOLOGIES & INTERNET BASED SYSTEMS, 2008, : 18 - 25
  • [48] Fine-Tuning CNN Image Retrieval with No Human Annotation
    Radenovic, Filip
    Tolias, Giorgos
    Chum, Ondrej
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2019, 41 (07) : 1655 - 1668
  • [49] Block Annotation: Better Image Annotation with Sub-Image Decomposition
    Lin, Hubert
    Upchurch, Paul
    Bala, Kavita
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 5289 - 5299
  • [50] Parsing videos of actions with segmental grammars
    Pirsiavash, Hamed
    Ramanan, Deva
    2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 612 - 619