Efficient Search and Localization of Human Actions in Video Databases

Cited by: 58
Authors
Shao, Ling [1 ,2 ]
Jones, Simon [2 ]
Li, Xuelong [3 ]
Affiliations
[1] Nanjing Univ Informat Sci & Technol, Coll Elect & Informat Engn, Nanjing 210044, Jiangsu, Peoples R China
[2] Univ Sheffield, Dept Elect & Elect Engn, Sheffield S1 3JD, S Yorkshire, England
[3] Chinese Acad Sci, Xian Inst Opt & Precis Mech, State Key Lab Transient Opt & Photon, Ctr Opt Imagery Anal & Learning OPTIMAL, Xian 710119, Shaanxi, Peoples R China
Funding
National Natural Science Foundation of China; Engineering and Physical Sciences Research Council (UK);
Keywords
Human actions; relevance feedback; spatio-temporal localization; video retrieval; RELEVANCE FEEDBACK; RECOGNITION; RETRIEVAL;
DOI
10.1109/TCSVT.2013.2276700
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
As digital video databases grow, so grows the problem of effectively navigating through them. In this paper, we propose a novel content-based video retrieval approach to searching such video databases, specifically those involving human actions, incorporating spatio-temporal localization. We outline a novel, highly efficient localization model that first performs temporal localization based on histograms of evenly spaced time-slices, then spatial localization based on histograms of a 2-D spatial grid. We further argue that our retrieval model, based on the aforementioned localization followed by relevance ranking, results in a highly discriminative system, while remaining an order of magnitude faster than the current state-of-the-art method. We also show how relevance feedback can be applied to our localization and ranking algorithms. As a result, the presented system is more directly applicable to real-world problems than any prior content-based video retrieval system.
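The two-stage localization described in the abstract (temporal localization over evenly spaced time-slice histograms, then spatial localization over a 2-D grid of histograms) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact method: the point-based video representation, the histogram-intersection score, and the single-best-slice/cell selection are all assumptions made for the example.

```python
# Sketch of two-stage (temporal, then spatial) action localization via
# bag-of-visual-words histograms. Feature points are dicts with keys
# "t" (time), "x", "y" (position), and "word" (visual-word index) -- an
# assumed representation, not the paper's actual feature pipeline.

def histogram(points, n_words):
    """Bag-of-visual-words histogram over a set of feature points."""
    h = [0] * n_words
    for p in points:
        h[p["word"]] += 1
    return h

def intersection(h1, h2):
    """Normalized histogram-intersection similarity in [0, 1]."""
    overlap = sum(min(a, b) for a, b in zip(h1, h2))
    denom = max(1, min(sum(h1), sum(h2)))
    return overlap / denom

def temporal_localize(points, query_hist, n_slices, duration, n_words):
    """Stage 1: score evenly spaced time-slices against the query."""
    slice_len = duration / n_slices
    best_score, best_slice = -1.0, 0
    for i in range(n_slices):
        t0, t1 = i * slice_len, (i + 1) * slice_len
        sl = [p for p in points if t0 <= p["t"] < t1]
        score = intersection(histogram(sl, n_words), query_hist)
        if score > best_score:
            best_score, best_slice = score, i
    return best_slice, best_score

def spatial_localize(points, query_hist, grid, width, height, n_words):
    """Stage 2: score cells of a 2-D spatial grid within the chosen window."""
    cw, ch = width / grid, height / grid
    best_score, best_cell = -1.0, (0, 0)
    for gx in range(grid):
        for gy in range(grid):
            cell = [p for p in points
                    if gx * cw <= p["x"] < (gx + 1) * cw
                    and gy * ch <= p["y"] < (gy + 1) * ch]
            score = intersection(histogram(cell, n_words), query_hist)
            if score > best_score:
                best_score, best_cell = score, (gx, gy)
    return best_cell, best_score
```

Because each stage only compares fixed-size histograms rather than exhaustively scoring every spatio-temporal subvolume, the search cost grows with the number of slices and grid cells, which is consistent with the efficiency argument in the abstract.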
Pages: 504-512 (9 pages)