Camera Motion-Based Analysis of User Generated Video

Cited by: 42
Authors: Abdollahian, Golnaz [1]; Taskiran, Cuneyt M. [2]; Pizlo, Zygmunt [3]; Delp, Edward J. [1]
Affiliations:
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] Motorola Inc, Applicat Res & Technol Ctr, Schaumburg, IL 60196 USA
[3] Purdue Univ, Dept Psychol Sci, W Lafayette, IN 47907 USA
Keywords: Content-based video analysis; eye tracking; home video; motion-based analysis; regions of interest; saliency maps; user generated video; video summarization; COMPRESSED VIDEO; VISUAL-ATTENTION; MODEL
DOI: 10.1109/TMM.2009.2036286
Chinese Library Classification: TP [automation and computer technology]
Discipline code: 0812
Abstract
In this paper, we propose a system for the analysis of user generated video (UGV). UGV often has a rich camera motion structure that is generated at the time the video is recorded by the person taking the video, i.e., the "camera person." We exploit this structure by defining a new concept known as camera view for temporal segmentation of UGV. The segmentation provides a video summary with unique properties that is useful in applications such as video annotation. Camera motion is also a powerful feature for the identification of keyframes and regions of interest (ROIs), since it is an indicator of the camera person's interests in the scene and can also attract the viewers' attention. We propose a new location-based saliency map that is generated from camera motion parameters. This map is combined with other saliency maps generated using features such as color contrast, object motion, and face detection to determine the ROIs. In order to evaluate our methods, we conducted several user studies. A subjective evaluation indicated that our system produces results that are consistent with viewers' preferences. We also examined the effect of camera motion on human visual attention through an eye tracking experiment. The results showed a high dependency between the distribution of the viewers' fixation points and the direction of camera movement, which is consistent with our location-based saliency map.
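The abstract describes combining a location-based saliency map (derived from camera motion) with feature-based maps to determine ROIs. A minimal sketch of such a weighted map combination and ROI thresholding is shown below; the equal weights, the weighted-sum fusion rule, and the toy maps are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def combine_saliency_maps(maps, weights=None):
    """Fuse per-feature saliency maps (H x W arrays) with a weighted sum.

    Each map is normalized to [0, 1] first; weights default to uniform.
    This is an assumed fusion rule, not the paper's exact one.
    """
    maps = [m / m.max() if m.max() > 0 else m for m in maps]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    combined = sum(w * m for w, m in zip(weights, maps))
    return combined / combined.max() if combined.max() > 0 else combined

def extract_roi_mask(saliency, threshold=0.6):
    """Threshold the fused map to obtain a binary region-of-interest mask."""
    return saliency >= threshold

# Toy stand-ins: a "location" map that rises toward the direction of camera
# motion, and a color-contrast map with one salient patch.
h, w = 4, 6
location = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
contrast = np.zeros((h, w))
contrast[1:3, 2:4] = 1.0

combined = combine_saliency_maps([location, contrast], weights=[0.5, 0.5])
mask = extract_roi_mask(combined, threshold=0.6)
```

The patch where the hypothetical contrast map and the motion-direction gradient overlap ends up with the highest fused saliency and survives the threshold.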
Pages: 28-41 (14 pages)