An Unsupervised Method for Summarizing Egocentric Sport Videos

Cited: 0
|
Authors
Habibi Aghdam, Hamed [1 ]
Jahani Heravi, Elnaz [1 ]
Puig, Domenec [1 ]
Affiliations
[1] Univ Rovira & Virgili, Comp Engn & Math Dept, Tarragona, Spain
Keywords
Video Summarizing; Egocentric Video; Temporal Segmentation; Sparse Coding; ONLINE;
DOI
10.1117/12.2228883
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
People are increasingly interested in recording their sport activities using head-worn or hand-held cameras. Such videos, called egocentric sport videos, exhibit different motion and appearance patterns from life-logging videos. While a life-logging video can be described in terms of well-defined human-object interactions, it is not trivial to describe egocentric sport videos using well-defined activities. For this reason, summarizing egocentric sport videos based on human-object interaction might fail to produce meaningful results. In this paper, we propose an unsupervised method for summarizing egocentric videos by identifying the key frames of the video. Our method utilizes both appearance and motion information, and it automatically determines the number of key frames. Our blind user study on a new dataset collected from YouTube shows that in 93.5% of cases, users chose the proposed method as their first video-summary choice. In addition, our method is among the users' top two choices in 99% of the studies.
Pages: 5
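The abstract describes an unsupervised pipeline that selects key frames from appearance and motion features and automatically determines how many key frames to keep. As a rough illustration only (not the paper's actual sparse-coding formulation), the sketch below clusters per-frame descriptors with a naive k-means and grows the cluster count until the distortion stops improving noticeably; the function name `summarize_keyframes` and all parameters are hypothetical.

```python
import numpy as np

def summarize_keyframes(features, max_k=10, tol=0.05, seed=0):
    """Illustrative key-frame selection by clustering frame descriptors.

    features: (n_frames, d) array of per-frame appearance+motion descriptors.
    The number of key frames is chosen automatically: the cluster count k
    grows until adding one more cluster reduces total distortion by less
    than a `tol` fraction. Returns sorted indices of the frames nearest
    each cluster centre.
    """
    rng = np.random.default_rng(seed)
    n = len(features)
    prev_distortion, best = None, None
    for k in range(1, min(max_k, n) + 1):
        # Naive k-means: random init from data points, a few refinement steps.
        centres = features[rng.choice(n, k, replace=False)]
        for _ in range(20):
            dists = np.linalg.norm(features[:, None] - centres[None], axis=2)
            labels = dists.argmin(axis=1)
            centres = np.array([
                features[labels == j].mean(axis=0) if (labels == j).any()
                else centres[j]
                for j in range(k)
            ])
        distortion = dists[np.arange(n), labels].sum()
        # Stop growing k once the relative improvement falls below tol.
        if prev_distortion is not None and \
                prev_distortion - distortion < tol * prev_distortion:
            break
        prev_distortion, best = distortion, centres
    # Key frame = the frame closest to each retained cluster centre.
    keys = [int(np.argmin(np.linalg.norm(features - c, axis=1))) for c in best]
    return sorted(set(keys))
```

With well-separated segments in the video, the distortion drop flattens once each segment has its own cluster, which is what lets the key-frame count emerge from the data rather than being fixed in advance.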