Summarizing egocentric videos using deep features and optimal clustering

Cited by: 11
Authors
Sahu, Abhimanyu [1 ]
Chowdhury, Ananda S. [1 ]
Affiliation
[1] Jadavpur Univ, Dept Elect & Telecommun Engn, Kolkata 700032, India
Keywords
Egocentric video summarization; Deep features; Center-surround model; Integer knapsack; Framework
DOI
10.1016/j.neucom.2020.02.099
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this paper, we address the problem of summarizing egocentric videos using deep features and an optimal clustering approach. Each frame of an egocentric video is represented by deep features extracted from an augmented pre-trained convolutional neural network (CNN). An optimal clustering algorithm for K-means, based on a center-surround model (CSM) and an integer knapsack (IK) formulation and termed CSMIK K-means, is then applied to obtain the summary. In the center-surround model, we compute the differences in entropy and optical flow between the central region and the surrounding region of each frame. In the integer knapsack formulation, each cluster is treated as an item whose cost is assigned from the center-surround model. A candidate set of cluster counts for CSMIK K-means is obtained from the chi-square distance between the color histograms of successive frames. CSMIK K-means evaluates the different cluster formations and simultaneously determines the optimal number of clusters and the corresponding summary. Experimental evaluation on four well-known benchmark datasets clearly indicates the superiority of the proposed method over several state-of-the-art approaches. (C) 2020 Elsevier B.V. All rights reserved.
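To make the pipeline concrete, below is a minimal sketch in Python with OpenCV and NumPy of the two frame-level measurements the abstract names: the center-surround differences in entropy and optical flow, and the chi-square distance between color histograms of successive frames. The function names, the 60% central-window fraction, and the boundary threshold are illustrative assumptions, not the authors' implementation.

    import cv2
    import numpy as np

    def center_surround_scores(prev_gray, gray, center_frac=0.6):
        """Differences in entropy and mean optical-flow magnitude between a
        central window and its surround (hypothetical center-surround model;
        center_frac is an assumed window size, not from the paper)."""
        h, w = gray.shape
        ch, cw = int(h * center_frac), int(w * center_frac)
        top, left = (h - ch) // 2, (w - cw) // 2
        center = np.zeros((h, w), dtype=bool)
        center[top:top + ch, left:left + cw] = True

        def entropy(pixels):
            # Shannon entropy of the gray-level distribution in a region.
            hist, _ = np.histogram(pixels, bins=256, range=(0, 256), density=True)
            hist = hist[hist > 0]
            return -np.sum(hist * np.log2(hist))

        # Dense optical flow between consecutive frames (Farneback).
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)

        d_entropy = abs(entropy(gray[center]) - entropy(gray[~center]))
        d_flow = abs(mag[center].mean() - mag[~center].mean())
        return d_entropy, d_flow

    def chi_square(h1, h2, eps=1e-10):
        # Chi-square distance between two normalized histograms.
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def candidate_cluster_counts(frames, bins=16, thresh=0.25):
        """Candidate numbers of clusters from chi-square distances between
        color histograms of successive frames (thresh is an assumed cut)."""
        hists = []
        for f in frames:  # frames: list of BGR images
            h = cv2.calcHist([f], [0, 1, 2], None, [bins] * 3,
                             [0, 256, 0, 256, 0, 256]).ravel()
            hists.append(h / (h.sum() + 1e-10))
        jumps = sum(chi_square(hists[i], hists[i + 1]) > thresh
                    for i in range(len(hists) - 1))
        # Each above-threshold jump suggests a scene change, so jumps + 1
        # bounds the candidate cluster counts handed to CSMIK K-means.
        return range(2, jumps + 2)

As described in the abstract, each candidate count would then be scored by CSMIK K-means, which treats clusters as knapsack items whose costs come from the center-surround scores and thereby selects both the optimal number of clusters and the frames forming the summary.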
Pages: 209-221
Number of pages: 13
Related papers
50 records in total (items 41-50 shown)
• [41] Happy Emotion Recognition From Unconstrained Videos Using 3D Hybrid Deep Features. Samadiani, Najmeh; Huang, Guangyan; Hu, Yu; Li, Xiaowei. IEEE ACCESS, 2021, 9: 35524-35538.
• [42] A facial expression recognition system using robust face features from depth videos and deep learning. Uddin, Md. Zia; Hassan, Mohammed Mehedi; Almogren, Ahmad; Zuair, Mansour; Fortino, Giancarlo; Torresen, Jim. COMPUTERS & ELECTRICAL ENGINEERING, 2017, 63: 114-125.
• [43] Classification of Sleep Videos Using Deep Learning. Choe, Jeehyun; Schwichtenberg, A. J.; Delp, Edward J. 2019 2ND IEEE CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL (MIPR 2019), 2019: 115-120.
• [44] Deep Trajectory Representation-Based Clustering for Motion Pattern Extraction in Videos. Boyle, Jonathan; Nawaz, Tahir; Ferryman, James. 2017 14TH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE (AVSS), 2017.
• [45] TRINet: Tracking and Re-identification Network for Multiple Targets in Egocentric Videos Using LSTMs. Nigam, Jyoti; Rameshan, Renu M. COMPUTER ANALYSIS OF IMAGES AND PATTERNS, CAIP 2019, PT II, 2019, 11679: 438-448.
• [46] Driving Behavior Aware Caption Generation for Egocentric Driving Videos Using In-Vehicle Sensors. Zhang, Hongkuan; Takeda, Koichi; Sasano, Ryohei; Adachi, Yusuke; Ohtani, Kento. 2021 IEEE INTELLIGENT VEHICLES SYMPOSIUM WORKSHOPS (IV WORKSHOPS), 2021: 287-292.
• [47] Creating Deep Learning-based Acrobatic Videos Using Imitation Videos. Choi, Jong In; Nam, Sang Hun. KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2021, 15(2): 713-728.
• [48] Action Recognition Based on Linear Dynamical Systems with Deep Features in Videos. Du, Zhouning; Mukaidani, Hiroaki; Saravanakumar, Ramasamy. 2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020: 2634-2639.
• [49] Joint Coding of Local and Global Deep Features in Videos for Visual Search. Ding, Lin; Tian, Yonghong; Fan, Hongfei; Chen, Changhuai; Huang, Tiejun. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29: 3734-3749.
• [50] Image Steganalysis Based on Deep Content Features Clustering. Mo, Chengyu; Liu, Fenlin; Zhu, Ma; Yan, Gengcong; Qi, Baojun; Yang, Chunfang. CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76(3): 2921-2936.