Efficient Video Object Co-Localization With Co-Saliency Activated Tracklets

Cited by: 28
Authors
Jerripothula, Koteswar Rao [1 ]
Cai, Jianfei [2 ]
Yuan, Junsong [3 ]
Affiliations
[1] Graph Era Univ, Dehra Dun 248002, Uttar Pradesh, India
[2] Nanyang Technol Univ, Singapore 639798, Singapore
[3] SUNY Buffalo, Buffalo, NY 14260 USA
Funding
National Research Foundation of Singapore;
Keywords
Tracklets; video; co-localization; co-saliency; segmentation;
DOI
10.1109/TCSVT.2018.2805811
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology];
Discipline classification codes
0808; 0809;
Abstract
Video object co-localization is the task of jointly localizing common visual objects across videos. Due to the large variations both across videos and within each video, it is quite challenging to identify and track the common objects jointly. Unlike previous joint frameworks that use a large number of bounding-box proposals to attack the problem, we propose to leverage co-saliency activated tracklets to address it efficiently. To highlight the common object regions, we first explore inter-video commonness, intra-video commonness, and motion saliency to generate co-saliency maps for a small number of selected key frames at regular intervals. Object proposals with high objectness and co-saliency scores in those frames are tracked across each interval to build tracklets. Finally, the best tube for a video is obtained by selecting the optimal tracklet from each interval with the help of confidence and smoothness constraints. Experimental results on the benchmark YouTube-Objects dataset show that the proposed method outperforms state-of-the-art methods in terms of both accuracy and speed under weakly supervised and unsupervised settings. Moreover, noticing that the existing benchmark dataset lacks sufficient annotations for object localization (only one annotated frame per video), we further annotate more than 15k frames of the YouTube videos and develop a new benchmark dataset for video co-localization.
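The final step the abstract describes, choosing one tracklet per interval so that the resulting tube balances per-tracklet confidence against smoothness between consecutive tracklets, couples only adjacent intervals, so it can be solved with a Viterbi-style dynamic program. The sketch below is a hypothetical illustration of that idea, not the authors' code; the tracklet fields (`conf`, `first`, `last`), the IoU-based smoothness term, and `smooth_weight` are assumptions for this example.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def select_tube(intervals, smooth_weight=1.0):
    """Pick one tracklet per interval to maximize total confidence plus a
    smoothness bonus (IoU between the last box of one tracklet and the
    first box of the next). intervals: list of lists of dicts with keys
    'conf', 'first', 'last'. Returns the chosen index per interval."""
    n = len(intervals)
    # score[i][j]: best accumulated score ending with tracklet j of interval i
    score = [[t['conf'] for t in intervals[0]]]
    back = []  # back-pointers for path recovery
    for i in range(1, n):
        row, brow = [], []
        for t in intervals[i]:
            best, arg = max(
                (score[i - 1][k] + smooth_weight * iou(p['last'], t['first']), k)
                for k, p in enumerate(intervals[i - 1]))
            row.append(best + t['conf'])
            brow.append(arg)
        score.append(row)
        back.append(brow)
    # Trace the optimal path back from the best final tracklet.
    j = max(range(len(score[-1])), key=lambda k: score[-1][k])
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return path[::-1]
```

With a strong enough smoothness weight, a spatially consistent but lower-confidence tracklet can beat an isolated high-confidence one, which is the behavior the confidence-plus-smoothness objective is after.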
Pages: 744-755
Page count: 12
Related papers
50 records in total
  • [31] Co-saliency Detection Based on Hierarchical Consistency
    Li, Bo
    Sun, Zhengxing
    Wang, Quan
    Li, Qian
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 1392 - 1400
  • [32] Exemplar-based image saliency and co-saliency detection
    Huang, Rui
    Feng, Wei
    Wang, Zezheng
    Xing, Yan
    Zou, Yaobin
    NEUROCOMPUTING, 2020, 371 : 147 - 157
  • [33] Co-Saliency Detection Based on Hierarchical Segmentation
    Liu, Zhi
    Zou, Wenbin
    Li, Lina
    Shen, Liquan
    Le Meur, Olivier
    IEEE SIGNAL PROCESSING LETTERS, 2014, 21 (01) : 88 - 92
  • [34] An Iterative Co-Saliency Framework for RGBD Images
    Cong, Runmin
    Lei, Jianjun
    Fu, Huazhu
    Lin, Weisi
    Huang, Qingming
    Cao, Xiaochun
    Hou, Chunping
    IEEE TRANSACTIONS ON CYBERNETICS, 2019, 49 (01) : 233 - 246
  • [35] A model of co-saliency based audio attention
    XiaoMing Zhao
    Xinxin Wang
    De Cheng
    Multimedia Tools and Applications, 2020, 79 : 23045 - 23069
  • [36] Cluster-Based Co-Saliency Detection
    Fu, Huazhu
    Cao, Xiaochun
    Tu, Zhuowen
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2013, 22 (10) : 3766 - 3778
  • [37] Co-Saliency Detection within a Single Image
    Yu, Hongkai
    Zheng, Kang
    Fang, Jianwu
    Guo, Hao
    Feng, Wei
    Wang, Song
    THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 7509 - 7516
  • [38] Consistent image processing based on co-saliency
    Ren, Xiangnan
    Li, Jinjiang
    Hua, Zhen
    Jiang, Xinbo
    CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY, 2021, 6 (03) : 324 - 337
  • [39] Co-saliency Detection via Base Reconstruction
    Cao, Xiaochun
    Cheng, Yupeng
    Tao, Zhiqiang
    Fu, Huazhu
    PROCEEDINGS OF THE 2014 ACM CONFERENCE ON MULTIMEDIA (MM'14), 2014, : 997 - 1000
  • [40] ICNet: Intra-saliency Correlation Network for Co-Saliency Detection
    Jin, Wen-Da
    Xu, Jun
    Cheng, Ming-Ming
    Zhang, Yi
    Guo, Wei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33