Efficient Video Object Co-Localization With Co-Saliency Activated Tracklets

Cited: 28
Authors
Jerripothula, Koteswar Rao [1 ]
Cai, Jianfei [2 ]
Yuan, Junsong [3 ]
Affiliations
[1] Graphic Era Univ, Dehra Dun 248002, Uttarakhand, India
[2] Nanyang Technol Univ, Singapore 639798, Singapore
[3] SUNY Buffalo, Buffalo, NY 14260 USA
Funding
National Research Foundation, Singapore
Keywords
Tracklets; video; co-localization; co-saliency; segmentation
DOI
10.1109/TCSVT.2018.2805811
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Codes
0808; 0809
Abstract
Video object co-localization is the task of jointly localizing common visual objects across videos. Owing to the large variations both across videos and within each video, it is quite challenging to identify and track the common objects jointly. Unlike previous joint frameworks that rely on a large number of bounding-box proposals to attack the problem, we propose to leverage co-saliency activated tracklets to address it efficiently. To highlight the common object regions, we first explore inter-video commonness, intra-video commonness, and motion saliency to generate co-saliency maps for a small number of selected key frames at regular intervals. Object proposals with high objectness and co-saliency scores in those frames are then tracked across each interval to build tracklets. Finally, the best tube for a video is obtained by selecting the optimal tracklet from each interval with the help of confidence and smoothness constraints. Experimental results on the benchmark YouTube-Objects dataset show that the proposed method outperforms the state-of-the-art methods in terms of both accuracy and speed under weakly supervised and unsupervised settings. Moreover, noticing that the existing benchmark dataset lacks sufficient annotations for object localization (only one annotated frame per video), we further annotate more than 15k frames of the YouTube videos and develop a new benchmark dataset for video co-localization.
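The final step described in the abstract, picking one tracklet per interval so that the resulting tube balances per-tracklet confidence against smoothness between adjacent tracklets, is a Viterbi-style dynamic program. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: `select_best_tube`, `box_iou`, and the toy inputs are hypothetical, and in the paper the unary term would come from objectness and co-saliency scores while the pairwise term would link adjacent tracklets.

```python
import numpy as np

def box_iou(a, b):
    """IoU between two (x1, y1, x2, y2) boxes; used here as the smoothness
    link between one tracklet and its successor (an assumed, simplified term)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def select_best_tube(tracklets, confidence, smoothness):
    """Pick one tracklet per interval via Viterbi-style dynamic programming,
    maximizing total confidence plus smoothness between adjacent choices.

    tracklets:  tracklets[t] = list of candidate tracklets in interval t
    confidence: confidence[t][i] = unary score of candidate i in interval t
    smoothness: smoothness(prev, cur) = pairwise link score
    """
    score = [np.asarray(confidence[0], dtype=float)]  # best score ending at each candidate
    back = []                                         # argmax pointers for backtracking
    for t in range(1, len(tracklets)):
        prev = score[-1]
        cur = np.empty(len(tracklets[t]))
        ptr = np.empty(len(tracklets[t]), dtype=int)
        for i, cand in enumerate(tracklets[t]):
            link = prev + np.array([smoothness(p, cand) for p in tracklets[t - 1]])
            ptr[i] = int(np.argmax(link))
            cur[i] = link[ptr[i]] + confidence[t][i]
        score.append(cur)
        back.append(ptr)
    # Recover the optimal path from the best final candidate.
    path = [int(np.argmax(score[-1]))]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]  # chosen tracklet index per interval

if __name__ == "__main__":
    # Toy run: each tracklet is summarized by a single box for the link term.
    tracklets = [
        [(10, 10, 50, 50), (200, 200, 240, 240)],
        [(12, 12, 52, 52), (205, 205, 245, 245)],
        [(15, 15, 55, 55), (90, 90, 130, 130)],
    ]
    confidence = [[0.9, 0.3], [0.8, 0.4], [0.7, 0.6]]
    print(select_best_tube(tracklets, confidence, box_iou))  # -> [0, 0, 0]
```

Because each interval contributes only a handful of co-saliency activated tracklets rather than dense per-frame proposal chains, this selection step stays cheap, which is consistent with the efficiency claim in the abstract.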
Pages: 744-755
Page count: 12
Related Papers
50 records in total (entries [41]-[50] shown)
  • [41] Co-Saliency Detection via Similarity-Based Saliency Propagation
    Ge, Chenjie
    Fu, Keren
    Li, Yijun
    Yang, Jie
    Shi, Pengfei
    Bai, Li
    2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015: 1845 - 1849
  • [42] Co-Saliency Detection With Co-Attention Fully Convolutional Network
    Gao, Guangshuai
    Zhao, Wenting
    Liu, Qingjie
    Wang, Yunhong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (03) : 877 - 889
  • [43] Saliency and Co-Saliency Detection by Low-Rank Multiscale Fusion
    Huang, Rui
    Feng, Wei
    Sun, Jizhou
    2015 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO (ICME), 2015
  • [44] The Accuracy of Co-Localization
    Varndell, I. M.
    Hennessy, R. J.
    Anoyrkatis, S. C.
    Polak, J. M.
    HISTOCHEMICAL JOURNAL, 1985, 17 (07): 835 - 835
  • [45] Re-Thinking the Relations in Co-Saliency Detection
    Tang, Lv
    Li, Bo
    Kuang, Senyun
    Song, Mofei
    Ding, Shouhong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (08) : 5453 - 5466
  • [46] Self-supervised image co-saliency detection
    Liu, Yan
    Li, Tengpeng
    Wu, Yang
    Song, Huihui
    Zhang, Kaihua
    COMPUTERS & ELECTRICAL ENGINEERING, 2023, 105
  • [47] Co-saliency Detection via Looking Deep and Wide
    Zhang, Dingwen
    Han, Junwei
    Li, Chao
    Wang, Jingdong
    2015 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2015: 2994 - 3002
  • [48] Unsupervised object discovery and co-localization by deep descriptor transformation
    Wei, Xiu-Shen
    Zhang, Chen-Lin
    Wu, Jianxin
    Shen, Chunhua
    Zhou, Zhi-Hua
    PATTERN RECOGNITION, 2019, 88 : 113 - 126
  • [49] Low-rank weighted co-saliency detection via efficient manifold ranking
    Li, Tengpeng
    Song, Huihui
    Zhang, Kaihua
    Liu, Qingshan
    Lian, Wei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (15) : 21309 - 21324
  • [50] Co-Saliency Detection via Hierarchical Consistency Measure
    Zhang, Yonghua
    Li, Liang
    Cong, Runmin
    Guo, Xiaojie
    Xu, Hui
    Zhang, Jiawan
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018