Deep Self-Taught Learning for Weakly Supervised Object Localization

Cited by: 125
Authors
Jie, Zequn [1 ,2 ]
Wei, Yunchao [1 ]
Jin, Xiaojie [1 ]
Feng, Jiashi [1 ]
Liu, Wei [2 ]
Affiliations
[1] Natl Univ Singapore, Singapore, Singapore
[2] Tencent AI Lab, Singapore, Singapore
Keywords
DOI
10.1109/CVPR.2017.457
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most existing weakly supervised localization (WSL) approaches learn detectors by finding positive bounding boxes based on features learned with image-level supervision. However, those features do not contain spatial-location information and usually provide poor-quality positive samples for training a detector. To overcome this issue, we propose a deep self-taught learning approach in which the detector learns object-level features that are reliable for acquiring tight positive samples and afterwards re-trains itself on them. Consequently, the detector progressively improves its detection ability and localizes more informative positive samples. To implement such self-taught learning, we propose a seed sample acquisition method via image-to-object transferring and dense subgraph discovery to find reliable positive samples for initializing the detector. An online supportive sample harvesting scheme is further proposed to dynamically select the most confident tight positive samples and train the detector in a mutually boosting way. To prevent the detector from being trapped in poor optima due to overfitting, we propose a new relative improvement measure of predicted CNN scores to guide the self-taught learning process. Extensive experiments on PASCAL 2007 and 2012 show that our approach outperforms state-of-the-art methods, strongly validating its effectiveness.
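The relative-improvement criterion mentioned in the abstract lends itself to a short illustration. The Python sketch below ranks region proposals by the relative (rather than absolute) change of their detector scores between two training rounds and keeps the top ones as new positives. This is only a rough reading of the idea under stated assumptions: the function names (relative_improvement, select_supportive_samples) and the exact formula are hypothetical and are not taken from the paper or any released code.

```python
import numpy as np

def relative_improvement(prev_scores, curr_scores, eps=1e-8):
    """Relative change of per-proposal detector scores between two
    consecutive self-taught training rounds (hypothetical formulation)."""
    prev = np.asarray(prev_scores, dtype=np.float64)
    curr = np.asarray(curr_scores, dtype=np.float64)
    return (curr - prev) / (prev + eps)

def select_supportive_samples(prev_scores, curr_scores, top_k=1):
    """Keep the proposals whose scores improved the most in relative
    terms, as a coarse stand-in for supportive sample harvesting."""
    gain = relative_improvement(prev_scores, curr_scores)
    order = np.argsort(-gain)  # indices sorted by descending relative gain
    return order[:top_k]

# Toy usage: three proposals scored in two consecutive rounds.
prev = [0.20, 0.55, 0.40]
curr = [0.45, 0.60, 0.42]
print(select_supportive_samples(prev, curr, top_k=1))  # -> [0]
```

Ranking by relative rather than absolute score change keeps initially low-scoring but fast-improving proposals competitive with the already high-scoring seeds, which appears to be the intuition behind using this signal to keep the detector from overfitting to its initial positives.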
Pages: 4294-4302
Number of pages: 9