Regularized Semi-supervised Latent Dirichlet Allocation for Visual Concept Learning

Cited: 0
Authors
Zhuang, Liansheng [1 ,2 ]
She, Lanbo [2 ]
Huang, Jingjing [2 ]
Luo, Jiebo [3 ]
Yu, Nenghai [1 ,2 ]
Affiliations
[1] USTC, MOE MS Keynote Lab MCC, Hefei 230027, Peoples R China
[2] USTC, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
[3] Eastman Kodak Co, Kodak Res Labs, Rochester, NY 14650 USA
Funding
National Natural Science Foundation of China; National High-tech R&D Program of China (863 Program)
Keywords
Visual Concept Learning; Latent Dirichlet Allocation; Semi-supervised Learning
DOI
Not available
CLC Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
Topic models are a popular tool for visual concept learning, but current topic models are either unsupervised or fully supervised. Although large numbers of labeled images can significantly improve the performance of topic models, they are costly to acquire, while billions of unlabeled images are freely available on the Internet. In this paper, to exploit both the limited labeled training images and the abundant unlabeled images, we propose a novel technique called regularized Semi-supervised Latent Dirichlet Allocation (r-SSLDA) for learning visual concept classifiers. Rather than introducing a new topic model, we seek an efficient way to train existing topic models in a semi-supervised manner: r-SSLDA combines the semi-supervised properties of the data and a supervised topic model within a single regularization framework. Experiments on Caltech 101 and Caltech 256 show that r-SSLDA outperforms unsupervised LDA and achieves performance competitive with fully supervised LDA, while sharply reducing the number of labeled images required.
Pages: 403 / +
Page count: 3
Related Papers
50 records
  • [21] Graph Regularized Variational Ladder Networks for Semi-Supervised Learning
    Hu, Cong
    Song, Xiao-Ning
    IEEE ACCESS, 2020, 8 : 206280 - 206288
  • [22] INFERENCE IN SUPERVISED LATENT DIRICHLET ALLOCATION
    Lakshminarayanan, Balaji
    Raich, Raviv
    2011 IEEE INTERNATIONAL WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP), 2011,
  • [23] Latent Space Virtual Adversarial Training for Supervised and Semi-Supervised Learning
    Osada, Genki
    Ahsan, Budrul
    Prasad Bora, Revoti
    Nishide, Takashi
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2022, E105D (03) : 667 - 678
  • [24] Lagrangian Regularized Twin Extreme Learning Machine for Supervised and Semi-Supervised Classification
    Ma, Jun
    Yu, Guolin
    SYMMETRY-BASEL, 2022, 14 (06):
  • [25] Regularized semi-supervised classification on manifold
    Zhao, LW
    Luo, SW
    Zhao, YC
    Liao, LZ
    Wang, ZH
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PROCEEDINGS, 2006, 3918 : 20 - 29
  • [26] Regularized Boost for Semi-supervised Ranking
    Miao, Zhigao
    Wang, Juan
    Zhou, Aimin
    Tang, Ke
    PROCEEDINGS OF THE 18TH ASIA PACIFIC SYMPOSIUM ON INTELLIGENT AND EVOLUTIONARY SYSTEMS, VOL 1, 2015, : 643 - 651
  • [27] Distribution-free Bayesian regularized learning framework for semi-supervised learning
    Ma, Jun
    Yu, Guolin
    NEURAL NETWORKS, 2024, 174
  • [28] Supervised and semi-supervised twin parametric-margin regularized extreme learning machine
    Ma, Jun
    PATTERN ANALYSIS AND APPLICATIONS, 2020, 23 : 1603 - 1626
  • [29] A Regularized Maximum Figure-of-Merit (rMFoM) Approach to Supervised and Semi-Supervised Learning
    Ma, Chengyuan
    Lee, Chin-Hui
    IEEE TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2011, 19 (05): : 1316 - 1327
  • [30] Semi-supervised learning using constrained laplacian regularized least squares
    Sousa, Celso A. R.
    2024 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2024, 2024,