Recognizing RGB Images by Learning from RGB-D Data

Cited by: 31
Authors
Chen, Lin [1 ]
Li, Wen [2 ]
Xu, Dong [2 ]
Affiliations
[1] Agcy Sci Technol & Res, Inst Infocomm Res, Singapore, Singapore
[2] Nanyang Technol Univ, Sch Comp Engn, Singapore, Singapore
DOI: 10.1109/CVPR.2014.184
CLC Number (Chinese Library Classification): TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
In this work, we propose a new framework for recognizing RGB images captured by conventional cameras by leveraging a set of labeled RGB-D data, from which depth features can additionally be extracted from the depth images. We formulate this task as a new unsupervised domain adaptation (UDA) problem, in which we aim to take advantage of the additional depth features in the source domain and also cope with the data distribution mismatch between the source and target domains. To effectively utilize the additional depth features, we seek two optimal projection matrices to map the samples from both domains into a common space by preserving as much as possible the correlations between the visual features and depth features. To effectively employ the training samples from the source domain for learning the target classifier, we reduce the data distribution mismatch by minimizing the Maximum Mean Discrepancy (MMD) criterion, which compares the data distributions for each type of feature in the common space. Based on the above two motivations, we propose a new SVM-based objective function to simultaneously learn the two projection matrices and the optimal target classifier, so that the source samples from different classes are well separated when using each type of feature in the common space. An efficient alternating optimization algorithm is developed to solve our new objective function. Comprehensive experiments on object recognition and gender recognition demonstrate the effectiveness of our proposed approach for recognizing RGB images by learning from RGB-D data.
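As an illustration of the distribution-matching step described in the abstract, the sketch below computes an empirical Maximum Mean Discrepancy with a linear kernel between projected source and target visual features. This is a minimal sketch, not the authors' implementation: the projection matrix P, the feature dimensions, and the function name linear_mmd are all hypothetical placeholders, and the data is random.

import numpy as np

def linear_mmd(source_feats, target_feats):
    """Empirical MMD with a linear kernel: squared Euclidean distance
    between the mean embeddings of the two samples."""
    mean_src = source_feats.mean(axis=0)
    mean_tgt = target_feats.mean(axis=0)
    diff = mean_src - mean_tgt
    return float(diff @ diff)

# Hypothetical setup: visual features from both domains are mapped into a
# common space by a projection matrix P (random here, for illustration only).
rng = np.random.default_rng(0)
d_visual, d_common = 4096, 128                 # assumed feature dimensions
P = rng.standard_normal((d_visual, d_common)) * 0.01

X_src = rng.standard_normal((200, d_visual))   # labeled source (RGB-D) visual features
X_tgt = rng.standard_normal((150, d_visual))   # unlabeled target (RGB) visual features

mmd_value = linear_mmd(X_src @ P, X_tgt @ P)
print(f"MMD between projected source and target features: {mmd_value:.4f}")

In the formulation summarized above, a discrepancy term of this kind for each feature type is minimized jointly with the SVM-based classification objective via alternating optimization; the sketch isolates only the MMD computation itself.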
Pages: 1418 - 1425
Number of pages: 8
Related Papers
50 records in total
  • [1] Visual Recognition in RGB Images and Videos by Learning from RGB-D Data
    Li, Wen
    Chen, Lin
    Xu, Dong
    Van Gool, Luc
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2018, 40 (08) : 2030 - 2036
  • [2] Domain adaptation from RGB-D to RGB images
    Li, Xiao
    Fang, Min
    Zhang, Ju-Jie
    Wu, Jinqiao
    SIGNAL PROCESSING, 2017, 131 : 27 - 35
  • [3] Learning Coupled Classifiers with RGB images for RGB-D object recognition
    Li, Xiao
    Fang, Min
    Zhang, Ju-Jie
    Wu, Jinqiao
    PATTERN RECOGNITION, 2017, 61 : 433 - 446
  • [4] From RGB-D Images to RGB Images: Single Labeling for Mining Visual Models
    Zhang, Quanshi
    Song, Xuan
    Shao, Xiaowei
    Zhao, Huijing
    Shibasaki, Ryosuke
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2015, 6 (02)
  • [5] Child Action Recognition in RGB and RGB-D Data
    Turarova, Aizada
    Zhanatkyzy, Aida
    Telisheva, Zhansaule
    Sabyrov, Arman
    Sandygulova, Anara
    HRI'20: COMPANION OF THE 2020 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2020, : 491 - 492
  • [6] Utilizing Relevant RGB-D Data to Help Recognize RGB Images in the Target Domain
    Gao, Depeng
    Liu, Jiafeng
    Wu, Rui
    Cheng, Dansong
    Fan, Xiaopeng
    Tang, Xianglong
    INTERNATIONAL JOURNAL OF APPLIED MATHEMATICS AND COMPUTER SCIENCE, 2019, 29 (03) : 611 - 621
  • [7] Unsupervised Segmentation of RGB-D Images
    Deng, Zhuo
    Latecki, Longin Jan
    COMPUTER VISION - ACCV 2014, PT III, 2015, 9005 : 423 - 435
  • [8] Incremental Registration of RGB-D Images
    Dryanovski, Ivan
    Jaramillo, Carlos
    Xiao, Jizhong
    2012 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2012, : 1685 - 1690
  • [9] Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images
    Wang, Xiaoqiang
    Zhu, Lei
    Tang, Siliang
    Fu, Huazhu
    Li, Ping
    Wu, Fei
    Yang, Yi
    Zhuang, Yueting
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31 : 1107 - 1119
  • [10] Extracting Sharp Features from RGB-D Images
    Cao, Y-P.
    Ju, T.
    Xu, J.
    Hu, S-M.
    COMPUTER GRAPHICS FORUM, 2017, 36 (08) : 138 - 152