ON MULTI-VIEW LEARNING WITH ADDITIVE MODELS

Cited: 23
Authors
Culp, Mark [1 ]
Michailidis, George [2 ]
Johnson, Kjell [3 ]
Affiliations
[1] W Virginia Univ, Dept Stat, Morgantown, WV 26506 USA
[2] Univ Michigan, Dept Stat, Ann Arbor, MI 48109 USA
[3] Pfizer Global Res & Dev, Ann Arbor, MI 48105 USA
Source
ANNALS OF APPLIED STATISTICS, 2009, Vol. 3, No. 1
Keywords
Multi-view learning; generalized additive model; semi-supervised learning; smoothing; model selection; regression
DOI
10.1214/08-AOAS202
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Subject Classification Codes
020208; 070103; 0714
Abstract
In many scientific settings data can be naturally partitioned into variable groupings called views. Common examples include environmental (1st view) and genetic information (2nd view) in ecological applications, and chemical (1st view) and biological (2nd view) data in drug discovery. Multi-view data also occur in text analysis and proteomics applications where one view consists of a graph with observations as the vertices and a weighted measure of pairwise similarity between observations as the edges. Further, in several of these applications the observations can be partitioned into two sets, one where the response is observed (labeled) and the other where the response is not (unlabeled). The problem of simultaneously addressing viewed data and incorporating unlabeled observations in training is referred to as multi-view transductive learning. In this work we introduce and study a comprehensive generalized fixed point additive modeling framework for multi-view transductive learning, where any view is represented by a linear smoother. The problem of view selection is discussed using a generalized Akaike Information Criterion, which provides an approach for testing the contribution of each view. An efficient implementation is provided for fitting these models with both backfitting and local-scoring type algorithms adjusted to semi-supervised graph-based learning. The proposed technique is assessed on both synthetic and real data sets and is shown to be competitive with state-of-the-art co-training and graph-based techniques.
Pages: 292-318
Page count: 27