ON MULTI-VIEW LEARNING WITH ADDITIVE MODELS

Cited by: 23
Authors
Culp, Mark [1 ]
Michailidis, George [2 ]
Johnson, Kjell [3 ]
Affiliations
[1] W Virginia Univ, Dept Stat, Morgantown, WV 26506 USA
[2] Univ Michigan, Dept Stat, Ann Arbor, MI 48109 USA
[3] Pfizer Global Res & Dev, Ann Arbor, MI 48105 USA
Source
ANNALS OF APPLIED STATISTICS | 2009, Vol. 3, No. 1
Keywords
Multi-view learning; generalized additive model; semi-supervised learning; smoothing; model selection; regression
DOI
10.1214/08-AOAS202
Chinese Library Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics]
Discipline Codes
020208; 070103; 0714
Abstract
In many scientific settings data can be naturally partitioned into variable groupings called views. Common examples include environmental (1st view) and genetic information (2nd view) in ecological applications, and chemical (1st view) and biological (2nd view) data in drug discovery. Multi-view data also occur in text analysis and proteomics applications where one view consists of a graph with observations as the vertices and a weighted measure of pairwise similarity between observations as the edges. Further, in several of these applications the observations can be partitioned into two sets, one where the response is observed (labeled) and the other where the response is not (unlabeled). The problem of simultaneously addressing viewed data and incorporating unlabeled observations in training is referred to as multi-view transductive learning. In this work we introduce and study a comprehensive generalized fixed-point additive modeling framework for multi-view transductive learning, where any view is represented by a linear smoother. The problem of view selection is discussed using a generalized Akaike Information Criterion, which provides an approach for testing the contribution of each view. An efficient implementation is provided for fitting these models with both backfitting and local-scoring type algorithms adjusted to semi-supervised graph-based learning. The proposed technique is assessed on both synthetic and real data sets and is shown to be competitive with state-of-the-art co-training and graph-based techniques.
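To make the abstract's framework concrete, the following Python sketch shows a minimal version of backfitting over view-specific linear smoothers with fixed-point imputation of unlabeled responses. It is not the authors' implementation: the Gaussian-kernel (Nadaraya-Watson) smoothers, the imputation scheme, and all function names and parameters here are illustrative assumptions.

import numpy as np

def smoother_matrix(X, bandwidth=1.0):
    # Row-normalized Gaussian kernel: a linear smoother built over all
    # observations, labeled and unlabeled alike (hypothetical choice).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / (2.0 * bandwidth ** 2))
    return K / K.sum(axis=1, keepdims=True)

def multiview_backfit(views, y_labeled, n_iter=100, tol=1e-6):
    # views: list of (n, p_v) arrays over the same n observations, with the
    # first len(y_labeled) observations labeled.
    n = views[0].shape[0]
    n_l = len(y_labeled)
    S = [smoother_matrix(X) for X in views]   # one linear smoother per view
    f = [np.zeros(n) for _ in views]          # one additive component per view
    y = np.zeros(n)
    y[:n_l] = y_labeled
    for _ in range(n_iter):
        f_prev = [fv.copy() for fv in f]
        y[n_l:] = sum(f)[n_l:]                # fixed point: impute unlabeled responses
        for v in range(len(views)):
            partial = y - (sum(f) - f[v])     # partial residuals for view v
            f[v] = S[v] @ partial             # backfitting smoother update
        if max(np.abs(fv - fp).max() for fv, fp in zip(f, f_prev)) < tol:
            break
    return sum(f)                             # fitted values on all n observations

For instance, multiview_backfit([X_chem, X_bio], y_labeled) would return transductive fits from a chemical and a biological view in a drug-discovery setting; the paper's local-scoring extension for non-Gaussian responses and its generalized AIC for view selection are not reproduced in this sketch.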
Pages: 292-318
Page count: 27