Learning a hyperplane regressor through a tight bound on the VC dimension

Cited by: 12
Authors
Jayadeva [1 ]
Chandra, Suresh [2 ]
Batra, Sanjit S. [3 ]
Sabharwal, Siddarth [1 ]
Affiliations
[1] Indian Inst Technol, Dept Elect Engn, Delhi, India
[2] Indian Inst Technol, Dept Math, Delhi, India
[3] Indian Inst Technol, Dept Comp Sci & Engn, Delhi, India
Keywords
Machine learning; SVM; Regression;
DOI
10.1016/j.neucom.2015.06.065
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we show how to learn a hyperplane regressor by minimizing a tight, or Theta, bound on its VC dimension. While minimizing the VC dimension with respect to the defining variables is an ill-posed and intractable problem, we propose a smooth, continuous, and differentiable function as a tight bound. Minimizing this tight bound yields the Minimal Complexity Machine (MCM) Regressor, and involves solving a simple linear programming problem. Experimental results show that on a number of benchmark datasets, the proposed approach yields regressors with error rates much lower than those obtained with conventional SVM regressors, while often using fewer support vectors. On some benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations. (C) 2015 Elsevier B.V. All rights reserved.
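The abstract states that the MCM Regressor is obtained by solving a simple linear programming problem. The paper's exact objective (the tight VC-dimension bound) is not reproduced in this record, so the sketch below only illustrates the general flavor of LP-based hyperplane regression: an epsilon-insensitive linear fit posed as a linear program over the weights, bias, and per-sample slacks. The function name `lp_hyperplane_regressor` and the `eps` parameter are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def lp_hyperplane_regressor(X, y, eps=0.05):
    """Illustrative sketch: fit w, b by minimizing the sum of slacks xi_i
    subject to |w.x_i + b - y_i| <= eps + xi_i, xi_i >= 0.
    NOT the MCM objective from the paper, which minimizes a tight bound
    on the VC dimension; this only shows the LP machinery involved."""
    n, d = X.shape
    # Decision vector: [w (d), b (1), xi (n)]; objective = sum of slacks.
    c = np.concatenate([np.zeros(d + 1), np.ones(n)])
    # Two linear inequalities per sample encode the absolute-value constraint:
    #   w.x_i + b - xi_i <= y_i + eps
    #  -w.x_i - b - xi_i <= -y_i + eps
    A_ub = np.vstack([
        np.hstack([X, np.ones((n, 1)), -np.eye(n)]),
        np.hstack([-X, -np.ones((n, 1)), -np.eye(n)]),
    ])
    b_ub = np.concatenate([y + eps, -y + eps])
    # w and b are free; slacks are nonnegative.
    bounds = [(None, None)] * (d + 1) + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:d], res.x[d]

# Usage: recover a noisy line y = 2x + 0.5.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = 2.0 * X[:, 0] + 0.5 + rng.normal(0, 0.05, size=50)
w, b = lp_hyperplane_regressor(X, y, eps=0.05)
```

A sparsity-inducing objective of this kind is what allows LP-based formulations to use few support vectors: only samples whose constraints are active at the optimum influence the solution.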
Pages: 1610-1616
Page count: 7
Related Papers
50 total
  • [31] Self-directed learning and its relation to the VC-dimension and to teacher-directed learning
    Ben-David, S
    Eiron, N
    MACHINE LEARNING, 1998, 33 (01) : 87 - 104
  • [32] Single Image Super Resolution Through Multi Extreme Learning Machine Regressor Fusion
    Wang, Xiang
    Gan, Zongliang
    Qi, Lina
    Chen, Changhong
    Liu, Feng
    PATTERN RECOGNITION (CCPR 2016), PT II, 2016, 663 : 133 - 146
  • [33] Solar energy prediction through machine learning models: A comparative analysis of regressor algorithms
    Nguyen, Huu Nam
    Tran, Quoc Thanh
    Ngo, Canh Tung
    Nguyen, Duc Dam
    Tran, Van Quan
    PLOS ONE, 2025, 20 (01):
  • [34] BOUNDS ON THE SAMPLE COMPLEXITY OF BAYESIAN LEARNING USING INFORMATION-THEORY AND THE VC DIMENSION
    HAUSSLER, D
    KEARNS, M
    SCHAPIRE, RE
    MACHINE LEARNING, 1994, 14 (01) : 83 - 113
  • [35] The MC-ELM: Learning an ELM-like network with minimum VC dimension
    Jayadeva
    Soman, Sumit
    Bhaya, Amit
    2015 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2015,
  • [36] Incremental learning and dimension selection through sleep
    Yamauchi, K
    NEURAL INFORMATION PROCESSING, 2004, 3316 : 489 - 495
  • [37] CONCEPTIONS OF LEARNING THROUGH LEARNING STYLES AND COGNITIVE DIMENSION IN VOCATIONAL EDUCATION
    Mohamad, Mimi Mohaffyza Binti
    Heong, Yee Mei
    Kiong, Tee Tze
    JOURNAL OF TECHNICAL EDUCATION AND TRAINING, 2014, 6 (01): : 32 - 41
  • [38] A Divide-and-Cooperate Machine Learning Model-Based RBF with Its VC Dimension Analysis
    Huang, Rongbo
    Dong, Jianwei
    Guo, Suixun
    Chen, Yanmei
    IEEE Access, 2020, 8 : 113414 - 113418
  • [40] VC-Dimension and Rademacher Averages: From Statistical Learning Theory to Sampling Algorithms Tutorial Outline
    Riondato, Matteo
    Upfal, Eli
    KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2015, : 2321 - 2322