Learning a hyperplane regressor through a tight bound on the VC dimension

Cited by: 12
Authors
Jayadeva [1 ]
Chandra, Suresh [2 ]
Batra, Sanjit S. [3 ]
Sabharwal, Siddarth [1 ]
Affiliations
[1] Indian Inst Technol, Dept Elect Engn, Delhi, India
[2] Indian Inst Technol, Dept Math, Delhi, India
[3] Indian Inst Technol, Dept Comp Sci & Engn, Delhi, India
Keywords
Machine learning; SVM; Regression;
DOI
10.1016/j.neucom.2015.06.065
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we show how to learn a hyperplane regressor by minimizing a tight, or Θ, bound on its VC dimension. While minimizing the VC dimension with respect to the defining variables is an ill-posed and intractable problem, we propose a smooth, continuous, and differentiable function for a tight bound. Minimizing this tight bound yields the Minimal Complexity Machine (MCM) Regressor, and involves solving a simple linear programming problem. Experimental results show that on a number of benchmark datasets, the proposed approach yields regressors with error rates much lower than those obtained with conventional SVM regressors, while often using fewer support vectors. On some benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations. (C) 2015 Elsevier B.V. All rights reserved.
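The abstract states that fitting the MCM regressor reduces to a simple linear program. As a rough illustration of what "learning a hyperplane regressor via an LP" can look like, the sketch below fits w and b by minimizing ||w||_1 plus C times the epsilon-insensitive slacks with `scipy.optimize.linprog`. This is a generic L1/epsilon-insensitive LP regression, NOT the paper's exact VC-dimension bound; the names `C`, `eps`, and `lp_regressor` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def lp_regressor(X, y, C=10.0, eps=0.1):
    """Fit a hyperplane y ~ w.x + b by linear programming.

    Minimizes ||w||_1 + C * sum(slack_i) subject to
    |w.x_i + b - y_i| <= eps + slack_i,  slack_i >= 0.
    Illustrative sketch only -- not the MCM bound from the paper.
    """
    n, d = X.shape
    # LP variables: [u (d), v (d), b (1), slack (n)], with w = u - v
    # so that ||w||_1 = sum(u) + sum(v) when u, v >= 0.
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    ones = np.ones((n, 1))
    I = np.eye(n)
    # Two-sided epsilon-tube constraints, written as A_ub @ z <= b_ub:
    #   (w.x_i + b) - y_i <= eps + slack_i
    #  -(w.x_i + b) + y_i <= eps + slack_i
    A_ub = np.block([[X, -X, ones, -I],
                     [-X, X, -ones, -I]])
    b_ub = np.concatenate([y + eps, -y + eps])
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    u, v = res.x[:d], res.x[d:2 * d]
    return u - v, res.x[2 * d]

# Usage: recover the hyperplane y = 2*x + 1 from noiseless samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = 2 * X[:, 0] + 1
w, b = lp_regressor(X, y, C=10.0, eps=0.01)
```

Splitting w into nonnegative parts u and v is the standard trick for expressing an L1 objective in an LP; the paper's actual MCM formulation additionally encodes its VC-dimension bound in the objective and constraints.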
Pages: 1610-1616 (7 pages)
Related papers (50 records in total)
  • [1] Learning a hyperplane classifier by minimizing an exact bound on the VC dimension
    Jayadeva
    NEUROCOMPUTING, 2015, 149 : 683 - 689
  • [3] Designing nonlinear classifiers through minimizing VC dimension bound
    Xu, JH
    ADVANCES IN NEURAL NETWORKS - ISNN 2005, PT 1, PROCEEDINGS, 2005, 3496 : 900 - 905
  • [4] An improved VC dimension bound for sparse polynomials
    Schmitt, M
    LEARNING THEORY, PROCEEDINGS, 2004, 3120 : 393 - 407
  • [5] A tight bound on the projective dimension of four quadrics
    Huneke, Craig
    Mantero, Paolo
    McCullough, Jason
    Seceleanu, Alexandra
    JOURNAL OF PURE AND APPLIED ALGEBRA, 2018, 222 (09) : 2524 - 2551
  • [6] QMCM: Minimizing Vapnik's bound on the VC dimension
    Jayadeva
    Soman, Sumit
    Pant, Himanshu
    Sharma, Mayank
    NEUROCOMPUTING, 2020, 399 : 352 - 360
  • [7] Tight bounds for the VC-dimension of piecewise polynomial networks
    Sakurai, A
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 11, 1999, 11 : 323 - 329
  • [8] Lower bound on VC-dimension by local shattering
    Erlich, Y
    Chazan, D
    Petrack, S
    Levy, A
    NEURAL COMPUTATION, 1997, 9 (04) : 771 - 776
  • [9] A tight bound on concept learning
    Takahashi, H
    Gu, HZ
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 1998, 9 (06): : 1191 - 1202
  • [10] Tight lower bounds on the VC-dimension of geometric set systems
    Csikós, Monika
    Mustafa, Nabil H.
    Kupavskii, Andrey
    JOURNAL OF MACHINE LEARNING RESEARCH, 2019, 20