Learning a hyperplane regressor through a tight bound on the VC dimension

Cited by: 12
Authors
Jayadeva [1 ]
Chandra, Suresh [2 ]
Batra, Sanjit S. [3 ]
Sabharwal, Siddarth [1 ]
Affiliations
[1] Indian Inst Technol, Dept Elect Engn, Delhi, India
[2] Indian Inst Technol, Dept Math, Delhi, India
[3] Indian Inst Technol, Dept Comp Sci & Engn, Delhi, India
Keywords
Machine learning; SVM; Regression;
DOI
10.1016/j.neucom.2015.06.065
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we show how to learn a hyperplane regressor by minimizing a tight, or Θ, bound on its VC dimension. While minimizing the VC dimension with respect to the defining variables is an ill-posed and intractable problem, we propose a smooth, continuous, and differentiable function for a tight bound. Minimizing a tight bound yields the Minimal Complexity Machine (MCM) Regressor, and involves solving a simple linear programming problem. Experimental results show that on a number of benchmark datasets, the proposed approach yields regressors with error rates much lower than those obtained with conventional SVM regressors, while often using fewer support vectors. On some benchmark datasets, the number of support vectors is less than one-tenth the number used by SVMs, indicating that the MCM does indeed learn simpler representations. (C) 2015 Elsevier B.V. All rights reserved.
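The abstract notes that fitting the MCM regressor reduces to a simple linear program. Below is a minimal, hedged sketch of an LP-based hyperplane regressor in that spirit, written with `scipy.optimize.linprog`. The variable layout (weights `u`, bias `v`, a bound variable `h` on the magnitude of the hyperplane outputs, slacks `q`), the ε-insensitive fit constraints, and the parameters `C` and `eps` are illustrative assumptions; the paper's exact MCM objective and constraints are given in the article itself.

```python
# Hedged sketch of an LP-based hyperplane regressor, loosely in the
# spirit of the MCM described in the abstract. The exact MCM
# formulation is in the paper; this variable layout and objective are
# illustrative assumptions only.
import numpy as np
from scipy.optimize import linprog

def lp_hyperplane_regressor(X, y, C=10.0, eps=0.1):
    m, d = X.shape
    # decision vector z = [u (d), v, h, q (m)]
    n = d + 2 + m
    c = np.zeros(n)
    c[d + 1] = 1.0        # minimize h ...
    c[d + 2:] = C         # ... plus C * sum of slacks q_i

    rows, b = [], []
    for i in range(m):
        xi = X[i]
        # eps-insensitive fit:  y_i - (u.x_i + v) <= eps + q_i
        r = np.zeros(n); r[:d] = -xi; r[d] = -1.0; r[d + 2 + i] = -1.0
        rows.append(r); b.append(eps - y[i])
        #                      (u.x_i + v) - y_i <= eps + q_i
        r = np.zeros(n); r[:d] = xi; r[d] = 1.0; r[d + 2 + i] = -1.0
        rows.append(r); b.append(eps + y[i])
        # h bounds the hyperplane output magnitude: |u.x_i + v| <= h
        r = np.zeros(n); r[:d] = xi; r[d] = 1.0; r[d + 1] = -1.0
        rows.append(r); b.append(0.0)
        r = np.zeros(n); r[:d] = -xi; r[d] = -1.0; r[d + 1] = -1.0
        rows.append(r); b.append(0.0)

    # u and v are free; h and the slacks q_i are nonnegative
    bounds = [(None, None)] * (d + 1) + [(0, None)] * (1 + m)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(b),
                  bounds=bounds, method="highs")
    u, v = res.x[:d], res.x[d]
    return u, v

# tiny usage example on a noiseless line y = 2x + 1
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = 2.0 * X[:, 0] + 1.0
u, v = lp_hyperplane_regressor(X, y)
```

Because everything (objective and constraints) is linear in the decision variables, an off-the-shelf LP solver suffices; this is the structural point the abstract makes about the MCM, in contrast to the quadratic program solved by a conventional SVM regressor.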
Pages: 1610-1616 (7 pages)
Related papers
50 items in total
  • [22] Nearly-tight VC-dimension and Pseudodimension Bounds for Piecewise Linear Neural Networks
    Bartlett, Peter L.
    Harvey, Nick
    Liaw, Christopher
    Mehrabian, Abbas
    JOURNAL OF MACHINE LEARNING RESEARCH, 2019, 20 : 1 - 17
  • [23] Tight Lower Bound of Generalization Error in Ensemble Learning
    Uchida, Masato
    2014 JOINT 7TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS (SCIS) AND 15TH INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (ISIS), 2014, : 1130 - 1133
  • [24] The VC-Dimension of SQL Queries and Selectivity Estimation through Sampling
    Riondato, Matteo
    Akdere, Mert
    Cetintemel, Ugur
    Zdonik, Stanley B.
    Upfal, Eli
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, PT II, 2011, 6912 : 661 - 676
  • [25] A Tight Lower Bound Instance for k-means++ in Constant Dimension
    Bhattacharya, Anup
    Jaiswal, Ragesh
    Ailon, Nir
    THEORY AND APPLICATIONS OF MODELS OF COMPUTATION (TAMC 2014), 2014, 8402 : 7 - 22
  • [26] PAC learnability versus VC dimension: a footnote to a basic result of statistical learning
    Pestov, Vladimir
    2011 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2011, : 1141 - 1145
  • [27] BOUNDING THE ORDER OF A GRAPH USING ITS DIAMETER AND METRIC DIMENSION: A STUDY THROUGH TREE DECOMPOSITIONS AND VC DIMENSION
    Beaudou, Laurent
    Dankelmann, Peter
    Foucaud, Florent
    Henning, Michael A.
    Mary, Arnaud
    Parreau, Aline
    SIAM JOURNAL ON DISCRETE MATHEMATICS, 2018, 32 (02) : 902 - 918
  • [28] A VC-dimension-based Outer Bound on the Zero-Error Capacity of the Binary Adder Channel
    Ordentlich, Or
    Shayevitz, Ofer
    2015 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY (ISIT), 2015, : 2366 - 2370
  • [29] Tight Dimension Independent Lower Bound on the Expected Convergence Rate for Diminishing Step Sizes in SGD
    Phuong Ha Nguyen
    Nguyen, Lam M.
    van Dijk, Marten
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [30] Self-Directed Learning and Its Relation to the VC-Dimension and to Teacher-Directed Learning
    Shai Ben-David
    Nadav Eiron
    Machine Learning, 1998, 33 : 87 - 104