A Characterization of Multiclass Learnability

Cited by: 10
Authors
Brukhim, Nataly [1 ]
Carmon, Daniel [2 ]
Dinur, Irit [3 ]
Moran, Shay [2 ,4 ]
Yehudayoff, Amir [2 ]
Affiliations
[1] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[2] Technion, Dept Math, Haifa, Israel
[3] Weizmann Inst Sci, Dept Comp Sci, Rehovot, Israel
[4] Technion, Dept Comp Sci, Haifa, Israel
DOI
10.1109/FOCS54457.2022.00093
CLC Number (Chinese Library Classification)
TP301 [Theory and Methods]
Discipline Classification Code
081202
Abstract
A seminal result in learning theory characterizes the PAC learnability of binary classes through the Vapnik-Chervonenkis dimension. Extending this characterization to the general multiclass setting has been open since the pioneering works on multiclass PAC learning in the late 1980s. This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014). The classical characterization of the binary case boils down to empirical risk minimization. In contrast, our characterization of the multiclass case involves a variety of algorithmic ideas; these include a natural setting we call list PAC learning. In the list learning setting, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions. Our second main result concerns the Natarajan dimension, which has been a central candidate for characterizing multiclass learnability. This dimension was introduced by Natarajan (1988) as a barrier for PAC learning. He further showed that it is the only barrier, provided that the number of labels is bounded. Whether the Natarajan dimension characterizes PAC learnability in general has been posed as an open question in several papers since. This work provides a negative answer: we construct a non-learnable class with Natarajan dimension 1. For the construction, we identify a fundamental connection between concept classes and topology (i.e., colorful simplicial complexes). We crucially rely on a deep and involved construction of hyperbolic pseudo-manifolds by Januszkiewicz and Świątkowski. It is interesting that hyperbolicity is directly related to learning problems that are difficult to solve although no obvious barriers exist. This is another demonstration of the fruitful links machine learning has with different areas in mathematics.
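For context, a hedged sketch of the combinatorial notion the abstract refers to; the symbols $\mathcal{H}$, $\mathcal{X}$, $\mathcal{Y}$, $S$, $f_0$, $f_1$ below are our own shorthand and are not taken from the record. A class $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ Natarajan-shatters a finite set $S \subseteq \mathcal{X}$ if there are two labelings $f_0, f_1 \colon S \to \mathcal{Y}$ with $f_0(x) \neq f_1(x)$ for every $x \in S$ such that

% Natarajan shattering (standard definition, sketched here, not quoted from the paper):
% every binary pattern over S is realized by some hypothesis that switches pointwise
% between the two witness labelings f_0 and f_1.
\[
\forall B \subseteq S \;\; \exists h \in \mathcal{H} :\qquad
h(x) = f_1(x) \ \text{for } x \in B,
\qquad
h(x) = f_0(x) \ \text{for } x \in S \setminus B,
\]

and the Natarajan dimension is the largest size of a Natarajan-shattered set. The DS dimension of Daniely and Shalev-Shwartz (2014) replaces the two witness labelings by a pseudo-cube condition (every function in a finite witness class restricted to $S$ must have, in each coordinate, a neighbor disagreeing with it exactly there), and in the abstract's list PAC learning setting the learner outputs a short list of labels per point and errs only when the true label falls outside the list.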
Pages: 943-955
Page count: 13