A Characterization of Multiclass Learnability

Cited by: 10
Authors
Brukhim, Nataly [1 ]
Carmon, Daniel [2 ]
Dinur, Irit [3 ]
Moran, Shay [2 ,4 ]
Yehudayoff, Amir [2 ]
Affiliations
[1] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[2] Technion, Dept Math, Haifa, Israel
[3] Weizmann Inst Sci, Dept Comp Sci, Rehovot, Israel
[4] Technion, Dept Comp Sci, Haifa, Israel
DOI
10.1109/FOCS54457.2022.00093
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Classification Code: 081202
Abstract
A seminal result in learning theory characterizes the PAC learnability of binary classes through the Vapnik-Chervonenkis dimension. Extending this characterization to the general multiclass setting has been open since the pioneering works on multiclass PAC learning in the late 1980s. This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014). The classical characterization of the binary case boils down to empirical risk minimization. In contrast, our characterization of the multiclass case involves a variety of algorithmic ideas; these include a natural setting we call list PAC learning. In the list learning setting, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions. Our second main result concerns the Natarajan dimension, which has been a central candidate for characterizing multiclass learnability. This dimension was introduced by Natarajan (1988) as a barrier for PAC learning. He further showed that it is the only barrier, provided that the number of labels is bounded. Whether the Natarajan dimension characterizes PAC learnability in general has been posed as an open question in several papers since. This work provides a negative answer: we construct a non-learnable class with Natarajan dimension 1. For the construction, we identify a fundamental connection between concept classes and topology (i.e., colorful simplicial complexes). We crucially rely on a deep and involved construction of hyperbolic pseudo-manifolds by Januszkiewicz and Świątkowski. It is interesting that hyperbolicity is directly related to learning problems that are difficult to solve although no obvious barriers exist. This is another demonstration of the fruitful links machine learning has with different areas in mathematics.
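For a finite hypothesis class over a finite domain, the Natarajan dimension discussed in the abstract can be computed by exhaustive search directly from Natarajan's 1988 definition: a set S is N-shattered if two labelings that disagree on every point of S can be "mixed" arbitrarily by hypotheses in the class. The sketch below is purely illustrative (the function names `natarajan_shatters` and `natarajan_dim` are my own, and the search is exponential, so it is only practical for toy classes):

```python
from itertools import combinations, product

def natarajan_shatters(hypotheses, S):
    """Return True if S is Natarajan-shattered by `hypotheses`:
    there exist labelings f0, f1 with f0(x) != f1(x) for every x in S
    such that for each subset T of S, some hypothesis agrees with f0
    on T and with f1 on S \\ T. Hypotheses are dicts: point -> label."""
    S = list(S)
    n = len(S)
    # Restrict each hypothesis to S; only these patterns matter.
    patterns = {tuple(h[x] for x in S) for h in hypotheses}
    labels = sorted({v for p in patterns for v in p})
    for f0 in product(labels, repeat=n):
        for f1 in product(labels, repeat=n):
            if any(a == b for a, b in zip(f0, f1)):
                continue  # the two witnesses must disagree on every point
            # Every mixture (f0 on T, f1 off T) must be realized.
            if all(
                tuple(f0[i] if (mask >> i) & 1 else f1[i] for i in range(n))
                in patterns
                for mask in range(1 << n)
            ):
                return True
    return False

def natarajan_dim(hypotheses, domain):
    """Brute-force Natarajan dimension of a finite multiclass class."""
    d = 0
    for size in range(1, len(domain) + 1):
        if any(natarajan_shatters(hypotheses, S)
               for S in combinations(domain, size)):
            d = size
        else:
            break
    return d
```

For example, the class of all binary labelings of a two-point domain has Natarajan dimension 2, while a single constant hypothesis has dimension 0; the paper's second main result shows that, unlike the VC dimension in the binary case, finiteness of this quantity does not imply learnability.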
Pages: 943-955 (13 pages)