A Characterization of Multiclass Learnability

Cited by: 10
Authors:
Brukhim, Nataly [1 ]
Carmon, Daniel [2 ]
Dinur, Irit [3 ]
Moran, Shay [2 ,4 ]
Yehudayoff, Amir [2 ]
Affiliations:
[1] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[2] Technion, Dept Math, Haifa, Israel
[3] Weizmann Inst Sci, Dept Comp Sci, Rehovot, Israel
[4] Technion, Dept Comp Sci, Haifa, Israel
DOI:
10.1109/FOCS54457.2022.00093
Chinese Library Classification (CLC): TP301 [Theory, Methods]
Discipline Code: 081202
Abstract:
A seminal result in learning theory characterizes the PAC learnability of binary classes through the Vapnik-Chervonenkis dimension. Extending this characterization to the general multiclass setting has been open since the pioneering works on multiclass PAC learning in the late 1980s. This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014). The classical characterization of the binary case boils down to empirical risk minimization. In contrast, our characterization of the multiclass case involves a variety of algorithmic ideas; these include a natural setting we call list PAC learning. In the list learning setting, instead of predicting a single outcome for a given unseen input, the goal is to provide a short menu of predictions. Our second main result concerns the Natarajan dimension, which has been a central candidate for characterizing multiclass learnability. This dimension was introduced by Natarajan (1988) as a barrier for PAC learning. He further showed that it is the only barrier, provided that the number of labels is bounded. Whether the Natarajan dimension characterizes PAC learnability in general has been posed as an open question in several papers since. This work provides a negative answer: we construct a non-learnable class with Natarajan dimension 1. For the construction, we identify a fundamental connection between concept classes and topology (i.e., colorful simplicial complexes). We crucially rely on a deep and involved construction of hyperbolic pseudo-manifolds by Januszkiewicz and Świątkowski. It is interesting that hyperbolicity is directly related to learning problems that are difficult to solve even though no obvious barriers exist. This is another demonstration of the fruitful links between machine learning and other areas of mathematics.
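Background sketch (a paraphrase of standard definitions for readers of this record, not text from the paper; the symbols $\mathcal{H}$, $\mathcal{X}$, $\mathcal{Y}$, $\mu$, and $k$ are generic notation): a class $\mathcal{H} \subseteq \mathcal{Y}^{\mathcal{X}}$ Natarajan-shatters a finite set $S \subseteq \mathcal{X}$ if there are two labelings $f, g \colon S \to \mathcal{Y}$ that disagree at every point of $S$ such that every pointwise choice between them is realized by some member of $\mathcal{H}$; the Natarajan dimension is the largest size of a shattered set:
\[
\operatorname{Ndim}(\mathcal{H}) \;=\; \max\Bigl\{\, |S| \;:\; \exists\, f,g\colon S \to \mathcal{Y},\ \forall x \in S,\ f(x)\neq g(x),\ \forall B \subseteq S,\ \exists h \in \mathcal{H},\ h|_{B} = f|_{B} \text{ and } h|_{S\setminus B} = g|_{S\setminus B} \,\Bigr\}.
\]
For binary labels ($|\mathcal{Y}| = 2$) this coincides with the VC dimension. In the list PAC learning setting mentioned in the abstract, a $k$-list predictor outputs for each input $x$ a menu $\mu(x) \subseteq \mathcal{Y}$ with $|\mu(x)| \le k$, and its error under a distribution $D$ over $\mathcal{X} \times \mathcal{Y}$ is
\[
\operatorname{err}_{D}(\mu) \;=\; \Pr_{(x,y)\sim D}\bigl[\, y \notin \mu(x) \,\bigr].
\]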
Pages: 943-955
Page count: 13