Harnessing the Power of Choices in Decision Tree Learning

Cited by: 0
Authors
Blanc, Guy [1 ]
Lange, Jane [2 ]
Pabbaraju, Chirag [1 ]
Sullivan, Colin [1 ]
Tan, Li-Yang [1 ]
Tiwari, Mo [1 ]
Affiliations
[1] Stanford, Stanford, CA 94305 USA
[2] MIT, Cambridge, MA 02139 USA
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
We propose a simple generalization of standard and empirically successful decision tree learning algorithms such as ID3, C4.5, and CART. These algorithms, which have been central to machine learning for decades, are greedy in nature: they grow a decision tree by iteratively splitting on the best attribute. Our algorithm, Top-k, considers the k best attributes as possible splits instead of just the single best attribute. We demonstrate, theoretically and empirically, the power of this simple generalization. We first prove a greediness hierarchy theorem showing that for every k ∈ ℕ, Top-(k + 1) can be dramatically more powerful than Top-k: there are data distributions for which the former achieves accuracy 1 − ε, whereas the latter only achieves accuracy 1/2 + ε. We then show, through extensive experiments, that Top-k outperforms the two main approaches to decision tree learning: classic greedy algorithms and more recent "optimal decision tree" algorithms. On one hand, Top-k consistently enjoys significant accuracy gains over greedy algorithms across a wide range of benchmarks. On the other hand, Top-k is markedly more scalable than optimal decision tree algorithms and is able to handle dataset and feature set sizes that remain far beyond the reach of these algorithms. The code to reproduce our results is available at: https://github.com/SullivanC19/pydl8.5-topk.
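The abstract's core idea, considering the k highest-scoring attributes at each split rather than committing to the single best one, can be illustrated with a minimal sketch. This is not the authors' implementation (that is available at https://github.com/SullivanC19/pydl8.5-topk); the Gini-gain criterion, the depth budget, binary features, and all function names here are assumptions made purely for illustration.

```python
# Minimal sketch of the Top-k idea: rank candidate split attributes by gain,
# recursively try the k best of them, and keep whichever subtree scores best.
# Setting k = 1 recovers an ordinary greedy learner in the style of ID3/CART.

from collections import Counter

def gini_gain(X, y, feature):
    """Reduction in Gini impurity from splitting on binary feature `feature`."""
    def gini(labels):
        n = len(labels)
        if n == 0:
            return 0.0
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())
    left = [yy for xx, yy in zip(X, y) if xx[feature] == 0]
    right = [yy for xx, yy in zip(X, y) if xx[feature] == 1]
    n = len(y)
    return gini(y) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

def top_k_tree(X, y, depth, k):
    """Return (tree, training accuracy) for a depth-bounded Top-k tree.
    A tree is either ('leaf', label) or ('node', feature, left, right)."""
    majority = Counter(y).most_common(1)[0][0]
    base_acc = sum(1 for yy in y if yy == majority) / len(y)
    if depth == 0 or base_acc == 1.0:
        return ('leaf', majority), base_acc

    # Rank all features by gain and keep only the k best as candidate splits.
    n_features = len(X[0])
    candidates = sorted(range(n_features),
                        key=lambda f: gini_gain(X, y, f),
                        reverse=True)[:k]

    best_tree, best_acc = ('leaf', majority), base_acc
    for f in candidates:
        left_idx = [i for i, xx in enumerate(X) if xx[f] == 0]
        right_idx = [i for i, xx in enumerate(X) if xx[f] == 1]
        if not left_idx or not right_idx:
            continue  # degenerate split, nothing to gain
        lt, lacc = top_k_tree([X[i] for i in left_idx], [y[i] for i in left_idx], depth - 1, k)
        rt, racc = top_k_tree([X[i] for i in right_idx], [y[i] for i in right_idx], depth - 1, k)
        acc = (lacc * len(left_idx) + racc * len(right_idx)) / len(y)
        if acc > best_acc:
            best_tree, best_acc = ('node', f, lt, rt), acc
    return best_tree, best_acc

# Tiny usage example on XOR-labeled data with a depth budget of 2.
X = [(0, 0), (0, 1), (1, 0), (1, 1)] * 4
y = [a ^ b for a, b in X]
tree, acc = top_k_tree(X, y, depth=2, k=2)
print(acc)  # 1.0
```

The design choice this sketch highlights is the one the abstract emphasizes: the only change relative to a greedy learner is the `candidates[:k]` branching, which interpolates between fully greedy search (k = 1) and exhaustive "optimal decision tree" search (k equal to the number of features).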
Pages: 13