The information bottleneck problem and its applications in machine learning

Cited by: 82
Authors
Goldfeld Z. [1 ]
Polyanskiy Y. [2 ]
Affiliations
[1] The Electrical and Computer Engineering Department, Cornell University, Ithaca, NY 14850
[2] The Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139
Keywords
Deep learning; Information bottleneck; Machine learning; Mutual information; Neural networks;
DOI
10.1109/JSAIT.2020.2991561
Abstract
Inference capabilities of machine learning (ML) systems have skyrocketed in recent years, and they now play a pivotal role in various aspects of society. The goal in statistical learning is to use data to obtain simple algorithms for predicting a random variable Y from a correlated observation X. Since the dimension of X is typically huge, computationally feasible solutions should summarize it into a lower-dimensional feature vector T, from which Y is predicted. The algorithm will successfully make the prediction if T is a good proxy of Y, despite this dimensionality reduction. A myriad of ML algorithms (mostly employing deep learning (DL)) for finding such representations T based on real-world data are now available. While these methods are effective in practice, their success is hindered by the lack of a comprehensive theory to explain it. The information bottleneck (IB) theory recently emerged as a bold information-theoretic paradigm for analyzing DL systems. Adopting mutual information as the figure of merit, it suggests that the best representation T should be maximally informative about Y while minimizing the mutual information with X. In this tutorial we survey the information-theoretic origins of this abstract principle and its recent impact on DL. For the latter, we cover implications of the IB problem on DL theory, as well as practical algorithms inspired by it. Our goal is to provide a unified and cohesive description. A clear view of current knowledge is important for further leveraging IB and other information-theoretic ideas to study DL models. © 2020 IEEE.
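As a concrete illustration (not part of the record above): the IB principle described in the abstract — make T maximally informative about Y while minimizing the mutual information with X — is commonly expressed as minimizing the Lagrangian I(X;T) − β·I(T;Y) over stochastic encoders p(t|x), under the Markov chain T − X − Y. The sketch below evaluates this objective for discrete distributions; the function names and the toy joint distribution are illustrative choices, not from the paper.

```python
import numpy as np

def mutual_information(p_joint):
    """Mutual information I(A;B) in nats for a joint distribution array p(a, b)."""
    p_a = p_joint.sum(axis=1, keepdims=True)
    p_b = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0  # avoid log(0); zero-probability cells contribute nothing
    return float(np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_a * p_b)[mask])))

def ib_lagrangian(p_xy, p_t_given_x, beta):
    """Evaluate the IB objective I(X;T) - beta * I(T;Y).

    p_xy:        (|X|, |Y|) joint distribution of X and Y
    p_t_given_x: (|X|, |T|) stochastic encoder, each row sums to 1
    beta:        trade-off parameter between compression and relevance
    """
    p_x = p_xy.sum(axis=1)                # marginal p(x)
    p_xt = p_x[:, None] * p_t_given_x     # joint p(x, t)
    # Markov chain T - X - Y implies p(t, y) = sum_x p(t|x) p(x, y)
    p_ty = p_t_given_x.T @ p_xy           # joint p(t, y)
    return mutual_information(p_xt) - beta * mutual_information(p_ty)

# Toy example: binary X correlated with Y, identity encoder T = X.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
identity = np.eye(2)
print(ib_lagrangian(p_xy, identity, beta=1.0))
```

With the identity encoder, I(X;T) equals H(X) = log 2 and I(T;Y) equals I(X;Y), so varying β traces out the compression-relevance trade-off the tutorial analyzes.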
Pages: 19-38
Number of pages: 19
Related papers
50 records in total
  • [1] Successive Information Bottleneck and Applications in Deep Learning
    Yousfi, Yassine
    Akyol, Emrah
    2020 54TH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS, AND COMPUTERS, 2020, : 1210 - 1213
  • [2] Information Bottleneck: Theory and Applications in Deep Learning
    Geiger, Bernhard C.
    Kubin, Gernot
    ENTROPY, 2020, 22 (12)
  • [3] Information Bottleneck Problem Revisited
    Bayat, Farhang
    Wei, Shuangqing
    2019 57TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 2019, : 40 - 47
  • [4] MACHINE LEARNING AND ITS APPLICATIONS
    Yu, B.
    Zhang, Y.
    NEURAL NETWORK WORLD, 2016, 26 (03) : 203 - 204
  • [5] Applications of machine learning in information retrieval
    Cunningham, SJ
    Witten, IH
    Littin, J
    ANNUAL REVIEW OF INFORMATION SCIENCE AND TECHNOLOGY, 1999, 34 : 341 - 384
  • [6] Learning and generalization with the information bottleneck
    Shamir, Ohad
    Sabato, Sivan
    Tishby, Naftali
    THEORETICAL COMPUTER SCIENCE, 2010, 411 (29-30) : 2696 - 2711
  • [7] Learning and Generalization with the Information Bottleneck
    Shamir, Ohad
    Sabato, Sivan
    Tishby, Naftali
    ALGORITHMIC LEARNING THEORY, PROCEEDINGS, 2008, 5254 : 92 - 107
  • [8] Information Bottleneck and Aggregated Learning
    Soflaei, Masoumeh
    Zhang, Richong
    Guo, Hongyu
    Al-Bashabsheh, Ali
    Mao, Yongyi
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (12) : 14807 - 14820
  • [9] Extreme learning machine and its applications
    Ding, Shifei
    Xu, Xinzheng
    Nie, Ru
    NEURAL COMPUTING & APPLICATIONS, 2014, 25 (3-4) : 549 - 556
  • [10] Machine learning and its applications to biology
    Tarca, Adi L.
    Carey, Vincent J.
    Chen, Xue-Wen
    Romero, Roberto
    Draghici, Sorin
    PLOS COMPUTATIONAL BIOLOGY, 2007, 3 (06) : 953 - 963