Predictive Learning on Hidden Tree-Structured Ising Models

Cited by: 0
Authors
Nikolakakis, Konstantinos E. [1 ]
Kalogerias, Dionysios S. [2 ]
Sarwate, Anand D. [1 ]
Affiliations
[1] Rutgers State Univ, Dept Elect & Comp Engn, 94 Brett Rd, Piscataway, NJ 08854 USA
[2] Michigan State Univ, Dept Elect & Comp Engn, 428 S Shaw Lane, E Lansing, MI 48824 USA
Keywords
Ising Model; Chow-Liu Algorithm; Structure Learning; Predictive Learning; Distribution Estimation; Noisy Data; Hidden Markov Random Fields; Markov Random Field; Likelihood Estimation; Particle Filters; Graphical Model; Selection; Distributions; Consistency; Tutorial
DOI
None
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
We provide high-probability sample complexity guarantees for exact structure recovery and accurate predictive learning using noise-corrupted samples from an acyclic (tree-shaped) graphical model. The hidden variables follow a tree-structured Ising model distribution, whereas the observable variables are generated by a binary symmetric channel taking the hidden variables as its input (flipping each bit independently with some constant probability q ∈ [0, 1/2)). In the absence of noise, predictive learning on Ising models was recently studied by Bresler and Karzand (2020); this paper quantifies how noise in the hidden model impacts the tasks of structure recovery and marginal distribution estimation by proving upper and lower bounds on the sample complexity. Our results generalize state-of-the-art bounds reported in prior work, and they exactly recover the noiseless case (q = 0). In fact, for any tree with p vertices and probability of incorrect recovery δ > 0, the sufficient number of samples remains logarithmic as in the noiseless case, i.e., O(log(p/δ)), while the dependence on q is O(1/(1 - 2q)^4) for both aforementioned tasks. We also present a new equivalent of Isserlis' Theorem for sign-valued tree-structured distributions, yielding a new low-complexity algorithm for higher-order moment estimation.
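The observation model and recovery task in the abstract admit a compact illustration. Because a binary symmetric channel with flip probability q scales every hidden pairwise correlation E[X_i X_j] by the same factor (1 - 2q)^2, the maximum-weight spanning tree over the noisy empirical correlations coincides with the hidden tree, so a Chow-Liu-style procedure still applies to the corrupted samples. The sketch below is a minimal, hypothetical implementation (function names and parameter values are ours, not the paper's); it assumes +/-1 samples and uses Kruskal's algorithm with a union-find.

```python
import numpy as np

def chow_liu_tree(samples):
    """Maximum-weight spanning tree over |empirical pairwise correlations|.

    samples: (n, p) array of +/-1 observations. The BSC scales every hidden
    correlation by the same factor (1 - 2q)^2, so the maximizing tree
    computed from the noisy data coincides with the hidden tree.
    """
    n, p = samples.shape
    corr = np.abs(samples.T @ samples) / n               # |E_hat[Y_i Y_j]|
    edges = sorted(((corr[i, j], i, j)
                    for i in range(p) for j in range(i + 1, p)),
                   reverse=True)                          # heaviest edges first
    parent = list(range(p))                               # union-find forest

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]                 # path halving
            u = parent[u]
        return u

    tree = []
    for _, i, j in edges:                                 # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:                                      # edge joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Hypothetical end-to-end check: a hidden chain with edge correlation mu,
# observed through a BSC with flip probability q.
rng = np.random.default_rng(0)
p, n, mu, q = 8, 20000, 0.7, 0.2
X = np.empty((n, p))
X[:, 0] = rng.choice([-1.0, 1.0], size=n)
for j in range(1, p):                                     # P(X_j = X_{j-1}) = (1 + mu)/2
    X[:, j] = X[:, j - 1] * rng.choice([1.0, -1.0], size=n,
                                       p=[(1 + mu) / 2, (1 - mu) / 2])
Y = X * rng.choice([1.0, -1.0], size=(n, p), p=[1 - q, q])  # BSC corruption
print(sorted(chow_liu_tree(Y)))                           # expect (0,1), (1,2), ..., (6,7)
```

Heuristically, the O(1/(1 - 2q)^4) factor in the abstract reflects this uniform attenuation: the relevant correlation gaps shrink by (1 - 2q)^2, and the number of samples needed to resolve a gap of size ε grows like 1/ε^2. As a concrete instance of the Isserlis-type factorization for sign-valued tree models, in a zero-external-field chain with vertices i < j < k < l the products X_i X_j and X_k X_l involve disjoint, independent edge sign flips, so E[X_i X_j X_k X_l] = E[X_i X_j] E[X_k X_l].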
Pages: 82