Rates of convergence for Laplacian semi-supervised learning with low labeling rates

Cited: 7
Authors
Calder, Jeff [1 ]
Slepcev, Dejan [2 ]
Thorpe, Matthew [3 ,4 ]
Affiliations
[1] Univ Minnesota, Sch Math, Minneapolis, MN 55455 USA
[2] Carnegie Mellon Univ, Dept Math Sci, Pittsburgh, PA USA
[3] Univ Manchester, Dept Math, Manchester, England
[4] Alan Turing Inst, London NW1 2DB, England
Funding
US National Science Foundation; European Research Council
Keywords
Semi-supervised learning; Regression; Asymptotic consistency; Gamma-convergence; PDEs on graphs; Non-local variational problems; Random walks on graphs; CONSISTENCY; GRAPH; REGULARIZATION;
DOI
10.1007/s40687-022-00371-x
Chinese Library Classification
O1 [Mathematics]
Subject classification code
0701; 070101
Abstract
We investigate graph-based Laplacian semi-supervised learning at low labeling rates (ratios of labeled to total number of data points) and establish a threshold for the learning to be well posed. Laplacian learning uses harmonic extension on a graph to propagate labels. It is known that when the number of labeled data points is finite while the number of unlabeled data points tends to infinity, Laplacian learning becomes degenerate and the solutions become roughly constant with a spike at each labeled data point. In this work, we allow the number of labeled data points to grow to infinity as the total number of data points grows. We show that for a random geometric graph with length scale ε > 0, if the labeling rate β ≪ ε², then the solution becomes degenerate and spikes form. On the other hand, if β ≫ ε², then Laplacian learning is well posed and consistent with a continuum Laplace equation. Furthermore, in the well-posed setting we prove quantitative error estimates of O(εβ^(-1/2)) for the difference between the solutions of the discrete problem and the continuum PDE, up to logarithmic factors. We also study p-Laplacian regularization and show the same degeneracy result when β ≪ ε^p. The proofs of our well-posedness results use the random walk interpretation of Laplacian learning and PDE arguments, while the proofs of the ill-posedness results use Γ-convergence tools from the calculus of variations. We also present numerical results on synthetic and real data to illustrate our results.
Pages: 42
Related papers (50 in total)
  • [1] Rates of convergence for Laplacian semi-supervised learning with low labeling rates
    Jeff Calder
    Dejan Slepčev
    Matthew Thorpe
    Research in the Mathematical Sciences, 2023, 10
  • [3] Poisson Learning: Graph Based Semi-Supervised Learning At Very Low Label Rates
    Calder, Jeff
    Cook, Brendan
    Thorpe, Matthew
    Slepcev, Dejan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 119, 2020, 119
  • [4] Semi-supervised learning with regularized Laplacian
    Avrachenkov, K.
    Chebotarev, P.
    Mishenin, A.
    OPTIMIZATION METHODS & SOFTWARE, 2017, 32 (02) : 222 - 236
  • [5] Semi-supervised Multitask Learning for Sequence Labeling
    Rei, Marek
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017, : 2121 - 2130
  • [6] On the effectiveness of laplacian normalization for graph semi-supervised learning
    Johnson, Rie
    Zhang, Tong
    JOURNAL OF MACHINE LEARNING RESEARCH, 2007, 8 : 1489 - 1517
  • [7] A Modified Semi-Supervised Learning Algorithm on Laplacian Eigenmaps
    Zhao, Zhong-Qiu
    Li, Jun-Zhao
    Gao, Jun
    Wu, Xindong
    NEURAL PROCESSING LETTERS, 2010, 32 (01) : 75 - 82
  • [8] Laplacian-optimized diffusion for semi-supervised learning
    Budninskiy, Max
    Abdelaziz, Ameera
    Tong, Yiying
    Desbrun, Mathieu
    COMPUTER AIDED GEOMETRIC DESIGN, 2020, 79
  • [9] Semi-supervised classification with Laplacian multiple kernel learning
    Yang, Tao
    Fu, Dongmei
    NEUROCOMPUTING, 2014, 140 : 19 - 26