Efficient Learning of Transform-Domain LMS Filter Using Graph Laplacian

Cited by: 3
Authors
Batabyal, Tamal [1 ]
Weller, Daniel [2 ,3 ]
Kapur, Jaideep [1 ]
Acton, Scott T. [2 ]
Institutions
[1] Univ Virginia, Dept Neurol, Charlottesville, VA 22904 USA
[2] Univ Virginia, Dept Elect & Comp Engn, Charlottesville, VA 22904 USA
[3] KLA Corp, Ann Arbor, MI 48105 USA
Keywords
Convergence; Autocorrelation; Mathematical models; Transforms; Neurons; Linear systems; Discrete cosine transforms; Graph Laplacian; graph learning; Hebb-least mean squares (LMS) learning; LMS filter; split preconditioner; unitary transform; ADAPTIVE FILTERS; ALGORITHMS;
DOI
10.1109/TNNLS.2022.3144637
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transform-domain least mean squares (TDLMS) adaptive filters encompass the class of learning algorithms in which the input data are subjected to a data-independent unitary transform followed by a power normalization stage as preprocessing steps. Because conventional transformations are not data-dependent, this preconditioning procedure has been shown theoretically to improve the convergence of the least mean squares (LMS) filter only for certain classes of input data. One can therefore tailor the transformation to the class of data. In practice, however, if the class of input data is not known beforehand, it is difficult to decide which transformation to use. Thus, there is a need for a learning framework that obtains such a preconditioning transformation from the input data before applying it. It is hypothesized that the underlying topology of the data affects the selection of the transformation. With the input modeled as a weighted finite graph, our method, called preconditioning using graph (PrecoG), adaptively learns the desired transform by recursive estimation of the graph Laplacian matrix. We show the efficacy of the transform as a generalized split preconditioner on a linear system of equations and in Hebbian-LMS learning models. In terms of the improvement of the condition number after applying the transformation, PrecoG performs significantly better than existing state-of-the-art techniques that involve unitary and nonunitary transforms.
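To make the preprocessing pipeline described above concrete, the following is a minimal sketch of a conventional TDLMS filter using the orthonormal DCT-II as the fixed, data-independent unitary transform (the classical baseline that PrecoG replaces with a learned, graph-derived transform). It is not an implementation of PrecoG itself; the parameter names and values (`mu`, `beta`, `eps`, tap count) are illustrative choices, not taken from the paper.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: a standard data-independent unitary transform."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    T = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    T[0, :] = 1.0 / np.sqrt(n)  # rescale DC row for orthonormality
    return T

def tdlms(x, d, n_taps=8, mu=0.1, beta=0.99, eps=1e-8):
    """Transform-domain LMS: unitary transform + power normalization, then LMS.

    x : input signal, d : desired signal. Returns the error sequence.
    """
    T = dct_matrix(n_taps)
    w = np.zeros(n_taps)          # adaptive weights in the transform domain
    p = np.ones(n_taps)           # running power estimate per transform bin
    buf = np.zeros(n_taps)        # tapped delay line of recent input samples
    e = np.zeros(len(x))
    for k in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]
        u = T @ buf                          # transform-domain input vector
        p = beta * p + (1 - beta) * u**2     # power normalization estimate
        e[k] = d[k] - w @ u                  # a priori estimation error
        w += mu * e[k] * u / (p + eps)       # power-normalized LMS update
    return e

# System identification with colored (correlated) input, where plain LMS
# converges slowly and transform-domain preconditioning helps.
rng = np.random.default_rng(0)
x = np.convolve(rng.standard_normal(4000), [1.0, 0.9, 0.5], mode="same")
h = np.array([0.4, -0.3, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0])  # unknown FIR system
d = np.convolve(x, h)[: len(x)]
e = tdlms(x, d)
```

The per-bin division by the power estimate `p` is what equalizes the eigenvalue spread of the transformed input autocorrelation; the paper's contribution is to learn a better transform `T` from the graph Laplacian of the data rather than fixing it to the DCT.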
Pages: 7608 - 7620
Page count: 13
Related Papers
50 records total
  • [21] An Efficient Multilevel Transform-Domain Partial Distortion Search Algorithm
    Vemula, Kiran Kumar
    Neeraja, S.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2022, 32 (01) : 45 - 56
  • [24] The wavelet transform-domain adaptive filter for nonlinear acoustic echo cancellation
    Raghuwanshi, Jitendra
    Mishra, Amit
    Singh, Narendra
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (35-36) : 25853 - 25871
  • [25] Feedback active noise control based on transform-domain forward-backward LMS predictor
    Pavithra, S.
    Narasimhan, S. V.
    SIGNAL IMAGE AND VIDEO PROCESSING, 2014, 8 (03) : 479 - 487
  • [26] Transform-domain adaptive constrained normalized-LMS filtering scheme for time delay estimation
    Huang, Chi-Hui
    Lin, Shyh-Neng
    Chern, Shiunn-Jang
    Jian, Jiun-Je
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2006, E89A (08) : 2230 - 2234
  • [27] Efficient transform-domain bit-rate estimation technique for CABAC
    School of Computer Science, National University of Defense Technology, Changsha 410073, China
    TIEN TZU HSUEH PAO, 2008, (8): 1512 - 1518
  • [28] Marking and detection of text documents using transform-domain techniques
    Liu, Y
    Mant, J
    Wong, E
    Low, S
    SECURITY AND WATERMARKING OF MULTIMEDIA CONTENTS, 1999, 3657 : 317 - 328
  • [29] Speech bandwidth extension using transform-domain data hiding
    Kurada, Phaneendra
    Maruvada, Sailaja
    Sanagapallea, Koteswara Rao
    INTERNATIONAL JOURNAL OF SPEECH TECHNOLOGY, 2019, 22 (02) : 305 - 312
  • [30] SEISMIC DECONVOLUTION USING ITERATIVE TRANSFORM-DOMAIN SPARSE INVERSION
    Bai, Min
    Wu, Juan
    JOURNAL OF SEISMIC EXPLORATION, 2018, 27 (02): : 103 - 115