On neural-network implementations of k-nearest neighbor pattern classifiers

Cited: 0
Authors
Chen, YQ [1 ]
Damper, RI [1 ]
Nixon, MS [1 ]
Affiliation
[1] University of Southampton, Department of Electronics & Computer Science, Southampton SO17 1BJ, Hants, England
Keywords
Logic circuits; neural-network applications; neural-network hardware; pattern recognition
DOI
Not available
CLC Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology]
Discipline Codes
0808; 0809
Abstract
The k-nearest neighbor (k-NN) decision rule is the basis of a well-established, high-performance pattern-recognition technique, but its sequential implementation is inherently slow. More recently, feedforward neural networks trained by error backpropagation have been widely used to solve a variety of pattern-recognition problems. However, it is arguably unnecessary to learn such a computationally intensive solution when one (i.e., the k-NN rule) is effectively available a priori, especially given the well-known pitfalls of backpropagation. Accordingly, there is some interest in the literature in network implementations of this rule, so as to combine its known, good performance with the speed of a massively parallel realization. In this paper, we present a novel neural-network architecture which implements the k-NN rule and whose distinctive feature relative to earlier work is its synchronous (i.e., clocked) nature. Essentially, it has a layered, feedforward structure but, in its basic form, also incorporates feedback to control sequential selection of the k neighbors. The principal advantages of this new scheme are the avoidance of the stability problems which can arise with alternative asynchronous feedback (lateral-inhibition) circuits, the restriction of analog weights to the first hidden layer, and the fact that network design uses noniterative weight calculations rather than iterative backpropagation. Analysis of the network shows that it will converge to the desired solution (faithfully classifying the input pattern according to the k-NN rule) within (2k - 1) clock cycles. Apart from minor changes which can be effected externally, the same design serves for any value of k. The space complexity of the 'brute-force' network implementation is O(N²) units, where N is the number of training patterns, and it has O(N²d) analog weights, where d is the dimensionality of these patterns.
Thus, some modifications to reduce the required number of units (and, thereby, weighted connections) are considered. Overall, this paper points the way to high-speed, parallel implementations of proven pattern-classification techniques.
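To make the decision rule itself concrete, here is a minimal sketch of the sequential k-NN classifier that the proposed architecture parallelizes. The function name, data, and use of Euclidean distance are illustrative assumptions, not taken from the paper; the O(N) distance scan over all training patterns is the step whose slowness motivates the hardware realization.

```python
# Minimal sequential k-NN classifier (illustrative sketch; names and
# data are hypothetical, not from the paper).
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    patterns, using Euclidean distance.
    `train` is a list of (feature_vector, label) pairs."""
    # Rank all N training patterns by distance to the query: the
    # sequential scan that a parallel network implementation avoids.
    ranked = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Illustrative 2-D training set with two classes (d = 2, N = 6).
patterns = [((0.0, 0.0), 'A'), ((0.1, 0.2), 'A'), ((0.2, 0.1), 'A'),
            ((1.0, 1.0), 'B'), ((0.9, 1.1), 'B'), ((1.1, 0.9), 'B')]
print(knn_classify(patterns, (0.15, 0.15), k=3))  # query near class A
```

A 'brute-force' network realization of this rule stores one unit per training-pattern comparison, giving the O(N²) units and O(N²d) analog weights cited in the abstract.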
Pages: 622-629
Page count: 8