Learning subspace classifiers and error-corrective feature extraction

Citations: 0
Authors
Laaksonen, J [1 ]
Oja, E [1 ]
Affiliations
[1] Helsinki Univ Technol, Lab Comp & Informat Sci, FIN-02015 Helsinki, Finland
Keywords
statistical classification; subspace methods; adaptive classifiers; feature extraction; handwritten digit recognition;
DOI
10.1142/S0218001498000270
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Subspace methods are a powerful class of statistical pattern classification algorithms. The subspaces form semiparametric representations of the pattern classes in the form of principal components. In this sense, subspace classification methods are an application of classical optimal data compression techniques. Additionally, the subspace formalism can be given a neural network interpretation. There are learning versions of the subspace classification methods, in which error-driven learning procedures are applied to the subspaces in order to reduce the number of misclassified vectors. An algorithm for iterative selection of the subspace dimensions is presented in this paper. Likewise, a modified formula for calculating the projection lengths in the subspaces is investigated. The principle of adaptive learning in subspace methods can further be applied to feature extraction. In our work, we have studied two adaptive feature extraction schemes. The adaptation process is directed by errors occurring in the classifier. Unlike most traditional classifier models which take the preceding feature extraction stage as given, this scheme allows for reducing the loss of information in the feature extraction stage. The enhanced overall classification performance resulting from the added adaptivity is demonstrated with experiments in which recognition of handwritten digits has been used as an exemplary application.
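The classification principle the abstract describes can be sketched in a few lines: each class is represented by the leading principal components of its training samples, and a new vector is assigned to the class whose subspace captures the largest squared projection length. The following is a minimal illustrative sketch of such a CLAFIC-style subspace classifier, using NumPy; the function names and the toy data are assumptions for illustration, not the authors' implementation (in particular, the error-driven learning of subspaces and the adaptive feature extraction stages are not shown).

```python
import numpy as np

def fit_subspaces(X_by_class, dim):
    """Fit a principal-component subspace of dimension `dim` to each class.

    For each class, returns an orthonormal basis U (d x dim) spanned by the
    leading principal directions of that class's sample matrix, as in the
    classical (non-learning) CLAFIC subspace method.
    """
    bases = {}
    for label, X in X_by_class.items():
        # SVD of the d x n sample matrix; columns of U are principal directions.
        U, _, _ = np.linalg.svd(X.T, full_matrices=False)
        bases[label] = U[:, :dim]
    return bases

def classify(x, bases):
    """Assign x to the class whose subspace yields the largest squared
    projection length ||U^T x||^2 (the standard subspace decision rule)."""
    scores = {label: float(np.sum((U.T @ x) ** 2))
              for label, U in bases.items()}
    return max(scores, key=scores.get)

# Toy usage: class "a" lies along the first axis, class "b" along the second.
train = {
    "a": np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]]),
    "b": np.array([[0.0, 1.0, 0.0], [0.0, 2.0, 0.0], [0.0, 3.0, 0.0]]),
}
bases = fit_subspaces(train, dim=1)
label = classify(np.array([2.0, 0.1, 0.0]), bases)
```

The learning versions mentioned in the abstract would iteratively rotate these bases in response to misclassified training vectors, rather than fixing them once from the principal components.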
Pages: 423 / 436
Page count: 14
Related Papers
50 records
  • [41] Integrated Phoneme Subspace Method for Speech Feature Extraction
    Park, Hyunsin
    Takiguchi, Tetsuya
    Ariki, Yasuo
    EURASIP JOURNAL ON AUDIO SPEECH AND MUSIC PROCESSING, 2009
  • [42] Feature extraction based on subspace methods for regression problems
    Kwak, Nojun
    Lee, Jung-Won
    NEUROCOMPUTING, 2010, 73 (10-12) : 1740 - 1751
  • [43] Multivariate feature selection using random subspace classifiers for gene expression data
    Kamath, Vidya P.
    Hall, Lawrence O.
    Yeatman, Timothy J.
    Eschrich, Steven A.
    PROCEEDINGS OF THE 7TH IEEE INTERNATIONAL SYMPOSIUM ON BIOINFORMATICS AND BIOENGINEERING, VOLS I AND II, 2007, : 1041 - +
  • [44] Unsupervised feature extraction via kernel subspace techniques
    Teixeira, A. R.
    Tome, A. M.
    Lang, E. W.
    NEUROCOMPUTING, 2011, 74 (05) : 820 - 830
  • [46] A subspace method for feature extraction using independent components
    Kinukawa, S
    Kotani, M
    Ozawa, S
    SICE 2002: PROCEEDINGS OF THE 41ST SICE ANNUAL CONFERENCE, VOLS 1-5, 2002, : 746 - 749
  • [47] Feature extraction of helicopter acoustic signal with subspace decomposition
    Zhou, ZL
    ICEMI'99: FOURTH INTERNATIONAL CONFERENCE ON ELECTRONIC MEASUREMENT & INSTRUMENTS, VOLS 1 AND 2, CONFERENCE PROCEEDINGS, 1999, : 204 - 208
  • [48] Corrective feedback and persistent learning for information extraction
    Culotta, Aron
    Kristjansson, Trausti
    McCallum, Andrew
    Viola, Paul
    ARTIFICIAL INTELLIGENCE, 2006, 170 (14-15) : 1101 - 1122
  • [49] Joint subspace learning and subspace clustering based unsupervised feature selection
    Xiao, Zijian
    Chen, Hongmei
    Mi, Yong
    Luo, Chuan
    Horng, Shi-Jinn
    Li, Tianrui
    NEUROCOMPUTING, 2025, 635
  • [50] Learning feature-projection based classifiers
    Dayanik, Aynur
    EXPERT SYSTEMS WITH APPLICATIONS, 2012, 39 (04) : 4532 - 4544