Feature-augmented Random Vector Functional-link Neural Network

Cited by: 0
Authors
Long M.-S. [1 ]
Wang S.-T. [1 ]
Affiliations
[1] School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi
Source
Ruan Jian Xue Bao/Journal of Software | 2024, Vol. 35, No. 6
Keywords
broad learning system; feature augmentation; fuzzy inference system; interpretability; random vector functional-link neural network (RVFLNN); Sigmoid function
DOI
10.13328/j.cnki.jos.006920
Abstract
The broad-learning-based dynamic fuzzy inference system (BL-DFIS) can automatically assemble simplified fuzzy rules and achieve high accuracy on classification tasks. However, on large and complex datasets, BL-DFIS may have to generate too many fuzzy rules to reach satisfactory accuracy, which undermines its interpretability. To circumvent this bottleneck, this study proposes a fuzzy neural network called the feature-augmented random vector functional-link neural network (FA-RVFLNN), which achieves an excellent trade-off between classification performance and interpretability. In the proposed network, an RVFLNN that takes the original data as input serves as the primary structure, and BL-DFIS serves as a performance supplement, which means that FA-RVFLNN contains direct links that boost the performance of the whole system. Because the enhancement nodes of the primary structure use Sigmoid activation functions, its inference mechanism can be explained with a fuzzy logic operator (I-OR); moreover, the original input data, whose meaning is clear, help to explain the inference rules of the primary structure. With the support of direct links, FA-RVFLNN can learn more useful information through its enhancement nodes, feature nodes, and fuzzy nodes. The experimental results indicate that FA-RVFLNN indeed eases the rule explosion caused by excessive enhancement nodes in the primary structure and improves the interpretability of the BL-DFIS inside it (the average number of fuzzy rules is reduced by about 50%), while remaining competitive in generalization performance and network size. © 2024 Chinese Academy of Sciences. All rights reserved.
Pages: 2903-2922 (19 pages)
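
As a rough illustration of the primary structure described in the abstract, below is a minimal NumPy sketch of an RVFLNN with direct input-output links: randomly weighted Sigmoid enhancement nodes concatenated with the original inputs, followed by a ridge-regression readout. The class and parameter names here are hypothetical, and the sketch omits the BL-DFIS feature and fuzzy nodes of the full FA-RVFLNN; it is not the authors' implementation.

    # Minimal RVFLNN sketch with direct links (illustrative only; assumed
    # design choices: random Gaussian hidden weights, Sigmoid activation,
    # closed-form ridge-regression readout).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class RVFLNN:
        def __init__(self, n_enhancement=100, reg=1e-3, seed=0):
            self.n_enhancement = n_enhancement   # number of enhancement nodes
            self.reg = reg                       # ridge regularization strength
            self.rng = np.random.default_rng(seed)

        def _design(self, X):
            # Enhancement nodes: fixed random projection + Sigmoid activation.
            H = sigmoid(X @ self.W + self.b)
            # Direct links: concatenate the original inputs with the
            # enhancement-node outputs before the linear readout.
            return np.hstack([X, H])

        def fit(self, X, Y):
            n_features = X.shape[1]
            self.W = self.rng.standard_normal((n_features, self.n_enhancement))
            self.b = self.rng.standard_normal(self.n_enhancement)
            D = self._design(X)
            # Closed-form ridge solution for the output weights.
            A = D.T @ D + self.reg * np.eye(D.shape[1])
            self.beta = np.linalg.solve(A, D.T @ Y)
            return self

        def predict(self, X):
            # For classification, Y can be one-hot targets and the predicted
            # label taken as the argmax over the output columns.
            return self._design(X) @ self.beta

In such a structure, the direct links let the linear readout see the original, clearly meaningful features alongside the enhancement-node outputs, which is the property the abstract ties to the interpretability of the primary structure.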