Neutrosophic Compound Orthogonal Neural Network and Its Applications in Neutrosophic Function Approximation

Cited: 5
Authors
Ye, Jun [1 ]
Cui, Wenhua [1 ]
Affiliations
[1] Shaoxing Univ, Dept Elect & Informat Engn, 508 Huancheng West Rd, Shaoxing 312000, Peoples R China
Source
SYMMETRY-BASEL | 2019, Vol. 11, Issue 2
Funding
National Natural Science Foundation of China;
Keywords
Neutrosophic compound orthogonal neural network; Neutrosophic number; Neutrosophic function; Function approximation;
DOI
10.3390/sym11020147
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Neural networks are powerful universal approximation tools. They have been utilized for function/data approximation, classification, pattern recognition, and various other applications. Uncertain or interval values result from the incompleteness of measurements, human observation, and estimation in the real world. Thus, a neutrosophic number (NsN) can represent both certain and uncertain information in an indeterminate setting and implies a changeable interval depending on its indeterminate range. In NsN settings, however, existing interval neural networks cannot deal with uncertain problems involving NsNs. Therefore, this original study proposes a neutrosophic compound orthogonal neural network (NCONN) for the first time, containing NsN weight values, NsN inputs and outputs, and hidden-layer neutrosophic neuron functions, to approximate neutrosophic functions/NsN data. In the proposed NCONN model, the single input and single output neurons are the transmission nodes of NsN data, and the hidden-layer neutrosophic neurons are constructed by the compound functions of both the Chebyshev neutrosophic orthogonal polynomial and the neutrosophic sigmoid function. In addition, illustrative and actual examples are provided to verify the effectiveness and learning performance of the proposed NCONN model for approximating neutrosophic nonlinear functions and NsN data. The contribution of this study is that the proposed NCONN can handle the approximation problems of neutrosophic nonlinear functions and NsN data. Its main advantages are a simple learning algorithm, faster learning convergence, and higher learning accuracy in indeterminate/NsN environments.
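The abstract describes hidden-layer neurons built by compounding Chebyshev orthogonal polynomials with a sigmoid function and applied to interval-valued (NsN) data. The following is a minimal sketch, not the authors' implementation: it assumes an NsN z = a + bI with I in [I_lo, I_hi] is treated as the interval [a + b*I_lo, a + b*I_hi], that each hidden neuron computes T_j(sigmoid(x)), and that interval endpoints are propagated separately as a simplification of full interval arithmetic. Names such as nconn_forward are illustrative only.

```python
# Hedged sketch of a compound orthogonal hidden layer on interval (NsN) inputs.
# Assumptions (not taken from the paper): endpoint-wise interval propagation,
# compound neuron form T_j(sigmoid(x)), and the variable/function names used here.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chebyshev_basis(s, order):
    """Chebyshev polynomials T_0..T_{order-1} of s via the recurrence
    T_0 = 1, T_1 = s, T_{j+1} = 2*s*T_j - T_{j-1}."""
    t = [np.ones_like(s), s]
    for _ in range(2, order):
        t.append(2.0 * s * t[-1] - t[-2])
    return np.stack(t[:order])

def nconn_forward(x_lo, x_hi, w_lo, w_hi, order):
    """Forward pass for one interval input [x_lo, x_hi] with interval output
    weights [w_lo, w_hi]; the two endpoints are pushed through the compound
    hidden layer separately and the results re-sorted into an output interval."""
    h_lo = chebyshev_basis(sigmoid(x_lo), order)   # hidden outputs, lower endpoint
    h_hi = chebyshev_basis(sigmoid(x_hi), order)   # hidden outputs, upper endpoint
    y_a = float(w_lo @ h_lo)
    y_b = float(w_hi @ h_hi)
    return min(y_a, y_b), max(y_a, y_b)            # output interval [y_lo, y_hi]

# Example: the NsN input 2 + 0.5*I with I in [0, 1] implies the interval
# [2.0, 2.5]; evaluate it with 4 hidden neurons and random interval weights.
rng = np.random.default_rng(0)
w_lo = rng.normal(size=4)
w_hi = w_lo + 0.1 * rng.random(4)   # keep upper weights >= lower weights
print(nconn_forward(2.0, 2.5, w_lo, w_hi, order=4))
```

Endpoint-wise propagation is exact only where the compound neuron functions are monotone over the input interval; a faithful treatment would use interval arithmetic for the non-monotone Chebyshev terms.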
Pages: 9