Show simple item record

dc.creator: Vuković, Najdan
dc.creator: Miljković, Zoran
dc.date.accessioned: 2022-09-19T17:12:48Z
dc.date.available: 2022-09-19T17:12:48Z
dc.date.issued: 2013
dc.identifier.issn: 0893-6080
dc.identifier.uri: https://machinery.mas.bg.ac.rs/handle/123456789/1723
dc.description.abstract: A radial basis function (RBF) neural network is built from a number of RBF neurons, and such networks are among the most widely used neural networks for modeling nonlinear problems in engineering. The conventional RBF neuron is usually based on a Gaussian activation function with a single width per activation function. This feature restricts the neuron's ability to model complex nonlinear problems. To overcome the limitation of a single scale, this paper presents a neural network with a similar but distinct activation function: the hyper basis function (HBF). The HBF allows different scaling of the input dimensions, which provides better generalization when dealing with complex nonlinear problems in engineering practice. The HBF generalizes the Gaussian neuron by applying a Mahalanobis-like distance as the distance metric between an input training sample and the prototype vector. Compared to the RBF, the HBF neuron has more parameters to optimize, but an HBF neural network needs fewer neurons to memorize the relationship between the input and output sets and achieve good generalization. However, recent results on HBF neural network performance have shown that an optimal way of constructing this type of network is needed; this paper addresses the issue by modifying a sequential learning algorithm for the HBF neural network that exploits the concept of a neuron's significance and allows growing and pruning of HBF neurons during the learning process. An extensive experimental study shows that an HBF neural network trained with the developed learning algorithm achieves lower prediction error and a more compact network.
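The abstract contrasts the single-width Gaussian RBF neuron with the HBF neuron's Mahalanobis-like distance. The following is a minimal illustrative sketch, not the paper's implementation: it assumes a diagonal scaling matrix and uses hypothetical function names to show how a per-dimension scale vector generalizes the single width.

import numpy as np

def rbf_activation(x, c, sigma):
    # Conventional Gaussian RBF neuron: a single width sigma
    # is shared by every input dimension.
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def hbf_activation(x, c, s):
    # HBF neuron (sketch): a per-dimension scale vector s yields the
    # Mahalanobis-like distance (x - c)^T diag(s)^(-2) (x - c),
    # generalizing the single-width Gaussian above.
    d = (x - c) / s
    return np.exp(-np.dot(d, d))

x = np.array([0.5, 2.0])   # input training sample
c = np.array([0.0, 0.0])   # prototype (center) vector
print(rbf_activation(x, c, sigma=1.0))               # one scale for both dimensions
print(hbf_activation(x, c, s=np.array([0.5, 4.0])))  # separate scale per dimension

With a full (non-diagonal) scaling matrix M, the exponent becomes (x - c)^T M (x - c), the general Mahalanobis form the abstract alludes to; the diagonal case above is the simplest setting in which the extra per-dimension parameters appear.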
dc.publisher: Pergamon-Elsevier Science Ltd, Oxford
dc.relation: info:eu-repo/grantAgreement/MESTD/Technological Development (TD or TR)/35004/RS//
dc.rights: restrictedAccess
dc.source: Neural Networks
dc.subject: Sequential learning
dc.subject: Neuron's significance
dc.subject: Hyper basis function
dc.subject: Feedforward neural networks
dc.subject: Extended Kalman Filter
dc.title: A growing and pruning sequential learning algorithm of hyper basis function neural network for function approximation
dc.type: article
dc.rights.license: ARR
dc.citation.epage: 226
dc.citation.other: 46: 210-226
dc.citation.rank: M21
dc.citation.spage: 210
dc.citation.volume: 46
dc.identifier.doi: 10.1016/j.neunet.2013.06.004
dc.identifier.pmid: 23811384
dc.identifier.scopus: 2-s2.0-84879753900
dc.identifier.wos: 000325308900022
dc.type.version: publishedVersion

