Constructive Training of Probabilistic Neural Network

Cite This


BERTHOLD, Michael R., Jay DIAMOND, 1998. Constructive Training of Probabilistic Neural Network. In: Neurocomputing 19, pp. 167-183. Available under: doi: 10.1016/S0925-2312(97)00063-5

@article{Berthold1998Const-5586, title={Constructive Training of Probabilistic Neural Network}, year={1998}, doi={10.1016/S0925-2312(97)00063-5}, volume={19}, journal={Neurocomputing}, pages={167--183}, author={Berthold, Michael R. and Diamond, Jay} }

Abstract

This paper presents an easy-to-use, constructive training algorithm for Probabilistic Neural Networks, a special type of Radial Basis Function Network. In contrast to other algorithms, the network topology does not need to be predefined. The proposed algorithm introduces new hidden units whenever necessary and individually adjusts the shape of already existing units to minimize the risk of misclassification. This leads to smaller networks than classical PNNs and therefore enables the use of large datasets. Using eight classification benchmarks from the StatLog project, the new algorithm is compared to other state-of-the-art classification methods. It is demonstrated that the proposed algorithm generates Probabilistic Neural Networks that achieve comparable classification performance on these datasets. Only two rather uncritical parameters need to be adjusted manually, and there is no danger of overtraining: the algorithm clearly indicates the end of training. In addition, the generated networks are small because the hidden layer contains no redundant neurons.

First publ. in: Neurocomputing 19 (1998), pp. 167-183. Language: English. License: Attribution-NonCommercial-NoDerivs 2.0 Generic.
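The constructive scheme the abstract describes — adding a hidden unit only when a training example is not yet covered by its own class, and shrinking conflicting units of other classes — can be sketched as follows. This is a minimal illustration under assumed details (Gaussian kernels, the shrinking rule, and the class names `ConstructivePNN`, `theta_plus`, `theta_minus` are all this sketch's assumptions), not the authors' exact algorithm; only the idea of two coverage/conflict thresholds comes from the abstract.

```python
import numpy as np

class ConstructivePNN:
    """Sketch of a constructive PNN: Gaussian hidden units are added on
    demand and shrunk to avoid misclassification (illustrative only)."""

    def __init__(self, theta_plus=0.4, theta_minus=0.2):
        # The two manually chosen thresholds mentioned in the abstract.
        self.theta_plus = theta_plus    # required activation of own class
        self.theta_minus = theta_minus  # tolerated activation of other classes
        self.centers, self.sigmas, self.labels = [], [], []

    def _activation(self, x, c, s):
        # Gaussian radial basis function centered at c with width s.
        return np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))

    def fit_epoch(self, X, y):
        for x, t in zip(X, y):
            # Is x already covered by a unit of its own class?
            covered = any(
                self._activation(x, c, s) >= self.theta_plus
                for c, s, l in zip(self.centers, self.sigmas, self.labels)
                if l == t
            )
            if not covered:
                # Introduce a new hidden unit centered on this example.
                self.centers.append(np.asarray(x, dtype=float))
                self.sigmas.append(1.0)
                self.labels.append(t)
            # Shrink conflicting units of other classes so their
            # activation at x drops to theta_minus.
            for i, (c, s, l) in enumerate(
                zip(self.centers, self.sigmas, self.labels)
            ):
                if l != t and self._activation(x, c, s) > self.theta_minus:
                    d2 = np.sum((x - c) ** 2)
                    self.sigmas[i] = np.sqrt(d2 / (-2 * np.log(self.theta_minus)))

    def predict(self, x):
        # Classify by the class with the largest summed unit activation.
        scores = {}
        for c, s, l in zip(self.centers, self.sigmas, self.labels):
            scores[l] = scores.get(l, 0.0) + self._activation(x, c, s)
        return max(scores, key=scores.get)
```

Because units are created only when coverage fails, a pass in which no unit is added or shrunk signals the end of training — the "clear indication" the abstract refers to.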



Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivs 2.0 Generic.
