Constructive Training of Probabilistic Neural Networks


Files in this item

Prüfsumme: MD5:317b1ae9cb2020f8cfd220112da2e7e6

BERTHOLD, Michael R., Jay DIAMOND, 1998. Constructive Training of Probabilistic Neural Networks. In: Neurocomputing. 19, pp. 167-183. Available under: doi: 10.1016/S0925-2312(97)00063-5

@article{Berthold1998Const-5586, title={Constructive Training of Probabilistic Neural Networks}, year={1998}, doi={10.1016/S0925-2312(97)00063-5}, volume={19}, journal={Neurocomputing}, pages={167--183}, author={Berthold, Michael R. and Diamond, Jay} }

<rdf:RDF xmlns:dcterms="" xmlns:dc="" xmlns:rdf="" xmlns:bibo="" xmlns:dspace="" xmlns:foaf="" xmlns:void="" xmlns:xsd="" > <rdf:Description rdf:about=""> <dspace:isPartOfCollection rdf:resource=""/> <foaf:homepage rdf:resource="http://localhost:8080/jspui"/> <dc:format>application/pdf</dc:format> <dcterms:rights rdf:resource=""/> <dcterms:issued>1998</dcterms:issued> <dcterms:title>Constructive Training of Probabilistic Neural Networks</dcterms:title> <dc:rights>terms-of-use</dc:rights> <bibo:uri rdf:resource=""/> <dc:contributor>Diamond, Jay</dc:contributor> <dcterms:hasPart rdf:resource=""/> <dc:contributor>Berthold, Michael R.</dc:contributor> <dc:language>eng</dc:language> <dcterms:abstract xml:lang="eng">This paper presents an easy-to-use, constructive training algorithm for Probabilistic Neural Networks, a special type of Radial Basis Function Network. In contrast to other algorithms, predefinition of the network topology is not required. The proposed algorithm introduces new hidden units whenever necessary and individually adjusts the shape of already existing units to minimize the risk of misclassification. This leads to smaller networks than classical PNNs and therefore enables the use of large datasets. Using eight classification benchmarks from the StatLog project, the new algorithm is compared to other state-of-the-art classification methods. It is demonstrated that the proposed algorithm generates Probabilistic Neural Networks that achieve comparable classification performance on these datasets. Only two rather uncritical parameters need to be adjusted manually, and there is no danger of overtraining: the algorithm clearly indicates the end of training. In addition, the generated networks are small due to the lack of redundant neurons in the hidden layer.</dcterms:abstract> <dcterms:isPartOf rdf:resource=""/> <dcterms:available rdf:datatype="">2011-03-24T15:56:36Z</dcterms:available> <dc:creator>Berthold, Michael R.</dc:creator> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dspace:hasBitstream rdf:resource=""/> <dc:date rdf:datatype="">2011-03-24T15:56:36Z</dc:date> <dcterms:bibliographicCitation>First publ. in: Neurocomputing 19 (1998), pp. 167-183</dcterms:bibliographicCitation> <dc:creator>Diamond, Jay</dc:creator> </rdf:Description> </rdf:RDF>
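The abstract describes a constructive procedure: new hidden units (prototypes) are committed whenever a training pattern is not yet covered, and the widths of conflicting units of other classes are shrunk to reduce misclassification risk. The following is a minimal sketch of such a cover/commit/shrink loop with Gaussian prototypes, assuming two illustrative thresholds `THETA_PLUS` and `THETA_MINUS` stand in for the paper's "two rather uncritical parameters"; all names and values are hypothetical, not the authors' code.

```python
import math

# Illustrative thresholds (assumed, not taken from the paper)
THETA_PLUS, THETA_MINUS = 0.4, 0.2

def activation(proto, x):
    # Gaussian response with a prototype-specific width sigma
    d2 = sum((a - b) ** 2 for a, b in zip(proto["center"], x))
    return math.exp(-d2 / (2 * proto["sigma"] ** 2))

def train_epoch(protos, data):
    """One constructive pass: cover, commit, shrink (a sketch only)."""
    for x, label in data:
        # Cover: is the pattern already matched by a same-class prototype?
        covered = [p for p in protos
                   if p["label"] == label and activation(p, x) >= THETA_PLUS]
        if covered:
            # Reinforce the best-matching prototype
            max(covered, key=lambda p: activation(p, x))["weight"] += 1
        else:
            # Commit: introduce a new hidden unit centered on the pattern
            protos.append({"center": list(x), "sigma": 1.0,
                           "label": label, "weight": 1})
        # Shrink: narrow conflicting prototypes of other classes so their
        # activation at this pattern drops to THETA_MINUS
        for p in protos:
            if p["label"] != label and activation(p, x) > THETA_MINUS:
                d2 = sum((a - b) ** 2 for a, b in zip(p["center"], x))
                p["sigma"] = math.sqrt(d2 / (-2 * math.log(THETA_MINUS)))
    return protos

def classify(protos, x):
    # Sum weighted kernel responses per class, pick the largest
    scores = {}
    for p in protos:
        scores[p["label"]] = scores.get(p["label"], 0.0) + p["weight"] * activation(p, x)
    return max(scores, key=scores.get)
```

Because coverage of all training patterns is an explicit stopping condition, an epoch in which no prototype is added or shrunk signals the end of training, which matches the abstract's claim that the algorithm "clearly indicates the end of training."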

File downloads since 01.10.2014 (information on access statistics)

BeDi98_dda_neurocomp.pdf 287

This document appears in:

terms-of-use Unless otherwise indicated, the license is described as: terms-of-use
