KOPS - The Institutional Repository of the University of Konstanz

Modeling Morphological Priming in German With Naive Discriminative Learning


Cite This

Files in this item

Checksum (MD5): 81d70df84f9be75eae8f246ec9236028

BAAYEN, R. Harald, Eva SMOLKA, 2020. Modeling Morphological Priming in German With Naive Discriminative Learning. In: Frontiers in Communication. Frontiers Media. 5, 17. eISSN 2297-900X. Available under: doi: 10.3389/fcomm.2020.00017

@article{Baayen2020-04-08Model-50100,
  title   = {Modeling Morphological Priming in German With Naive Discriminative Learning},
  author  = {Baayen, R. Harald and Smolka, Eva},
  journal = {Frontiers in Communication},
  year    = {2020},
  volume  = {5},
  doi     = {10.3389/fcomm.2020.00017},
  note    = {Article Number: 17}
}

<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/rdf/resource/123456789/50100">
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/45"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/50100/3/Baayen_2-1c7vxumxn36n60.pdf"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/50100"/>
    <dcterms:title>Modeling Morphological Priming in German With Naive Discriminative Learning</dcterms:title>
    <dc:contributor>Baayen, R. Harald</dc:contributor>
    <dc:creator>Baayen, R. Harald</dc:creator>
    <foaf:homepage rdf:resource="http://localhost:8080/jspui"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/rdf/resource/123456789/45"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2020-07-02T11:25:55Z</dc:date>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/50100/3/Baayen_2-1c7vxumxn36n60.pdf"/>
    <dcterms:rights rdf:resource="https://creativecommons.org/licenses/by/4.0/"/>
    <dc:creator>Smolka, Eva</dc:creator>
    <dc:contributor>Smolka, Eva</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2020-07-02T11:25:55Z</dcterms:available>
    <dcterms:abstract xml:lang="eng">Both localist and connectionist models, based on experimental results obtained for English and French, assume that the degree of semantic compositionality of a morphologically complex word is reflected in how it is processed. Since priming experiments using English and French morphologically related prime-target pairs reveal stronger priming when complex words are semantically transparent (e.g., refill–fill) compared to semantically more opaque pairs (e.g., restrain–strain), localist models set up connections between complex words and their stems only for semantically transparent pairs. Connectionist models have argued that the effect of transparency should arise as an epiphenomenon in PDP networks. However, for German, a series of studies has revealed equivalent priming for both transparent and opaque prime-target pairs, which suggests mediation of lexical access by the stem, independent of degrees of semantic compositionality. This study reports a priming experiment that replicates equivalent priming for transparent and opaque pairs. We show that these behavioral results can be straightforwardly modeled by a computational implementation of Word and Paradigm Morphology (WPM), Naive Discriminative Learning (NDL). Like WPM, NDL eschews the theoretical construct of the morpheme. NDL succeeds in modeling the German priming data by inspecting the extent to which a discrimination network pre-activates the target lexome from the orthographic properties of the prime. Measures derived from an NDL network, complemented with a semantic similarity measure derived from distributional semantics, predict lexical decision latencies with somewhat improved precision compared to classical measures, such as word frequency, prime type, and human association ratings. We discuss both the methodological implications of our results and their implications for models of the mental lexicon.</dcterms:abstract>
    <dc:language>eng</dc:language>
    <dcterms:issued>2020-04-08</dcterms:issued>
  </rdf:Description>
</rdf:RDF>

Downloads since Jul 2, 2020

Baayen_2-1c7vxumxn36n60.pdf: 24 downloads


Except where otherwise noted, this item is licensed under a Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/).
