Publication: From prosthetic memory to prosthetic denial : auditing whether large language models are prone to mass atrocity denialism
Abstract
The proliferation of large language models (LLMs) can influence how historical narratives are disseminated and perceived. This study explores the implications of LLMs’ responses on the representation of mass atrocity memory, examining whether generative AI systems contribute to prosthetic memory, i.e., mediated experiences of historical events, or to what we term “prosthetic denial,” the AI-mediated erasure or distortion of atrocity memories. We argue that LLMs function as interfaces that can elicit prosthetic memories and, therefore, act as experiential sites for memory transmission, but also introduce risks of denialism, particularly when their outputs align with contested or revisionist narratives. To empirically assess these risks, we conducted a comparative audit of five LLMs—Claude, GPT, Llama, Mixtral, and Gemini—across four historical case studies: the Holodomor, the Holocaust, the Cambodian Genocide, and the genocide against the Tutsi in Rwanda. Each model was prompted with questions addressing common denialist claims in English and an alternative language relevant to each case (Ukrainian, German, Khmer, and French). Our findings reveal that while LLMs generally produce accurate responses for widely documented events like the Holocaust, significant inconsistencies and susceptibility to denialist framings are observed for more underrepresented cases like the Cambodian Genocide. The disparities highlight the influence of training data availability and the probabilistic nature of LLM responses on memory integrity. We conclude that while LLMs extend the concept of prosthetic memory, their unmoderated use risks reinforcing historical denialism, raising ethical concerns for (digital) memory preservation, and potentially challenging the advantageous role of technology associated with the original values of prosthetic memory.
Cite
ISO 690
ULLOA, Roberto, Eve M. ZUCKER, Daniel BULTMANN, David J. SIMON, Mykola MAKHORTYKH, 2025. From prosthetic memory to prosthetic denial : auditing whether large language models are prone to mass atrocity denialism. In: AI & Society. Springer. ISSN 0951-5666. eISSN 1435-5655. Available at: doi: 10.1007/s00146-025-02719-7

BibTeX
@article{Ulloa2025-11-10prost-76206,
title={From prosthetic memory to prosthetic denial : auditing whether large language models are prone to mass atrocity denialism},
year={2025},
doi={10.1007/s00146-025-02719-7},
issn={0951-5666},
journal={AI & Society},
author={Ulloa, Roberto and Zucker, Eve M. and Bultmann, Daniel and Simon, David J. and Makhortykh, Mykola}
}

RDF
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:bibo="http://purl.org/ontology/bibo/"
xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:void="http://rdfs.org/ns/void#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
<rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/76206">
<dc:contributor>Ulloa, Roberto</dc:contributor>
<foaf:homepage rdf:resource="http://localhost:8080/"/>
<dc:creator>Ulloa, Roberto</dc:creator>
<dc:creator>Zucker, Eve M.</dc:creator>
<dc:contributor>Zucker, Eve M.</dc:contributor>
<dc:creator>Makhortykh, Mykola</dc:creator>
<dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2026-02-18T10:25:30Z</dcterms:available>
<dcterms:abstract>The proliferation of large language models (LLMs) can influence how historical narratives are disseminated and perceived. This study explores the implications of LLMs’ responses on the representation of mass atrocity memory, examining whether generative AI systems contribute to prosthetic memory, i.e., mediated experiences of historical events, or to what we term “prosthetic denial,” the AI-mediated erasure or distortion of atrocity memories. We argue that LLMs function as interfaces that can elicit prosthetic memories and, therefore, act as experiential sites for memory transmission, but also introduce risks of denialism, particularly when their outputs align with contested or revisionist narratives. To empirically assess these risks, we conducted a comparative audit of five LLMs—Claude, GPT, Llama, Mixtral, and Gemini—across four historical case studies: the Holodomor, the Holocaust, the Cambodian Genocide, and the genocide against the Tutsi in Rwanda. Each model was prompted with questions addressing common denialist claims in English and an alternative language relevant to each case (Ukrainian, German, Khmer, and French). Our findings reveal that while LLMs generally produce accurate responses for widely documented events like the Holocaust, significant inconsistencies and susceptibility to denialist framings are observed for more underrepresented cases like the Cambodian Genocide. The disparities highlight the influence of training data availability and the probabilistic nature of LLM responses on memory integrity. We conclude that while LLMs extend the concept of prosthetic memory, their unmoderated use risks reinforcing historical denialism, raising ethical concerns for (digital) memory preservation, and potentially challenging the advantageous role of technology associated with the original values of prosthetic memory.</dcterms:abstract>
<dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43613"/>
<dc:rights>Attribution 4.0 International</dc:rights>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2026-02-18T10:25:30Z</dc:date>
<dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43613"/>
<dc:contributor>Bultmann, Daniel</dc:contributor>
<bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/76206"/>
<dc:contributor>Makhortykh, Mykola</dc:contributor>
<dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
<dc:contributor>Simon, David J.</dc:contributor>
<dc:creator>Bultmann, Daniel</dc:creator>
<dcterms:issued>2025-11-10</dcterms:issued>
<dc:language>eng</dc:language>
<dcterms:title>From prosthetic memory to prosthetic denial : auditing whether large language models are prone to mass atrocity denialism</dcterms:title>
<void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
<dc:creator>Simon, David J.</dc:creator>
</rdf:Description>
</rdf:RDF>