Publication: The Point of Blaming AI Systems
Abstract
As Christian List has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In this paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one might initially think, it can make a lot of sense to blame AI systems since, as we furthermore argue, many of the important functions that are fulfilled by blaming humans can also be served by blaming AI systems. The paper concludes that this result gives us a good pro tanto reason to extend our blame practices to AI systems.
Cite
ISO 690
ALTEHENGER, Hannah, Leonhard MENGES, 2024. The Point of Blaming AI Systems. In: Journal of Ethics and Social Philosophy. University of Southern California. 2024, 27(2), pp. 287-314. eISSN 1559-3061. Available at: doi: 10.26556/jesp.v27i2.3060

BibTeX
@article{Altehenger2024-05-23Point-75744,
title={The Point of Blaming AI Systems},
year={2024},
doi={10.26556/jesp.v27i2.3060},
number={2},
volume={27},
journal={Journal of Ethics and Social Philosophy},
pages={287--314},
author={Altehenger, Hannah and Menges, Leonhard}
}

RDF
<rdf:RDF
xmlns:dcterms="http://purl.org/dc/terms/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:bibo="http://purl.org/ontology/bibo/"
xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:void="http://rdfs.org/ns/void#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#" >
<rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/75744">
<dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/75744/4/Altehenger_2-deztfwezu8om4.pdf"/>
<dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2026-01-19T14:37:28Z</dc:date>
<dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-nc-nd/4.0/"/>
<dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
<dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2026-01-19T14:37:28Z</dcterms:available>
<dcterms:issued>2024-05-23</dcterms:issued>
<dc:creator>Menges, Leonhard</dc:creator>
<dc:contributor>Menges, Leonhard</dc:contributor>
<dc:language>eng</dc:language>
<dc:contributor>Altehenger, Hannah</dc:contributor>
<dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
<bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/75744"/>
<dcterms:title>The Point of Blaming AI Systems</dcterms:title>
<dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/40"/>
<dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/75744/4/Altehenger_2-deztfwezu8om4.pdf"/>
<dc:creator>Altehenger, Hannah</dc:creator>
<dcterms:abstract>As Christian List has recently argued, the increasing arrival of powerful AI systems that operate autonomously in high-stakes contexts creates a need for “future-proofing” our regulatory frameworks, i.e., for reassessing them in the face of these developments. One core part of our regulatory frameworks that dominates our everyday moral interactions is blame. Therefore, “future-proofing” our extant regulatory frameworks in the face of the increasing arrival of powerful AI systems requires, among other things, that we ask whether it makes sense to extend our blaming practices to these systems. In this paper, we argue for the admittedly surprising thesis that this question should be answered in the affirmative: contrary to what one might initially think, it can make a lot of sense to blame AI systems since, as we furthermore argue, many of the important functions that are fulfilled by blaming humans can also be served by blaming AI systems. The paper concludes that this result gives us a good pro tanto reason to extend our blame practices to AI systems.</dcterms:abstract>
</rdf:Description>
</rdf:RDF>