Publication:

Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization

Files

There are no files associated with this document.

Date

2024

Authors

Wall, Emily
Matzen, Laura
El-Assady, Mennatallah
Masters, Peta
Hosseinpour, Helia
Endert, Alex
Borgo, Rita
Chau, Polo
Schupp, Harald T.
Strobelt, Hendrik

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

URI (citable link)
ArXiv-ID

International patent number

Research funding information

Project

Open access publication
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Contribution to a conference proceedings
Publication status
Published

Published in

2024 IEEE 17th Pacific Visualization Conference, PacificVis 2024, Tokyo, Japan 23-26 April 2024 : Proceedings. Los Alamitos, CA [et al.]: IEEE, 2024, pp. 22-31. ISBN 979-8-3503-9380-4. Available under: doi: 10.1109/pacificvis60374.2024.00012

Abstract

Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with "knobs". Turning a knob too far in one direction may result in under-trust, too far in the other, over-trust or, turned up further still, in a confusing distortion. While the designs or so-called "knobs" are not inherently evil, they can be misused or used in an adversarial context and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as "trust junk." From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or "evil," and distill our findings into observed pitfalls for the responsible design of human-AI systems.

Abstract in another language

Subject area (DDC)
150 Psychology

Keywords

Conference

2024 IEEE 17th Pacific Visualization Conference (PacificVis), April 23-26, 2024, Tokyo, Japan
Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
WALL, Emily, Laura MATZEN, Mennatallah EL-ASSADY, Peta MASTERS, Helia HOSSEINPOUR, Alex ENDERT, Rita BORGO, Polo CHAU, Harald T. SCHUPP, Hendrik STROBELT, 2024. Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization. 2024 IEEE 17th Pacific Visualization Conference (PacificVis). Tokyo, Japan, April 23-26, 2024. In: 2024 IEEE 17th Pacific Visualization Conference, PacificVis 2024, Tokyo, Japan 23-26 April 2024 : Proceedings. Los Alamitos, CA [et al.]: IEEE, 2024, pp. 22-31. ISBN 979-8-3503-9380-4. Available under: doi: 10.1109/pacificvis60374.2024.00012
BibTex
@inproceedings{Wall2024-04-23Trust-70093,
  year={2024},
  doi={10.1109/pacificvis60374.2024.00012},
  title={Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization},
  isbn={979-8-3503-9380-4},
  publisher={IEEE},
  address={Los Alamitos, CA ; u.a.},
  booktitle={2024 IEEE 17th Pacific Visualization Conference, PacificVis 2024, Tokyo, Japan 23-26 April 2024 : Proceedings},
  pages={22--31},
  author={Wall, Emily and Matzen, Laura and El-Assady, Mennatallah and Masters, Peta and Hosseinpour, Helia and Endert, Alex and Borgo, Rita and Chau, Polo and Schupp, Harald T. and Strobelt, Hendrik}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/70093">
    <dc:contributor>Schupp, Harald T.</dc:contributor>
    <dcterms:title>Trust Junk and Evil Knobs: Calibrating Trust in AI Visualization</dcterms:title>
    <dcterms:abstract>Many papers make claims about specific visualization techniques that are said to enhance or calibrate trust in AI systems. But a design choice that enhances trust in some cases appears to damage it in others. In this paper, we explore this inherent duality through an analogy with "knobs". Turning a knob too far in one direction may result in under-trust, too far in the other, over-trust or, turned up further still, in a confusing distortion. While the designs or so-called "knobs" are not inherently evil, they can be misused or used in an adversarial context and thereby manipulated to mislead users or promote unwarranted levels of trust in AI systems. When a visualization that has no meaningful connection with the underlying model or data is employed to enhance trust, we refer to the result as "trust junk." From a review of 65 papers, we identify nine commonly made claims about trust calibration. We synthesize them into a framework of knobs that can be used for good or "evil," and distill our findings into observed pitfalls for the responsible design of human-AI systems.</dcterms:abstract>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-06-11T11:18:03Z</dc:date>
    <dc:contributor>Hosseinpour, Helia</dc:contributor>
    <dc:contributor>Masters, Peta</dc:contributor>
    <dc:contributor>El-Assady, Mennatallah</dc:contributor>
    <dcterms:issued>2024-04-23</dcterms:issued>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-06-11T11:18:03Z</dcterms:available>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Masters, Peta</dc:creator>
    <dc:creator>El-Assady, Mennatallah</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/70093"/>
    <dc:contributor>Wall, Emily</dc:contributor>
    <dc:creator>Hosseinpour, Helia</dc:creator>
    <dc:creator>Schupp, Harald T.</dc:creator>
    <dc:creator>Matzen, Laura</dc:creator>
    <dc:creator>Strobelt, Hendrik</dc:creator>
    <dc:creator>Chau, Polo</dc:creator>
    <dc:contributor>Borgo, Rita</dc:contributor>
    <dc:contributor>Matzen, Laura</dc:contributor>
    <dc:creator>Borgo, Rita</dc:creator>
    <dc:creator>Endert, Alex</dc:creator>
    <dc:contributor>Endert, Alex</dc:contributor>
    <dc:contributor>Chau, Polo</dc:contributor>
    <dc:contributor>Strobelt, Hendrik</dc:contributor>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Wall, Emily</dc:creator>
  </rdf:Description>
</rdf:RDF>

Internal note

Contact
URL of the original publication

URL check date

Date of dissertation examination

Funding type

Comment on the publication

Alliance licence
Corresponding authors from the University of Konstanz present
International co-authors
University bibliography
Yes
Peer reviewed