Publication: A Survey of Human‐Centered Evaluations in Human‐Centered Machine Learning
Files: Sperrle_2-kic64u4cegeh4.pdf
Date: 2021
Authors: Sperrle, Fabian; El-Assady, Mennatallah; Guo, Grace; Borgo, Rita; Chau, Duen Horng; Endert, Alex; Keim, Daniel A.
Journal ISSN: 0167-7055
Electronic ISSN: 1467-8659
Publisher: Wiley
URI (citable link): https://kops.uni-konstanz.de/handle/123456789/54160
DOI (citable link): https://doi.org/10.1111/cgf.14329
License: Attribution 4.0 International (http://creativecommons.org/licenses/by/4.0/)
Open Access publication: Yes
Publication type: Journal article
Published in: Computer Graphics Forum; 40(3), pp. 543-567

Abstract
Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.
Cite
ISO 690
SPERRLE, Fabian, Mennatallah EL-ASSADY, Grace GUO, Rita BORGO, Duen Horng CHAU, Alex ENDERT, Daniel A. KEIM, 2021. A Survey of Human‐Centered Evaluations in Human‐Centered Machine Learning. In: Computer Graphics Forum. Wiley. 2021, 40(3), pp. 543-567. ISSN 0167-7055. eISSN 1467-8659. Available under: doi: 10.1111/cgf.14329

BibTeX
@article{Sperrle2021Surve-54160,
  year    = {2021},
  doi     = {10.1111/cgf.14329},
  title   = {A Survey of Human‐Centered Evaluations in Human‐Centered Machine Learning},
  number  = {3},
  volume  = {40},
  issn    = {0167-7055},
  journal = {Computer Graphics Forum},
  pages   = {543--567},
  author  = {Sperrle, Fabian and El-Assady, Mennatallah and Guo, Grace and Borgo, Rita and Chau, Duen Horng and Endert, Alex and Keim, Daniel A.}
}
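The BibTeX entry above is plain text and can be consumed programmatically. A minimal sketch using Python's bibtexparser package (assuming the v1 API, i.e. bibtexparser.loads; the file name is illustrative):

import bibtexparser  # v1 API: loads() returns a BibDatabase

# Read the entry above from a .bib file (file name is illustrative).
with open("Sperrle2021.bib") as f:
    db = bibtexparser.loads(f.read())

entry = db.entries[0]  # each entry is a dict with lowercase field keys
print(entry["title"])
print(entry["doi"], "-", entry["journal"], entry["volume"], entry["pages"])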
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54160">
    <dc:creator>Sperrle, Fabian</dc:creator>
    <dcterms:title>A Survey of Human‐Centered Evaluations in Human‐Centered Machine Learning</dcterms:title>
    <dc:contributor>Sperrle, Fabian</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Endert, Alex</dc:creator>
    <dc:creator>Keim, Daniel A.</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T13:52:30Z</dcterms:available>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T13:52:30Z</dc:date>
    <dc:contributor>Endert, Alex</dc:contributor>
    <dc:contributor>Chau, Duen Horng</dc:contributor>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54160/1/Sperrle_2-kic64u4cegeh4.pdf"/>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54160"/>
    <dc:creator>Borgo, Rita</dc:creator>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54160/1/Sperrle_2-kic64u4cegeh4.pdf"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Borgo, Rita</dc:contributor>
    <dc:creator>El-Assady, Mennatallah</dc:creator>
    <dc:creator>Chau, Duen Horng</dc:creator>
    <dc:contributor>Keim, Daniel A.</dc:contributor>
    <dc:contributor>El-Assady, Mennatallah</dc:contributor>
    <dcterms:abstract xml:lang="eng">Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human-centered machine learning. We particularly focus on human-related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human-computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.</dcterms:abstract>
    <dcterms:issued>2021</dcterms:issued>
    <dc:creator>Guo, Grace</dc:creator>
    <dc:contributor>Guo, Grace</dc:contributor>
    <dc:rights>Attribution 4.0 International</dc:rights>
  </rdf:Description>
</rdf:RDF>
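The block above is standard RDF/XML, so the record can be loaded with any RDF library. A minimal sketch using Python's rdflib (assuming the XML above is saved as record.rdf; the file name is illustrative):

from rdflib import Graph, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")
DCTERMS = Namespace("http://purl.org/dc/terms/")

# Parse the RDF/XML record shown above.
g = Graph()
g.parse("record.rdf", format="xml")

# The record describes a single resource; extract its title and creators.
title = g.value(predicate=DCTERMS.title)
creators = sorted(str(o) for o in g.objects(predicate=DC.creator))

print(title)
for creator in creators:
    print(creator)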