Publication:

Transparency in Interactive Feature-based Machine Learning : Challenges and Solutions

Files

Stoffel_2-u2rr3yg1on061.pdf (Size: 17.89 MB, Downloads: 381)

Date

2018


Open Access publication
Open Access Green

Publication type
Dissertation
Publication status
Published

Abstract

Machine learning is ubiquitous in everyday life; techniques from automated data analysis are used in application scenarios ranging from movie and route recommendations to the automated analysis of data in critical domains. To make appropriate use of such techniques, human trust must be calibrated to the trustworthiness of the machine learning techniques; research shows that without this calibration, disuse and misuse of machine learning techniques may occur. In this thesis, we elaborate on the problem of providing transparency in feature-based machine learning. In particular, we outline a number of challenges and present solutions for transparency, based on interactive visual interfaces that operate on the feature level. First, we elaborate on the connection between trust and transparency, outline the fundamental framework on which this thesis builds, and introduce the different audiences of transparency. We then present interactive solutions, based on visualization and visual analytics, for specific aspects of transparency. The first solution addresses the task of error analysis in supervised learning: the proposed visual analytics system combines a number of coordinated views that facilitate sensemaking and reasoning about the influence of single features or groups of features in the machine learning process. The second solution is a visualization technique tailored to the interactive, visual exploration of ambiguous feature sets that arise in certain machine learning scenarios. Statistical and semantic information is combined to present a clear picture of the targeted type of ambiguities, which can be interactively modified, eventually leading to a more specific feature set with fewer ambiguities. Afterward, we illustrate how the concepts of transparency and observable behavior can be used in a real-world scenario.
We contribute an interactive, visualization-driven system for exploring a spatial clustering that gives the human control over the feature set, feature weights, and associated hyperparameters. To observe different behaviors of the spatial clustering, an interactive visualization allows the comparison of different feature combinations and hyperparameters. In the same application domain, we contribute a visual analytics system that enables analysts to interactively visualize the output of a machine learning system in context with additional data that share a common spatial context. The system bridges the gap between the analysts operating a machine learning system and the users of its results, which in the targeted scenario are two different user groups. Our solutions show that both groups profit from insights into the feature set of the machine learning system. The thesis concludes with a reflection on further research directions and a summary of the results.

Subject (DDC)
004 Computer Science


Cite

ISO 690
STOFFEL, Florian, 2018. Transparency in Interactive Feature-based Machine Learning : Challenges and Solutions [Dissertation]. Konstanz: University of Konstanz
BibTex
@phdthesis{Stoffel2018Trans-44228,
  year={2018},
  title={Transparency in Interactive Feature-based Machine Learning : Challenges and Solutions},
  author={Stoffel, Florian},
  address={Konstanz},
  school={Universität Konstanz}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44228">
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44228/3/Stoffel_2-u2rr3yg1on061.pdf"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/44228"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/44228/3/Stoffel_2-u2rr3yg1on061.pdf"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Stoffel, Florian</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <dcterms:title>Transparency in Interactive Feature-based Machine Learning : Challenges and Solutions</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-10T08:37:38Z</dc:date>
    <dc:creator>Stoffel, Florian</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2018-12-10T08:37:38Z</dcterms:available>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dc:rights>terms-of-use</dc:rights>
    <dcterms:abstract xml:lang="eng">Machine learning is ubiquitous in everyday life; techniques from automated data analysis are used in application scenarios ranging from movie and route recommendations to the automated analysis of data in critical domains. To make appropriate use of such techniques, human trust must be calibrated to the trustworthiness of the machine learning techniques; research shows that without this calibration, disuse and misuse of machine learning techniques may occur. In this thesis, we elaborate on the problem of providing transparency in feature-based machine learning. In particular, we outline a number of challenges and present solutions for transparency, based on interactive visual interfaces that operate on the feature level. First, we elaborate on the connection between trust and transparency, outline the fundamental framework on which this thesis builds, and introduce the different audiences of transparency. We then present interactive solutions, based on visualization and visual analytics, for specific aspects of transparency. The first solution addresses the task of error analysis in supervised learning: the proposed visual analytics system combines a number of coordinated views that facilitate sensemaking and reasoning about the influence of single features or groups of features in the machine learning process. The second solution is a visualization technique tailored to the interactive, visual exploration of ambiguous feature sets that arise in certain machine learning scenarios. Statistical and semantic information is combined to present a clear picture of the targeted type of ambiguities, which can be interactively modified, eventually leading to a more specific feature set with fewer ambiguities. Afterward, we illustrate how the concepts of transparency and observable behavior can be used in a real-world scenario.
We contribute an interactive, visualization-driven system for exploring a spatial clustering that gives the human control over the feature set, feature weights, and associated hyperparameters. To observe different behaviors of the spatial clustering, an interactive visualization allows the comparison of different feature combinations and hyperparameters. In the same application domain, we contribute a visual analytics system that enables analysts to interactively visualize the output of a machine learning system in context with additional data that share a common spatial context. The system bridges the gap between the analysts operating a machine learning system and the users of its results, which in the targeted scenario are two different user groups. Our solutions show that both groups profit from insights into the feature set of the machine learning system. The thesis concludes with a reflection on further research directions and a summary of the results.</dcterms:abstract>
    <dcterms:issued>2018</dcterms:issued>
  </rdf:Description>
</rdf:RDF>
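The RDF record above uses the Dublin Core vocabularies (`dc`, `dcterms`) under an `rdf:Description` element, so its fields can be read with any XML parser that handles namespaces. As a minimal sketch (using only the Python standard library, with the snippet reduced to the three fields it extracts):

```python
# Minimal sketch: reading fields from the KOPS RDF/XML record above,
# using only the Python standard library. The namespace URIs match
# those declared in the record itself.
import xml.etree.ElementTree as ET

RDF_SNIPPET = """<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/44228">
    <dcterms:title>Transparency in Interactive Feature-based Machine Learning : Challenges and Solutions</dcterms:title>
    <dc:creator>Stoffel, Florian</dc:creator>
    <dcterms:issued>2018</dcterms:issued>
  </rdf:Description>
</rdf:RDF>"""

# Prefix-to-URI mapping passed to ElementTree's find/findtext calls.
NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def extract_record(xml_text):
    """Return title, creator, and year of a KOPS RDF record as a dict."""
    root = ET.fromstring(xml_text)
    desc = root.find("rdf:Description", NS)
    return {
        "title": desc.findtext("dcterms:title", namespaces=NS),
        "creator": desc.findtext("dc:creator", namespaces=NS),
        "issued": desc.findtext("dcterms:issued", namespaces=NS),
    }

record = extract_record(RDF_SNIPPET)
print(record["creator"], record["issued"])
```

A full RDF toolkit (e.g. rdflib) would also resolve the `rdf:resource` links; for flat Dublin Core fields like these, plain namespace-aware XML parsing suffices.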


Date of the doctoral examination

October 5, 2018
University dissertation note
Konstanz, Univ., Diss., 2018
University bibliography
Yes