Publication:

Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability

Files

Spinner_2-gejp1cwe65xm0.pdf
Size: 14.02 MB · Downloads: 164

Date

2024

Authors
Spinner, Thilo

Open Access Publication
Open Access Green

Publication Type
Dissertation
Publication Status
Published

Abstract

Despite the significant advancements in deep learning, understanding the inner workings of such models remains a considerable challenge. While this opacity hinders trustworthiness, reliability, and fairness, it also leaves developers unable to identify and mitigate model defects effectively. Methods for explainable artificial intelligence (XAI) have been proposed to mitigate these issues but remain underutilized due to their limited accessibility within typical model development workflows.

To address these issues, we first introduce explAIner, a visual analytics framework that structures the model explanation process and seamlessly integrates XAI techniques into the developer's workflow to improve model understanding, diagnosis, and refinement. The explAIner system enables developers to quickly identify issues with the model's predictions by applying state-of-the-art XAI methods in a comprehensive interface.

While XAI techniques can expose shortcomings in the model's behavior, they often fail to draw connections between errors and their root causes in the model architecture, leaving developers reliant on guesswork and trial and error. Addressing this gap, we introduce iNNspector, a visual interactive framework that establishes theoretical foundations and employs novel mechanisms for the debugging of deep learning models. By correlating architecture and data, the iNNspector system aids developers in pinpointing and rectifying model flaws by providing interactive techniques to explore architectural entities and apply tools to them to investigate underlying data.

The challenges of model debugging shift when dealing with Large Language Models (LLMs), where transformers primarily define the model architecture. Instead, recent advancements suggest that data quality substantially influences LLM performance, moving the debugging focus from the architecture toward the model's inputs and outputs. In existing interfaces, when generating text, the model's outputs are typically presented as running text, concealing the underlying search process, hindering the understanding of uncertainties in the outputs, and omitting viable alternatives. To address this, we present generAItor, which visually unfolds the beam search process, empowering users and computational linguists to comprehend, explore, and refine LLM outputs.

Overall, this thesis contributes innovative visual approaches to assist model developers in understanding and debugging machine-learning models. It presents techniques for enhanced explanation, systematic exploration, and interactive engagement with the model's architecture and decision-making processes.
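The beam-search procedure that such tools unfold visually can be sketched in a few lines. The toy vocabulary, the `NEXT_LOGPROBS` transition table, and the `beam_search` function below are illustrative assumptions for this sketch, not the system's actual implementation, which operates on a real LLM's token distributions:

```python
import math

# Toy next-token log-probabilities, keyed by the last generated token.
# This stands in for a real LLM's softmax output (illustrative values only).
NEXT_LOGPROBS = {
    "<s>": {"the": math.log(0.6), "a": math.log(0.4)},
    "the": {"cat": math.log(0.5), "dog": math.log(0.3), "end": math.log(0.2)},
    "a":   {"cat": math.log(0.4), "dog": math.log(0.4), "end": math.log(0.2)},
    "cat": {"end": math.log(1.0)},
    "dog": {"end": math.log(1.0)},
    "end": {},
}

def beam_search(beam_width=2, max_len=4):
    """Expand hypotheses breadth-first, keeping only the top `beam_width`
    by cumulative log-probability. The pruned-away alternatives are exactly
    what running-text output conceals from the user."""
    beams = [(["<s>"], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            expansions = NEXT_LOGPROBS.get(seq[-1], {})
            if not expansions:            # finished hypothesis: carry over
                candidates.append((seq, score))
                continue
            for tok, logprob in expansions.items():
                candidates.append((seq + [tok], score + logprob))
        # Prune: keep only the highest-scoring hypotheses (the "beam").
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams

for seq, score in beam_search():
    print(" ".join(seq[1:]), round(score, 3))
```

Visualizing the intermediate `candidates` lists as a tree, rather than printing only the final winner, is the core idea: the user sees which alternatives were scored, how close their probabilities were, and where the search committed to a branch.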

Subject Area (DDC)
004 Computer Science

Cite

ISO 690
SPINNER, Thilo, 2024. Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability [Dissertation]. Konstanz: Universität Konstanz
BibTex
@phdthesis{Spinner2024Visua-70810,
  year={2024},
  title={Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability},
  author={Spinner, Thilo},
  address={Konstanz},
  school={Universität Konstanz}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/70810">
    <dc:creator>Spinner, Thilo</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/70810"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:rights>terms-of-use</dc:rights>
    <dc:contributor>Spinner, Thilo</dc:contributor>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70810/4/Spinner_2-gejp1cwe65xm0.pdf"/>
    <dcterms:title>Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability</dcterms:title>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:abstract>Despite the significant advancements in deep learning, understanding the inner workings of such models remains a considerable challenge. While this opacity hinders trustworthiness, reliability, and fairness, it also leaves developers unable to identify and mitigate model defects effectively. Methods for explainable artificial intelligence (XAI) have been proposed to mitigate these issues but remain underutilized due to their limited accessibility within typical model development workflows.
To address these issues, we first introduce explAIner, a visual analytics framework that structures the model explanation process and seamlessly integrates XAI techniques into the developer's workflow to improve model understanding, diagnosis, and refinement. The explAIner system enables developers to quickly identify issues with the model's predictions by applying state-of-the-art XAI methods in a comprehensive interface.
While XAI techniques can expose shortcomings in the model's behavior, they often fail to draw connections between errors and their root causes in the model architecture, leaving developers reliant on guesswork and trial and error. Addressing this gap, we introduce iNNspector, a visual interactive framework that establishes theoretical foundations and employs novel mechanisms for the debugging of deep learning models. By correlating architecture and data, the iNNspector system aids developers in pinpointing and rectifying model flaws by providing interactive techniques to explore architectural entities and apply tools to them to investigate underlying data.
The challenges of model debugging shift when dealing with Large Language Models (LLMs), where transformers primarily define the model architecture. Instead, recent advancements suggest that data quality substantially influences LLM performance, moving the debugging focus from the architecture toward the model's inputs and outputs. In existing interfaces, when generating text, the model's outputs are typically presented as running text, concealing the underlying search process, hindering the understanding of uncertainties in the outputs, and omitting viable alternatives. To address this, we present generAItor, which visually unfolds the beam search process, empowering users and computer linguists to comprehend, explore, and refine LLM outputs.
Overall, this thesis contributes innovative visual approaches to assist model developers in understanding and debugging machine-learning models. It presents techniques for enhanced explanation, systematic exploration, and interactive engagement with the model's architecture and decision-making processes.</dcterms:abstract>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-09-18T10:15:53Z</dcterms:available>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-09-18T10:15:53Z</dc:date>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:issued>2024</dcterms:issued>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70810/4/Spinner_2-gejp1cwe65xm0.pdf"/>
  </rdf:Description>
</rdf:RDF>

Date of Doctoral Examination
May 24, 2024
Thesis Note
Konstanz, Univ., Diss., 2024
University Bibliography
No