Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability

dc.contributor.authorSpinner, Thilo
dc.date.accessioned2024-09-18T10:15:53Z
dc.date.available2024-09-18T10:15:53Z
dc.date.issued2024
dc.description.abstractDespite the significant advancements in deep learning, understanding the inner workings of such models remains a considerable challenge. While this opacity hinders trustworthiness, reliability, and fairness, it also leaves developers unable to identify and mitigate model defects effectively. Methods for explainable artificial intelligence (XAI) have been proposed to mitigate these issues but remain underutilized due to their limited accessibility within typical model development workflows.
To address these issues, we first introduce explAIner, a visual analytics framework that structures the model explanation process and seamlessly integrates XAI techniques into the developer's workflow to improve model understanding, diagnosis, and refinement. The explAIner system enables developers to quickly identify issues with the model's predictions by applying state-of-the-art XAI methods in a comprehensive interface.
While XAI techniques can expose shortcomings in the model's behavior, they often fail to draw connections between errors and their root causes in the model architecture, leaving developers reliant on guesswork and trial and error. Addressing this gap, we introduce iNNspector, a visual interactive framework that establishes theoretical foundations and employs novel mechanisms for the debugging of deep learning models. By correlating architecture and data, the iNNspector system aids developers in pinpointing and rectifying model flaws by providing interactive techniques to explore architectural entities and apply tools to them to investigate underlying data.
The challenges of model debugging shift when dealing with Large Language Models (LLMs), where transformers primarily define the model architecture. Instead, recent advancements suggest that data quality substantially influences LLM performance, moving the debugging focus from the architecture toward the model's inputs and outputs. In existing interfaces, when generating text, the model's outputs are typically presented as running text, concealing the underlying search process, hindering the understanding of uncertainties in the outputs, and omitting viable alternatives. To address this, we present generAItor, which visually unfolds the beam search process, empowering users and computational linguists to comprehend, explore, and refine LLM outputs.
Overall, this thesis contributes innovative visual approaches to assist model developers in understanding and debugging machine-learning models. It presents techniques for enhanced explanation, systematic exploration, and interactive engagement with the model's architecture and decision-making processes.
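The beam search procedure that generAItor visually unfolds can be illustrated with a minimal, self-contained sketch. This is not code from the thesis: the toy per-step token scores, the `beam_search` function, and the vocabulary are all illustrative, chosen only to show how a decoder keeps several scored hypotheses in parallel while a running-text interface surfaces just the top one.

```python
import math

def beam_search(step_scores, beam_width=2):
    """Toy beam search.

    step_scores: list over decoding steps; each entry maps token -> log-prob.
    Returns the final beams as (token_sequence, cumulative_log_prob),
    best first -- including the alternatives a plain running-text
    output would conceal.
    """
    beams = [([], 0.0)]  # start with one empty hypothesis
    for scores in step_scores:
        # Expand every surviving hypothesis with every candidate token.
        candidates = [
            (seq + [tok], lp + tok_lp)
            for seq, lp in beams
            for tok, tok_lp in scores.items()
        ]
        # Prune: keep only the beam_width highest-scoring hypotheses.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:beam_width]
    return beams

# Two decoding steps over a toy vocabulary (probabilities are made up).
steps = [
    {"the": math.log(0.6), "a": math.log(0.4)},
    {"cat": math.log(0.7), "dog": math.log(0.3)},
]
beams = beam_search(steps, beam_width=2)
# beams[0] is the sequence a text-only interface would print;
# beams[1] is a viable alternative it would hide.
```

With these toy scores the top beam is `["the", "cat"]` (probability 0.42) and the runner-up is `["a", "cat"]` (probability 0.28); unfolding the full beam, as generAItor does, makes such near-miss alternatives and their relative scores visible.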
dc.description.versionpublisheddeu
dc.identifier.ppn1902915356
dc.identifier.urihttps://kops.uni-konstanz.de/handle/123456789/70810
dc.language.isoeng
dc.rightsterms-of-use
dc.rights.urihttps://rightsstatements.org/page/InC/1.0/
dc.subject.ddc004
dc.titleVisual, Interactive Deep Model Debugging : Supporting AI Development and Explainabilityeng
dc.typeDOCTORAL_THESIS
dspace.entity.typePublication
kops.citation.bibtex
@phdthesis{Spinner2024Visua-70810,
  year={2024},
  title={Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability},
  author={Spinner, Thilo},
  address={Konstanz},
  school={Universität Konstanz}
}
kops.citation.iso690SPINNER, Thilo, 2024. Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability [Dissertation]. Konstanz: Universität Konstanzdeu
kops.citation.iso690SPINNER, Thilo, 2024. Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability [Dissertation]. Konstanz: University of Konstanzeng
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/70810">
    <dc:creator>Spinner, Thilo</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/70810"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:rights>terms-of-use</dc:rights>
    <dc:contributor>Spinner, Thilo</dc:contributor>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70810/4/Spinner_2-gejp1cwe65xm0.pdf"/>
    <dcterms:title>Visual, Interactive Deep Model Debugging : Supporting AI Development and Explainability</dcterms:title>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:abstract>Despite the significant advancements in deep learning, understanding the inner workings of such models remains a considerable challenge. While this opacity hinders trustworthiness, reliability, and fairness, it also leaves developers unable to identify and mitigate model defects effectively. Methods for explainable artificial intelligence (XAI) have been proposed to mitigate these issues but remain underutilized due to their limited accessibility within typical model development workflows.
To address these issues, we first introduce explAIner, a visual analytics framework that structures the model explanation process and seamlessly integrates XAI techniques into the developer's workflow to improve model understanding, diagnosis, and refinement. The explAIner system enables developers to quickly identify issues with the model's predictions by applying state-of-the-art XAI methods in a comprehensive interface.
While XAI techniques can expose shortcomings in the model's behavior, they often fail to draw connections between errors and their root causes in the model architecture, leaving developers reliant on guesswork and trial and error. Addressing this gap, we introduce iNNspector, a visual interactive framework that establishes theoretical foundations and employs novel mechanisms for the debugging of deep learning models. By correlating architecture and data, the iNNspector system aids developers in pinpointing and rectifying model flaws by providing interactive techniques to explore architectural entities and apply tools to them to investigate underlying data.
The challenges of model debugging shift when dealing with Large Language Models (LLMs), where transformers primarily define the model architecture. Instead, recent advancements suggest that data quality substantially influences LLM performance, moving the debugging focus from the architecture toward the model's inputs and outputs. In existing interfaces, when generating text, the model's outputs are typically presented as running text, concealing the underlying search process, hindering the understanding of uncertainties in the outputs, and omitting viable alternatives. To address this, we present generAItor, which visually unfolds the beam search process, empowering users and computational linguists to comprehend, explore, and refine LLM outputs.
Overall, this thesis contributes innovative visual approaches to assist model developers in understanding and debugging machine-learning models. It presents techniques for enhanced explanation, systematic exploration, and interactive engagement with the model's architecture and decision-making processes.</dcterms:abstract>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-09-18T10:15:53Z</dcterms:available>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-09-18T10:15:53Z</dc:date>
    <dcterms:rights rdf:resource="https://rightsstatements.org/page/InC/1.0/"/>
    <dcterms:issued>2024</dcterms:issued>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/70810/4/Spinner_2-gejp1cwe65xm0.pdf"/>
  </rdf:Description>
</rdf:RDF>
kops.date.examination2024-05-24
kops.date.yearDegreeGranted2024
kops.description.openAccessopenaccessgreen
kops.flag.knbibliographyfalse
kops.identifier.nbnurn:nbn:de:bsz:352-2-gejp1cwe65xm0
relation.isAuthorOfPublicationb7a5ad07-06be-4278-a8fc-0656f6908915
relation.isAuthorOfPublication.latestForDiscoveryb7a5ad07-06be-4278-a8fc-0656f6908915

Files

Original bundle

Name: Spinner_2-gejp1cwe65xm0.pdf
Size: 14.02 MB
Format: Adobe Portable Document Format
Downloads: 1015

License bundle

Name: license.txt
Size: 3.96 KB
Format: Item-specific license agreed upon to submission
Downloads: 0