Publication:

Levels of Explainability for Human-AI Interaction in Visual Text Analytics

Files

El-Assady_2-iqv9n4uddwkx5.pdf (Size: 46.39 MB)

Date

2023

License
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0), http://creativecommons.org/licenses/by-nc-nd/4.0/

Open access publication

Publication type
Dissertation
Publication status
Published

Abstract

The increasing demand for accountable decision-making with complex artificial intelligence systems has accelerated the need to understand, diagnose, and refine their underlying machine learning models. Hence, these three tasks have become the focus of explainable artificial intelligence research. This dissertation examines how we can make the refinement and optimization of content and topic modeling accessible to different stakeholders. Based on a theoretical framework for the explainability of artificial intelligence, we derive three Explainability Spaces that are tailored to four user groups. The Explainability Spaces span a continuous spectrum and inform design decisions for interactive, explainable visual analytics systems. We place machine learning experts and domain experts at opposite ends of the spectrum, in the Model Space and the Semantic Space, respectively, and position model analysts and domain analysts in the middle ground, in the Input-Output Space.

The first main content chapter of this dissertation presents a comprehensive conceptual framework for the central challenge of designing tailored mixed-initiative approaches to interactive and explainable machine learning. This framework combines aspects from the analysis domain and the machine learning model to introduce interaction and explainability processes and strategies. In addition, we propose and discuss the Explainability Spaces as a targeted design rationale for tailoring the tasks of understanding, diagnosis, and refinement to the main stakeholders of explainable artificial intelligence. Based on the proposed design space, we derive four instantiations that demonstrate the applicability of this framework. Hence, this dissertation’s remaining content chapters present four techniques, each utilizing novel intelligence augmentation paradigms to explain, diagnose, and refine comparable models.

For domain experts, we present a technique in the Semantic Space that allows them to externalize their domain knowledge while remaining model-agnostic. This approach maintains two content-representation hierarchies that operate within a shared vector space, enabling experts to perform guided machine teaching for topic-modeling refinement. Using word-embedding projections, experts can refine concept regions independent of a particular document collection or topic model. Their interactions directly affect the semantic relations of the underlying vectors, which, in turn, induce changes in the topic modeling.
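
As a loose illustration of this idea, the following Python sketch shows one way such a concept-region refinement could operate on word vectors. The function name and the interpolation scheme are assumptions for demonstration, not the dissertation's implementation.

import numpy as np

def refine_concept_region(embeddings, concept_words, strength=0.5):
    # Pull the vectors of the expert-selected words toward their shared
    # centroid: strength 0 keeps them unchanged, 1 collapses them onto it.
    centroid = np.mean([embeddings[w] for w in concept_words], axis=0)
    for w in concept_words:
        embeddings[w] = (1 - strength) * embeddings[w] + strength * centroid
    return embeddings

# Example: an expert groups two terms they consider synonymous in their
# domain; topic models built on these vectors later shift accordingly.
rng = np.random.default_rng(0)
vectors = {"court": rng.random(50), "tribunal": rng.random(50)}
vectors = refine_concept_region(vectors, ["court", "tribunal"], strength=0.7)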

For the domain analysts, we present a technique designed in the Input-Output Space (with a focus on semantics). It refines topic models through a user-driven, progressive reinforcement learning process that does not require a deep understanding of the underlying algorithms. The system initializes two model configurations based on a parameter-space analysis that enhances document separability and lets the models compete for the analyst’s satisfaction. Using automatic topic matching, topic summaries, and parameter distribution views, analysts can investigate the modeling results before providing document-based relevance feedback. This feedback is used to distill a user-endorsed topic distribution, allowing the system to train new model instances and restart the feedback process to iteratively converge on a refinement.
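
The following minimal Python sketch illustrates such a competing-models feedback loop. The topic model and the analyst's judgment are replaced by stand-ins, and all names and the toy retraining logic are illustrative assumptions rather than the system's actual algorithm.

import random

def train_model(documents, num_topics, seed):
    # Stand-in for a real topic model: assigns one topic per document.
    rng = random.Random(seed)
    return {doc: rng.randrange(num_topics) for doc in documents}

def relevance_feedback(model_a, model_b, documents):
    # Stand-in for the analyst, who endorses one model's assignment per
    # document after inspecting topic summaries and matching views.
    return {doc: random.choice((model_a, model_b))[doc] for doc in documents}

def refine(documents, rounds=3):
    configs = [(10, 1), (20, 2)]  # two configurations compete for approval
    endorsed = {}
    for step in range(rounds):
        model_a = train_model(documents, *configs[0])
        model_b = train_model(documents, *configs[1])
        # Distill a user-endorsed topic distribution from the feedback,
        # then train fresh competitors around it and repeat.
        endorsed = relevance_feedback(model_a, model_b, documents)
        num_topics = max(endorsed.values()) + 1
        configs = [(num_topics, 2 * step), (num_topics, 2 * step + 1)]
    return endorsed

print(refine(["doc%d" % i for i in range(5)]))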

For the model analysts, we present a technique designed in the Input-Output Space (with a focus on the model). It untangles reply chains by combining supervised and unsupervised machine learning models to capture thematic content relations. This approach enables analysts to create and compare various reconstruction models, enriching them with user-defined queries and rule-based heuristics. To investigate the models’ inner workings and performance, we visualize the model decision spaces, including all candidate relations considered. The proposed system enables model analysts to understand and diagnose models and to fine-tune them using a rich set of computed and user-derived features.
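
To make the combination of learned scores and rule-based heuristics concrete, here is a small illustrative sketch; the features, weights, and data are invented for demonstration and do not reflect the dissertation's feature set.

def score_candidate(parent, reply, model_score):
    # Combine a learned thematic-similarity score with user-defined
    # rule-based heuristics; the weights here are arbitrary examples.
    score = model_score
    if parent["author"] == reply["author"]:
        score -= 0.3  # heuristic: authors rarely reply to themselves
    if parent["author"] in reply["text"]:
        score += 0.4  # heuristic: reply explicitly mentions the author
    return score

def untangle(messages, model_scores):
    # Attach each message to its best-scoring earlier candidate parent,
    # keeping every candidate relation available for inspection.
    chains = {}
    for i, reply in enumerate(messages[1:], start=1):
        candidates = [
            (score_candidate(messages[j], reply, model_scores[(j, i)]), j)
            for j in range(i)
        ]
        chains[i] = max(candidates)[1]
    return chains

messages = [
    {"author": "ann", "text": "Shall we split the chapter in two?"},
    {"author": "bob", "text": "ann: yes, let us split it."},
    {"author": "cat", "text": "I disagree with splitting."},
]
model_scores = {(0, 1): 0.6, (0, 2): 0.5, (1, 2): 0.4}
print(untangle(messages, model_scores))  # {1: 0, 2: 0}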

Lastly, for the model experts, we present an intelligible topic modeling technique designed in the Model Space. It relies on an incremental hierarchical topic modeling algorithm to visualize the algorithmic decision-making process. Model experts can use this system to understand the model's inner workings across different parameters. To effectively assess the potential consequences of human interventions, we introduce speculative execution as a paradigm for creating user-steerable preview mechanisms. Whenever the measured model quality deteriorates, the system automatically triggers a speculative execution of various optimization strategies and requests external intervention. Experts compare the proposed optimizations to the current model state, previewing their effect on the next model iterations before applying one.
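
A minimal sketch of this speculative-execution paradigm follows, using assumed names and a toy quality measure; the real system's optimization strategies and quality metrics are, of course, richer.

import copy

def speculate(state, quality, strategies, threshold=0.8):
    # When measured quality drops below the threshold, execute every
    # optimization strategy on its own copy of the model state so the
    # expert can preview outcomes before applying one.
    if quality(state) >= threshold:
        return []  # no external intervention requested
    previews = []
    for name, strategy in strategies.items():
        candidate = strategy(copy.deepcopy(state))
        previews.append((name, candidate, quality(candidate)))
    return previews

# Toy usage: the "model state" is a list of topic keyword sets, and
# quality rewards well-separated topics.
state = [{"law", "court"}, {"law", "judge"}]
quality = lambda s: 1.0 if len(s) < 2 else 1 - len(s[0] & s[1]) / len(s[0] | s[1])
strategies = {
    "merge_topics": lambda s: [s[0] | s[1]],
    "remove_overlap": lambda s: [s[0] - s[1], s[1] - s[0]],
}
for name, candidate, score in speculate(state, quality, strategies):
    print(name, candidate, round(score, 2))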

Overall, this dissertation presents four techniques that contribute to novel visual analytics workflows that follow tailored interaction and explainability methods. They are instantiations of the proposed Explainability Spaces. All techniques are designed to address the same tasks: understanding, diagnosing, and refining topic and thematic content models. We have conducted comparable evaluations of all techniques to assess their effectiveness across different stakeholders. Our evaluations confirm that we can enhance problem-solving and decision-making support for mixed-initiative systems through tailored interfaces using the proposed interaction and explainability workflows.

Subject (DDC)
004 Computer science

Cite

ISO 690
EL-ASSADY, Mennatallah, 2023. Levels of Explainability for Human-AI Interaction in Visual Text Analytics [Dissertation]. Konstanz: Universität Konstanz
BibTeX
@phdthesis{ElAssady2023-01-23Level-76407,
  title={Levels of Explainability for Human-AI Interaction in Visual Text Analytics},
  year={2023},
  author={El-Assady, Mennatallah},
  address={Konstanz},
  school={Universität Konstanz}
}

Date of the dissertation examination

January 23, 2023
Thesis note
Konstanz, Univ., Diss., 2023
University bibliography
No