Publication:

Learning Contextualized User Preferences for Co‐Adaptive Guidance in Mixed‐Initiative Topic Model Refinement


Files

Sperrle_2-1puacsni9ezc06.pdf (Size: 1.78 MB, Downloads: 183)

Date

2021

DOI (citable link)
https://doi.org/10.1111/cgf.14301

Link to license
Attribution-NonCommercial-NoDerivatives 4.0 International (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Open Access publication
Open Access Hybrid
Publication type
Journal article
Publication status
Published

Published in

Computer Graphics Forum. Wiley. 2021, 40(3), pp. 215-226. ISSN 0167-7055. eISSN 1467-8659. Available under: doi: 10.1111/cgf.14301

Abstract

Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user's acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.
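
The abstract describes the agent mechanism only at a high level. As a minimal, purely illustrative sketch (not the authors' implementation; the names GuidanceAgent, Rule, suggest, and update are hypothetical), a competing guidance agent could maintain weighted rules over a context vector of quality-metric changes and adapt those weights from accepted or rejected suggestions:

# Minimal illustrative sketch only -- hypothetical names, not the paper's implementation.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Rule:
    metric: str          # quality metric the rule watches, e.g. "coherence"
    threshold: float     # fire when the metric's change falls below this value
    weight: float = 1.0  # confidence in the rule, adapted from user feedback


@dataclass
class GuidanceAgent:
    operation: str                      # refinement operation this agent suggests, e.g. "split_topic"
    rules: List[Rule] = field(default_factory=list)
    learning_rate: float = 0.1

    def suggest(self, context: Dict[str, float]) -> bool:
        """Offer guidance if any sufficiently weighted rule fires on the context vector
        of quality-metric deltas between two analysis states."""
        return any(
            rule.weight > 0.5 and context.get(rule.metric, 0.0) < rule.threshold
            for rule in self.rules
        )

    def update(self, accepted: bool, context: Dict[str, float]) -> None:
        """Strengthen or weaken the rules that fired, based on implicit or explicit feedback."""
        for rule in self.rules:
            if context.get(rule.metric, 0.0) < rule.threshold:
                step = self.learning_rate if accepted else -self.learning_rate
                rule.weight = min(1.0, max(0.0, rule.weight + step))


# Example: an agent that reacts to declining topic coherence by suggesting a topic split.
split_agent = GuidanceAgent("split_topic", rules=[Rule(metric="coherence", threshold=-0.05)])
context = {"coherence": -0.12, "separation": 0.03}
if split_agent.suggest(context):
    # ...present the suggestion; record whether the user accepts or rejects it...
    split_agent.update(accepted=True, context=context)

In this sketch, the "split topic" agent fires when coherence declines by more than the rule's threshold; repeated rejections drive the rule's weight below the firing cutoff, so the agent stops offering that suggestion in comparable contexts.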

Subject (DDC)
004 Computer Science

Cite

ISO 690
SPERRLE, Fabian, Hanna SCHÄFER, Daniel A. KEIM, Mennatallah EL-ASSADY, 2021. Learning Contextualized User Preferences for Co‐Adaptive Guidance in Mixed‐Initiative Topic Model Refinement. In: Computer Graphics Forum. Wiley. 2021, 40(3), pp. 215-226. ISSN 0167-7055. eISSN 1467-8659. Available under: doi: 10.1111/cgf.14301
BibTeX
@article{Sperrle2021Learn-54161,
  year={2021},
  doi={10.1111/cgf.14301},
  title={Learning Contextualized User Preferences for Co‐Adaptive Guidance in Mixed‐Initiative Topic Model Refinement},
  number={3},
  volume={40},
  issn={0167-7055},
  journal={Computer Graphics Forum},
  pages={215--226},
  author={Sperrle, Fabian and Schäfer, Hanna and Keim, Daniel A. and El-Assady, Mennatallah}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/54161">
    <dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/54161"/>
    <dc:contributor>Schäfer, Hanna</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T14:02:53Z</dcterms:available>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-nc-nd/4.0/"/>
    <dc:contributor>Keim, Daniel A.</dc:contributor>
    <dc:contributor>El-Assady, Mennatallah</dc:contributor>
    <dc:creator>El-Assady, Mennatallah</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:title>Learning Contextualized User Preferences for Co‐Adaptive Guidance in Mixed‐Initiative Topic Model Refinement</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-06-30T14:02:53Z</dc:date>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54161/1/Sperrle_2-1puacsni9ezc06.pdf"/>
    <dc:creator>Keim, Daniel A.</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/54161/1/Sperrle_2-1puacsni9ezc06.pdf"/>
    <dcterms:issued>2021</dcterms:issued>
    <dc:creator>Sperrle, Fabian</dc:creator>
    <dc:contributor>Sperrle, Fabian</dc:contributor>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:language>eng</dc:language>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Schäfer, Hanna</dc:creator>
    <dcterms:abstract xml:lang="eng">Mixed-initiative visual analytics systems support collaborative human-machine decision-making processes. However, many multi-objective optimization tasks, such as topic model refinement, are highly subjective and context-dependent. Hence, systems need to adapt their optimization suggestions throughout the interactive refinement process to provide efficient guidance. To tackle this challenge, we present a technique for learning context-dependent user preferences and demonstrate its applicability to topic model refinement. We deploy agents with distinct associated optimization strategies that compete for the user's acceptance of their suggestions. To decide when to provide guidance, each agent maintains an intelligible, rule-based classifier over context vectorizations that captures the development of quality metrics between distinct analysis states. By observing implicit and explicit user feedback, agents learn in which contexts to provide their specific guidance operation. An agent in topic model refinement might, for example, learn to react to declining model coherence by suggesting to split a topic. Our results confirm that the rules learned by agents capture contextual user preferences. Further, we show that the learned rules are transferable between similar datasets, avoiding common cold-start problems and enabling a continuous refinement of agents across corpora.</dcterms:abstract>
  </rdf:Description>
</rdf:RDF>


University bibliography
Yes
Peer-reviewed
Yes