Metz, Yannick
Publication Search Results
VISITOR: Visual Interactive State Sequence Exploration for Reinforcement Learning
2023-06, Metz, Yannick, Bykovets, Eugene, Joos, Lucas, Keim, Daniel A., El-Assady, Mennatallah
Understanding the behavior of deep reinforcement learning agents is a crucial requirement throughout their development. Existing work has addressed the identification of observable behavioral patterns in state sequences or the analysis of isolated internal representations; however, the overall decision-making of deep RL agents remains opaque. To tackle this, we present VISITOR, a visual analytics system enabling the analysis of entire state sequences, the diagnosis of singular predictions, and the comparison between agents. A sequence embedding view enables the multiscale analysis of state sequences, utilizing custom embedding techniques for a stable spatialization of the observations and internal states. We provide multiple layers: (1) a state space embedding, highlighting different groups of states inside the state-action sequences, (2) a trajectory view, emphasizing decision points, (3) a network activation mapping, visualizing the relationship between observations and network activations, (4) a transition embedding, enabling the analysis of state-to-state transitions. The embedding view is accompanied by an interactive reward view that captures the temporal development of metrics, which can be linked directly to states in the embedding. Lastly, a model list allows for the quick comparison of models across multiple metrics. Annotations can be exported to communicate results to different audiences. Our two-stage evaluation with eight experts confirms the system's effectiveness in identifying states of interest, comparing the quality of policies, and reasoning about the internal decision-making processes.
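To make the spatialization step concrete, here is a minimal Python sketch; it is not VISITOR's custom stable embedding, but a stand-in that collects states from a Gymnasium rollout under a random policy and projects them to 2D with scikit-learn's PCA:

# Minimal sketch of a state-space spatialization for RL trajectories.
# This is NOT VISITOR's custom stable embedding: plain PCA serves as a
# stand-in projection, and a random policy stands in for a trained agent.
import gymnasium as gym
import numpy as np
from sklearn.decomposition import PCA

env = gym.make("CartPole-v1")
states = []
obs, _ = env.reset(seed=0)
for _ in range(500):
    action = env.action_space.sample()  # stand-in for a trained policy
    states.append(obs)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()

# Project the 4-dimensional CartPole states to 2D for display.
xy = PCA(n_components=2).fit_transform(np.array(states))
print(xy.shape)  # (500, 2): one 2D point per visited state

Each resulting 2D point corresponds to one visited state, which is the raw material a state-space embedding view would display.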
Semantic Color Mapping: A Pipeline for Assigning Meaningful Colors to Text
2022-10, El-Assady, Mennatallah, Kehlbeck, Rebecca, Metz, Yannick, Schlegel, Udo, Sevastjanova, Rita, Sperrle, Fabian, Spinner, Thilo
Current visual text analytics applications do not treat color assignment as a prominent design consideration. We argue that there is a need to apply meaningful colors to text, enhancing comprehension and comparability. Hence, in this paper, we present a guideline to facilitate the choice of colors in text visualizations. The semantic color mapping pipeline is derived from the literature and from experience in text visualization design, and summarizes design considerations, lessons learned, and best practices. The proposed pipeline starts by extracting labeled data from raw text, chooses an aggregation level to create an appropriate vector representation, then defines the unit of analysis to project the data into a low-dimensional space, and finally assigns colors based on the selected color space. We argue that applying such a pipeline enhances the understanding of attribute relations in text visualizations, as confirmed by two applications.
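The four pipeline stages can be illustrated with a short Python sketch; the concrete choices here (TF-IDF as the vector representation, PCA as the projection, an angle-to-hue mapping in HSV as the color assignment) are assumptions for illustration, since the pipeline leaves each stage configurable:

# Illustrative walk through the four pipeline stages with assumed,
# stand-in choices: TF-IDF as the vector representation, PCA as the
# projection, and an angle-to-hue mapping in HSV as the color assignment.
import colorsys
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

docs = {"economy": "market trade growth finance",
        "health": "medicine hospital doctor care",
        "sports": "football match team goal"}

# Stages 1-2: labeled raw text -> vector representation
vectors = TfidfVectorizer().fit_transform(docs.values()).toarray()
# Stage 3: project the units of analysis into a low-dimensional space
xy = PCA(n_components=2).fit_transform(vectors)
# Stage 4: assign colors based on position in the chosen color space
for label, (x, y) in zip(docs, xy):
    hue = (np.arctan2(y, x) / (2 * np.pi)) % 1.0
    r, g, b = colorsys.hsv_to_rgb(hue, 0.7, 0.9)
    print(f"{label}: #{int(r * 255):02x}{int(g * 255):02x}{int(b * 255):02x}")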
A Comprehensive Workflow for Effective Imitation and Reinforcement Learning with Visual Analytics
2022, Metz, Yannick, Schlegel, Udo, Seebacher, Daniel, El-Assady, Mennatallah, Keim, Daniel A.
Despite recent successes, multiple challenges still hinder the application of reinforcement learning algorithms in experimental and real-world use cases. These challenges arise at different stages of model development and deployment. While reinforcement learning workflows share similarities with other machine learning approaches, we argue that their distinct challenges can be tackled and overcome using visual analytics concepts. We therefore propose a comprehensive workflow for reinforcement learning and present an implementation of this workflow that incorporates visual analytics concepts, integrating tailored views and visualizations for the different stages and tasks of the workflow.
RLHF-Blender: A Configurable Interactive Interface for Learning from Diverse Human Feedback
2023, Metz, Yannick, Lindner, David, Baur, Raphaël, Keim, Daniel A., El-Assady, Mennatallah
To use reinforcement learning from human feedback (RLHF) in practical applications, it is crucial to learn reward models from diverse sources of human feedback, and to consider the human factors involved in providing feedback of different types. However, the systematic study of learning from diverse types of feedback is held back by the limited standardized tooling available to researchers. To bridge this gap, we propose RLHF-Blender, a configurable, interactive interface for learning from human feedback. RLHF-Blender provides a modular experimentation framework and implementation that enables researchers to systematically investigate the properties and qualities of human feedback for reward learning. The system facilitates the exploration of various feedback types, including demonstrations, rankings, comparisons, and natural language instructions, as well as studies considering the impact of human factors on their effectiveness. We discuss a set of concrete research opportunities enabled by RLHF-Blender. More information is available at our website.
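As a hedged illustration of what learning from diverse feedback types involves, the following Python sketch defines one container per feedback type named in the abstract; the class names and fields are assumptions and do not mirror RLHF-Blender's actual API:

# Illustrative containers for the feedback types named in the abstract.
# Class names and fields are assumptions and do not mirror RLHF-Blender's
# actual API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Demonstration:
    trajectory: List[Tuple[object, int]]  # (observation, action) pairs

@dataclass
class Comparison:
    segment_a: List[object]  # first trajectory segment
    segment_b: List[object]  # second trajectory segment
    preferred: str           # "a", "b", or "equal"

@dataclass
class Ranking:
    segments: List[List[object]]  # segments ordered from best to worst

@dataclass
class Instruction:
    text: str  # natural-language feedback

# A heterogeneous feedback log mixing several feedback types.
feedback_log = [
    Comparison(segment_a=["s0", "s1"], segment_b=["s2", "s3"], preferred="a"),
    Instruction(text="slow down before the turn"),
]

A reward-learning experiment would then consume such a heterogeneous log, for example by converting comparisons into training pairs for a preference model.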
PRIMAGE - An Artificial Intelligence-based Clinical Decision Support System for Optimized Cancer Diagnosis and Risk Assessment: A Progress Update
2022, Nieto, Adela Cañete, Ladenstein, Ruth, Hero, Barbara, Taschner-Mandl, Sabine, Pötschger, Ulrike, Düster, Vanessa, Martinez De Las Heras, Blanca, Fischer, Maximilian T., Metz, Yannick, Keim, Daniel A.
Task-based Visual Interactive Modeling: Decision Trees and Rule-based Classifiers
2021-01-13, Streeb, Dirk, Metz, Yannick, Schlegel, Udo, Schneider, Bruno, El-Assady, Mennatallah, Neth, Hansjörg, Chen, Min, Keim, Daniel A.
Visual analytics enables the coupling of machine learning models and humans in a tightly integrated workflow, addressing various analysis tasks. Each task poses distinct demands on analysts and decision-makers. In this survey, we focus on one canonical technique for rule-based classification, namely decision tree classifiers. We provide an overview of available visualizations for decision trees with a focus on how visualizations differ with respect to 16 tasks. Further, we investigate the types of visual designs employed and the quality measures presented. We find that (i) interactive visual analytics systems for classifier development offer a variety of visual designs, (ii) utilization tasks are sparsely covered, (iii) beyond classifier development, node-link diagrams are omnipresent, (iv) even systems designed for machine learning experts rarely feature visual representations of quality measures other than accuracy. In conclusion, we see potential for integrating algorithmic techniques, mathematical quality measures, and tailored interactive visualizations to enable human experts to utilize their knowledge more effectively.
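As a minimal example of the node-link representation the survey finds omnipresent, scikit-learn can render a trained decision tree directly as a node-link diagram:

# A trained decision tree rendered as the node-link diagram that the
# survey finds omnipresent, using scikit-learn's built-in plot_tree.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

plot_tree(clf, filled=True,
          feature_names=iris.feature_names,
          class_names=list(iris.target_names))
plt.savefig("tree.png", dpi=150)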
A text and image analysis workflow using citizen science data to extract relevant social media records: Combining red kite observations from Flickr, eBird and iNaturalist
2022-11, Hartmann, Maximilian C., Schott, Moritz, Dsouza, Alishiba, Metz, Yannick, Volpi, Michele, Purves, Ross S.
There is an urgent need to develop new methods to monitor the state of the environment. One potential approach is to use new data sources, such as User-Generated Content, to augment existing approaches. However, to date, studies typically focus on a single data source and modality. We take a new approach, using citizen science records of sightings of red kites (Milvus milvus) to train and validate a Convolutional Neural Network (CNN) capable of identifying images containing red kites. This CNN is integrated into a sequential workflow which also uses an off-the-shelf bird classifier and text metadata to retrieve observations of red kites in the Chilterns, England. Our workflow reduces an initial set of more than 600,000 images to just 3065 candidate images. Manual inspection of these images shows that our approach has a precision of 0.658. A workflow using only text identifies 14% fewer images than one that includes image content analysis, and by combining image and text classifiers we achieve a near-perfect precision of 0.992. Images retrieved from social media records complement those recorded by citizen scientists spatially and temporally, and our workflow is sufficiently generic that it can easily be transferred to other species.
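The sequential text-then-image filtering can be sketched as follows; the ResNet used here is a generic ImageNet stand-in (ImageNet class index 21 happens to be "kite"), not the red-kite CNN trained on citizen-science data in the paper, and the record fields are assumptions:

# Sketch of the sequential filter: cheap text-metadata matching first,
# then an image classifier on the survivors. The ResNet is a generic
# ImageNet stand-in (class index 21 is "kite"), not the red-kite CNN
# trained on citizen-science data; the record fields are assumptions.
import torch
from torchvision import models
from PIL import Image

KEYWORDS = {"red kite", "milvus", "bird of prey"}

def text_filter(record: dict) -> bool:
    text = f"{record.get('title', '')} {record.get('tags', '')}".lower()
    return any(k in text for k in KEYWORDS)

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def image_filter(path: str, threshold: float = 0.5) -> bool:
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)
    return probs[0, 21].item() > threshold  # ImageNet class 21: "kite"

records = []  # fill with {"title": ..., "tags": ..., "path": ...} dicts
candidates = [r for r in records if text_filter(r)]           # stage 1: text
matches = [r for r in candidates if image_filter(r["path"])]  # stage 2: image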
BARReL: Bottleneck Attention for Adversarial Robustness in Vision-Based Reinforcement Learning
2022, Bykovets, Eugene, Metz, Yannick, El-Assady, Mennatallah, Keim, Daniel A., Buhmann, Joachim M.
Robustness to adversarial perturbations has been explored in many areas of computer vision. This robustness is particularly relevant in vision-based reinforcement learning, as the actions of autonomous agents might be safety-critical or have real-world impact. We investigate the susceptibility of vision-based reinforcement learning agents to gradient-based adversarial attacks and evaluate a potential defense. We observe that Bottleneck Attention Modules (BAM) included in CNN architectures can act as potential tools to increase robustness against adversarial attacks. We show how learned attention maps can be used to recover activations of a convolutional layer by restricting the spatial activations to salient regions. Across a number of RL environments, BAM-enhanced architectures show increased robustness during inference. Finally, we discuss potential future research directions.
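The following PyTorch sketch shows a simplified spatial-attention gate in the spirit of BAM; the original module by Park et al. combines a channel and a spatial branch, whereas only a minimal spatial branch is shown here:

# Simplified spatial-attention gate in the spirit of BAM. The original
# module by Park et al. combines a channel and a spatial branch; only a
# minimal spatial branch is sketched here. The sigmoid mask restricts
# convolutional activations to salient spatial regions.
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.Conv2d(mid, mid, kernel_size=3, padding=4, dilation=4),
            nn.Conv2d(mid, 1, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.body(x))  # (N, 1, H, W) saliency map
        return x * mask                     # gate activations spatially

feats = torch.randn(1, 64, 21, 21)  # e.g. activations of an Atari-style CNN
print(SpatialGate(64)(feats).shape)  # torch.Size([1, 64, 21, 21])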
Interactive Webtool for Tempospatial Data and Visual Audio Analysis
2018, Bäumle, Benedikt, Boesecke, Ina, Buchmüller, Raphael, Metz, Yannick, Buchmüller, Juri F., Cakmak, Eren, Jentner, Wolfgang, Keim, Daniel A.
To solve VAST Mini Challenge 1, we built an interactive visualization tool that allows hypothesis testing and exploratory analysis of the data. The tool contains different visualizations for metadata and audio data analysis. To analyze the recorded bird calls, we trained a gradient boosting classifier to distinguish different bird species. Our tool integrates these results and visualizes them in combination with additional data, allowing users to obtain contextual information and confirm the results.
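A hedged sketch of the classification step: MFCC features per recording fed into scikit-learn's gradient boosting classifier, with synthetic sine tones standing in for the challenge's actual bird-call recordings:

# Sketch of the bird-call classification step: MFCC features per
# recording fed into a gradient boosting classifier. Synthetic sine
# tones stand in for the challenge's actual recordings.
import numpy as np
import librosa
from sklearn.ensemble import GradientBoostingClassifier

sr = 22050
rng = np.random.default_rng(0)

def fake_call(freq: float) -> np.ndarray:
    t = np.linspace(0, 1.0, sr, endpoint=False)
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(sr)

def features(y: np.ndarray) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)  # one 13-dimensional vector per recording

# Two toy "species" at different characteristic frequencies.
X = np.array([features(fake_call(f)) for f in [800] * 20 + [2000] * 20])
labels = np.array([0] * 20 + [1] * 20)

clf = GradientBoostingClassifier().fit(X, labels)
print(clf.predict([features(fake_call(800))]))  # expected: [0]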