Reiterer, Harald

Last name: Reiterer
First name: Harald

Publication search results

Now showing 1 - 10 of 345
Publication

Hybrid User Interfaces: Complementary Interfaces for Mixed Reality Interaction

2023-10-20, Hubenschmid, Sebastian, Zagermann, Johannes, Dachselt, Raimund, Elmqvist, Niklas, Feiner, Steven, Feuchtner, Tiare, Lee, Benjamin, Reiterer, Harald, Schmalstieg, Dieter

Publication

ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory

2023-04, Hubenschmid, Sebastian, Zagermann, Johannes, Leicht, Daniel, Reiterer, Harald, Feuchtner, Tiare

Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.

Publication

Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones

2022-10-19, Babic, Teo, Reiterer, Harald, Haller, Michael

Over the decades, many researchers have developed complex in-lab systems with the overall goal of tracking multiple body parts of the user for richer and more powerful 2D/3D interaction with a distant display. In this work, we introduce a novel smartphone-based tracking approach that eliminates the need for complex tracking systems. Relying on the simultaneous use of the front and rear smartphone cameras, our solution enables rich spatial interactions with distant displays by combining touch input with hand-gesture input, body and head motion, as well as eye-gaze input. In this paper, we first present a taxonomy for classifying distant display interactions, providing an overview of enabling technologies, input modalities, and interaction techniques, spanning from 2D to 3D interactions. We then provide more details about our implementation using off-the-shelf smartphones. Finally, we validate our system in a user study covering a variety of 2D and 3D multimodal interaction techniques, including input refinement.

Publication

Human–Computer Integration: Towards Integrating the Human Body with the Computational Machine

2022, Mueller, Florian 'Floyd', Semertzidis, Nathan, Andres, Josh, Weigel, Martin, Nanayakkara, Suranga, Patibanda, Rakesh, Li, Zhuying, Strohmeier, Paul, Knibbe, Jarrod, Greuter, Stefan, Obrist, Marianna, Maes, Pattie, Wang, Dakuo, Wolf, Katrin, Gerber, Liz, Marshall, Joe, Kunze, Kai, Grudin, Jonathan, Reiterer, Harald, Byrne, Richard

Human-Computer Integration (HInt) is an emerging paradigm in the field of human-computer interaction (HCI). Its goal is to integrate the human body and the computational machine. This monograph presents two key dimensions of Human-Computer Integration (bodily agency and bodily ownership) and proposes a set of challenges that we believe need to be resolved in order to bring the paradigm forward. Ultimately, our work aims to facilitate a more structured investigation into the integration of the human body with the computational machine.

Publication

Relaxed forced choice improves performance of visual quality assessment methods

2023-06, Jenadeleh, Mohsen, Zagermann, Johannes, Reiterer, Harald, Reips, Ulf-Dietrich, Hamzaoui, Raouf, Saupe, Dietmar

In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side by side, and subjects are asked to select the one of better quality; when uncertain, they are required to guess. The relaxed alternative forced choice format aims to reduce the cognitive load and the response noise caused by guessing by providing a third response option, namely "not sure". This work presents a large and comprehensive crowdsourcing experiment comparing these two response formats: the one with the "not sure" option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant responded to 40 pair comparisons with and without the "not sure" response option and completed a questionnaire to evaluate their cognitive load under each testing condition. The experimental results show that including the "not sure" response option in the forced choice method reduced mental load and led to models with a better data fit and closer correspondence to ground truth. We also tested the models for equivalence and found that they differ. The dataset is available at http://database.mmsp-kn.de/cogvqa-database.html.
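
As an illustration of the two response formats compared above (not part of the original abstract), the following minimal Python sketch simulates dot-count pair comparisons under standard forced choice and under the relaxed variant with a "not sure" option. The noise model, decision threshold, and dot-count range are made-up assumptions, not values from the paper.

# Illustrative simulation of two-alternative forced choice (2AFC) versus the
# relaxed variant with a "not sure" option, using dot counts as unambiguous
# ground truth. All parameters are hypothetical, not taken from the paper.
import random

def perceived_difference(n_left, n_right, noise_sd=8.0):
    """Noisy internal estimate of the difference in dot counts."""
    return (n_left - n_right) + random.gauss(0.0, noise_sd)

def respond_forced(signal):
    """Standard 2AFC: a side must be chosen, guessing when uncertain."""
    if signal == 0.0:
        return random.choice(["left", "right"])
    return "left" if signal > 0 else "right"

def respond_relaxed(signal, threshold=4.0):
    """Relaxed forced choice: answer 'not sure' when the evidence is weak."""
    if abs(signal) < threshold:
        return "not sure"
    return "left" if signal > 0 else "right"

def run_trials(n_trials=40):
    correct_forced = correct_relaxed = not_sure = 0
    for _ in range(n_trials):
        left, right = random.randint(40, 60), random.randint(40, 60)
        if left == right:
            continue  # skip ties, where ground truth would be ambiguous
        truth = "left" if left > right else "right"
        signal = perceived_difference(left, right)
        correct_forced += respond_forced(signal) == truth
        answer = respond_relaxed(signal)
        if answer == "not sure":
            not_sure += 1
        elif answer == truth:
            correct_relaxed += 1
    return {"forced_correct": correct_forced,
            "relaxed_correct": correct_relaxed,
            "not_sure": not_sure}

print(run_trials())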

Publication

A Survey on Measuring Cognitive Workload in Human-Computer Interaction

2023-01-31, Kosch, Thomas, Karolus, Jakob, Zagermann, Johannes, Reiterer, Harald, Schmidt, Albrecht, Woźniak, Paweł W.

The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure of cognitive workload in experimental setups, which poses a challenge to their reproducibility. To address this challenge, we present a literature survey of past and current metrics for cognitive workload used throughout the HCI literature. By first exploring what cognitive workload means in the HCI context, we derive a categorization that supports researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.
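
Since the abstract singles out the NASA-TLX, here is a brief illustrative sketch (not part of the survey) of the common Raw-TLX scoring, i.e. the unweighted mean of the six subscale ratings on a 0-100 scale. The example ratings are invented.

# Illustrative Raw-TLX computation: the unweighted mean of the six NASA-TLX
# subscale ratings (here on a 0-100 scale). The example ratings are invented.
NASA_TLX_SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings):
    """Return the Raw-TLX score: the mean of the six subscale ratings."""
    missing = [s for s in NASA_TLX_SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(ratings[s] for s in NASA_TLX_SUBSCALES) / len(NASA_TLX_SUBSCALES)

# Hypothetical ratings from one participant after a task.
example = {"mental_demand": 70, "physical_demand": 20, "temporal_demand": 55,
           "performance": 30, "effort": 65, "frustration": 40}
print(raw_tlx(example))  # 46.67, the mean across the six subscales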

Publication

Adapting visualizations and interfaces to the user

2022-08-31, Chiossi, Francesco, Zagermann, Johannes, Karolus, Jakob, Rodrigues, Nils, Balestrucci, Priscilla, Weiskopf, Daniel, Ehinger, Benedikt, Feuchtner, Tiare, Reiterer, Harald, Chuang, Lewis L.

Adaptive visualizations and interfaces pervade our everyday tasks, aiming to improve interaction in terms of user performance and experience. This approach draws on a variety of user inputs, whether physiological, behavioral, qualitative, or multimodal combinations thereof, to enhance the interaction. Given the multitude of approaches, we outline current research trends in the inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of current research on adaptive systems.
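
Purely as an illustration of such an adaptation loop (not part of the original abstract), the sketch below maps a hypothetical normalized workload estimate to a level of visual detail. The thresholds and the workload signal are assumptions; real systems would derive them from physiological or behavioral data.

# Illustrative adaptation rule: map a (hypothetical) normalized workload
# estimate in [0, 1] to a level of visual detail. The thresholds are made up.
def choose_detail_level(workload):
    if workload > 0.7:
        return "low"     # declutter the visualization under high workload
    if workload > 0.4:
        return "medium"
    return "high"        # show full detail when workload is low

for estimate in (0.2, 0.5, 0.9):
    print(estimate, "->", choose_detail_level(estimate))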

Publication

MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research

2023, Albrecht, Matthias, Assländer, Lorenz, Reiterer, Harald, Streuber, Stephan

Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially for head-mounted displays, has not yet been fully explored. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case only; a multi-purpose headset dedicated exclusively to augmenting peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers can easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. A small study evaluating the usability of the system found that participants were largely not irritated by the peripheral cues, but that the headset's comfort could be further improved. We also evaluated our system against established heuristics for human-computer interaction toolkits, showing how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and achieves expressive power through the combination of modular building blocks.

Publication

ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies

2022, Hubenschmid, Sebastian, Wieland, Jonathan, Fink, Daniel, Batch, Andrea, Zagermann, Johannes, Elmqvist, Niklas, Reiterer, Harald

The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
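
The abstract does not describe ReLive's implementation; purely as a hypothetical illustration of keeping an in-situ and an ex-situ view synchronized, the following sketch shares a single playback clock between two views of a recorded study session. All class and function names are invented and are not ReLive's actual API.

# Hypothetical sketch of keeping an in-situ (VR) and an ex-situ (desktop) view
# synchronized on one shared playback clock for a recorded study session.
# Class and function names are invented and are not ReLive's actual API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class PlaybackClock:
    """Single source of truth for the current position in the recording."""
    time_s: float = 0.0
    listeners: List[Callable[[float], None]] = field(default_factory=list)

    def subscribe(self, listener):
        self.listeners.append(listener)

    def seek(self, time_s):
        self.time_s = time_s
        for listener in self.listeners:  # notify every registered view
            listener(time_s)

def vr_view(time_s):
    print(f"[in-situ VR] replaying the spatial recording at t={time_s:.1f}s")

def desktop_view(time_s):
    print(f"[ex-situ desktop] moving the timeline cursor to t={time_s:.1f}s")

clock = PlaybackClock()
clock.subscribe(vr_view)
clock.subscribe(desktop_view)
clock.seek(12.5)  # scrubbing in either view would call seek() on the shared clock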