Grand Challenges in Human-Food Interaction
2024-03, Mueller, Florian ‘Floyd’, Obrist, Marianna, Bertran, Ferran Altarriba, Makam, Neharika, Kim, Soh, Dawes, Christopher, Marti, Patrizia, Reiterer, Harald, Wang, Hongyue, Wang, Yan
There is increasing interest in combining interactive technology with food, leading to a new research area called human-food interaction (HFI). While food experiences increasingly benefit from interactive technology, for example in the form of food tracking apps, 3D-printed food, and projections on dining tables, a more systematic advancement of the field is hindered because, so far, there is no comprehensive articulation of the grand challenges the field is facing. To foster and consolidate conversations around this topic, we invited 21 HFI experts to a 5-day seminar. The goal was to review our own and prior work to identify the grand challenges in human-food interaction. The result is an articulation of 10 grand challenges in human-food interaction across 4 categories (technology, users, design, and ethics). By presenting these grand challenges, we aim to help researchers move the human-food interaction research field forward.
Colibri: A Toolkit for Rapid Prototyping of Networking Across Realities
2023-10-16, Hubenschmid, Sebastian, Fink, Daniel I., Zagermann, Johannes, Wieland, Jonathan, Reiterer, Harald, Feuchtner, Tiare
We present Colibri, an open-source networking toolkit for data exchange, model synchronization, and voice transmission to support rapid development of distributed cross reality research prototypes. Development of such prototypes often involves multiple heterogeneous components, which necessitates data exchange across a network. However, existing networking solutions are often unsuitable for research prototypes, as they require significant development resources and may be lacking in terms of data privacy, logging capabilities, latency requirements, or support for heterogeneous devices. In contrast, Colibri is specifically designed for networking in interactive research prototypes: Colibri facilitates the most common tasks for establishing communication between cross reality components with little to no code necessary. We describe the usage and implementation of Colibri and report on its application in three cross reality prototypes to demonstrate the toolkit’s capabilities. Lastly, we discuss open challenges to better support the creation of cross reality prototypes.
A Survey on Measuring Cognitive Workload in Human-Computer Interaction
2023-07-13, Kosch, Thomas, Karolus, Jakob, Zagermann, Johannes, Reiterer, Harald, Schmidt, Albrecht, Woźniak, Paweł W.
The ever-increasing number of computing devices around us results in more and more systems competing for our attention, making cognitive workload a crucial factor for the user experience of human-computer interfaces. Research in Human-Computer Interaction (HCI) has used various metrics to determine users’ mental demands. However, there is no systematic way to choose an appropriate and effective measure of cognitive workload in experimental setups, which poses a challenge to reproducibility. To address this challenge, we present a literature survey of past and current metrics for cognitive workload used throughout the HCI literature. By initially exploring what cognitive workload resembles in the HCI context, we derive a categorization that supports researchers and practitioners in selecting cognitive workload metrics for system design and evaluation. We conclude with the following three research gaps: (1) defining and interpreting cognitive workload in HCI, (2) the hidden cost of the NASA-TLX, and (3) HCI research as a catalyst for workload-aware systems, highlighting that HCI research has to deepen and conceptualize the understanding of cognitive workload in the context of interactive computing systems.
MoPeDT: A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research
2023, Albrecht, Matthias, Assländer, Lorenz, Reiterer, Harald, Streuber, Stephan
Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially for head-mounted displays, has not yet been fully explored. In the past, a few specialized devices were developed to display visual cues in the periphery, each designed for a single specific use case. A multi-purpose headset that exclusively augments peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset to conduct peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks.
Aircraft Cockpit Interaction in Virtual Reality with Visual, Auditive, and Vibrotactile Feedback
2023-10-31, Auer, Stefan, Anthes, Christoph, Reiterer, Harald, Jetter, Hans-Christian
Safety-critical interactive spaces for supervision and time-critical control tasks are usually characterized by many small displays and physical controls, typically found in control rooms or automotive, railway, and aviation cockpits. Using Virtual Reality (VR) simulations instead of a physical system can significantly reduce the training costs of these interactive spaces without risking real-world accidents or occupying expensive physical simulators. However, the user's physical interactions and feedback methods must be technologically mediated. Therefore, we conducted a within-subjects study with 24 participants and compared performance, task load, and simulator sickness during training of authentic aircraft cockpit manipulation tasks. The participants were asked to perform these tasks inside a VR flight simulator (VRFS) for three feedback methods (acoustic, haptic, and acoustic+haptic) and inside a physical flight simulator (PFS) of a commercial airplane cockpit. The study revealed a partial equivalence of VRFS and PFS, control-specific differences between input elements, the irrelevance of rudimentary vibrotactile feedback, slower movements in VR, as well as a preference for the PFS.
Challenges and Opportunities for Collaborative Immersive Analytics with Hybrid User Interfaces
2023-10-16, Zagermann, Johannes, Hubenschmid, Sebastian, Fink, Daniel I., Wieland, Jonathan, Reiterer, Harald, Feuchtner, Tiare
Over the past years, we have seen an increase in the number of user studies involving mixed reality interfaces. As these environments usually exceed standardized user study settings that only measure time and error, we developed, designed, and evaluated a mixed-immersion evaluation framework called RELIVE. Its combination of in-situ and ex-situ analysis approaches allows for the holistic and malleable analysis and exploration of mixed reality user study data by an individual analyst in a step-by-step approach that we previously described as an asynchronous hybrid user interface. Yet, collaboration has been identified as a key aspect of visual and immersive analytics, potentially allowing multiple analysts to synchronously explore mixed reality user study data from different but complementary angles of evaluation using hybrid user interfaces. This leads to a variety of fundamental challenges and opportunities for the research and design of hybrid user interfaces regarding, e.g., the allocation of tasks, the interplay between views, user representations, and collaborative coupling, which are outlined in this position paper.
Relaxed forced choice improves performance of visual quality assessment methods
2023-06, Jenadeleh, Mohsen, Zagermann, Johannes, Reiterer, Harald, Reips, Ulf-Dietrich, Hamzaoui, Raouf, Saupe, Dietmar
In image quality assessment, a collective visual quality score for an image or video is obtained from the individual ratings of many subjects. One commonly used format for these experiments is the two-alternative forced choice method. Two stimuli with the same content but differing visual quality are presented sequentially or side-by-side. Subjects are asked to select the one of better quality, and when uncertain, they are required to guess. The relaxed alternative forced choice format aims to reduce the cognitive load and the noise in the responses due to guessing by providing a third response option, namely, "not sure". This work presents a large and comprehensive crowdsourcing experiment to compare these two response formats: the one with the "not sure" option and the one without it. To provide unambiguous ground truth for quality evaluation, subjects were shown pairs of images with differing numbers of dots and asked each time to choose the one with more dots. Our crowdsourcing study involved 254 participants and was conducted using a within-subject design. Each participant was asked to respond to 40 pair comparisons with and without the "not sure" response option and completed a questionnaire to evaluate their cognitive load for each testing condition. The experimental results show that the inclusion of the "not sure" response option in the forced choice method reduced mental load and led to models with better data fit and correspondence to ground truth. We also tested for the equivalence of the models and found that they were different. The dataset is available at http://database.mmsp-kn.de/cogvqa-database.html.
Opportunities and Challenges of Hybrid User Interfaces for Optimization of Mixed Reality Interfaces
2023-10-16, Zaky, Abdelrahman, Zagermann, Johannes, Reiterer, Harald, Feuchtner, Tiare
Current research highlights the importance of adaptive mixed reality interfaces, as increased adoption leads to increasingly diverse, complex, and unconstrained interaction scenarios. An interesting approach for adaptation is the optimization of interface layout and behaviour. We thereby consider three distinct types of context to which the interface adapts: the user, the activity, and the environment. The latter includes a myriad of interactive devices surrounding the user, whose capabilities we propose to take advantage of by integrating them in a hybrid user interface. Hybrid user interfaces offer many opportunities to address distinct usability issues, such as visibility, reachability, and ergonomics. However, considering additional interactive devices for optimizing mixed reality interfaces introduces a number of additional challenges, such as detecting available and suitable devices and modeling the respective interaction costs. Moreover, using different devices potentially introduces switching costs, e.g., in terms of cognitive load and time. In this paper, we aim to discuss the opportunities and challenges of using hybrid user interfaces for the optimization of mixed reality interfaces and thereby highlight directions for future work.
ARound the Smartphone: Investigating the Effects of Virtually-Extended Display Size on Spatial Memory
2023-04, Hubenschmid, Sebastian, Zagermann, Johannes, Leicht, Daniel, Reiterer, Harald, Feuchtner, Tiare
Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.