Colibri : A Toolkit for Rapid Prototyping of Networking Across Realities
2023-10-16, Hubenschmid, Sebastian, Fink, Daniel I., Zagermann, Johannes, Wieland, Jonathan, Reiterer, Harald, Feuchtner, Tiare
We present Colibri, an open-source networking toolkit for data exchange, model synchronization, and voice transmission that supports the rapid development of distributed cross-reality research prototypes. Development of such prototypes often involves multiple heterogeneous components, which necessitates data exchange across a network. However, existing networking solutions are often unsuitable for research prototypes, as they require significant development resources and may fall short in terms of data privacy, logging capabilities, latency, or support for heterogeneous devices. In contrast, Colibri is designed specifically for networking in interactive research prototypes: it facilitates the most common tasks for establishing communication between cross-reality components with little to no code required. We describe the usage and implementation of Colibri and report on its application in three cross-reality prototypes to demonstrate the toolkit’s capabilities. Lastly, we discuss open challenges to better support the creation of cross-reality prototypes.
MoPeDT : A Modular Head-Mounted Display Toolkit to Conduct Peripheral Vision Research
2023, Albrecht, Matthias, Assländer, Lorenz, Reiterer, Harald, Streuber, Stephan
Peripheral vision plays a significant role in human perception and orientation. However, its relevance for human-computer interaction, especially head-mounted displays, has not yet been fully explored. In the past, a few specialized appliances were developed to display visual cues in the periphery, each designed for a single specific use case. A multi-purpose headset that exclusively augments peripheral vision did not yet exist. We introduce MoPeDT: Modular Peripheral Display Toolkit, a freely available, flexible, reconfigurable, and extendable headset for conducting peripheral vision research. MoPeDT can be built with a 3D printer and off-the-shelf components. It features multiple spatially configurable near-eye display modules and full 3D tracking inside and outside the lab. With our system, researchers and designers may easily develop and prototype novel peripheral vision interaction and visualization techniques. We demonstrate the versatility of our headset with several possible applications for spatial awareness, balance, interaction, feedback, and notifications. We conducted a small study to evaluate the usability of the system. We found that participants were largely not irritated by the peripheral cues, but the headset's comfort could be further improved. We also evaluated our system based on established heuristics for human-computer interaction toolkits to show how MoPeDT adapts to changing requirements, lowers the entry barrier for peripheral vision research, and facilitates expressive power in the combination of modular building blocks.
Arrow, Bézier Curve, or Halos? : Comparing 3D Out-of-View Object Visualization Techniques for Handheld Augmented Reality
2022, Wieland, Jonathan, Hegemann Garcia, Rudolf C., Reiterer, Harald, Feuchtner, Tiare
Handheld augmented reality (AR) applications allow users to interact with their virtually augmented environment on the screen of their tablet or smartphone by simply pointing its camera at nearby objects or “points of interest” (POIs). However, this often requires users to carefully scan their surroundings in search of POIs that are out of view. Proposed 2D guides for out-of-view POIs can, unfortunately, be ambiguous due to the projection of a 3D position to 2D screen space. We address this by using 3D visualizations that directly encode the POI’s 3D direction and distance. Based on related work, we implemented three such visualization techniques: (1) 3D Arrow, (2) 3D Bézier Curve, and (3) 3D Halos. We confirmed the applicability of these three techniques in a case study and then compared them in a user study, evaluating performance, workload, and user experience. Participants performed best using 3D Arrow, while surprisingly, 3D Halos led to poor results. We discuss the design implications of these results that can inform future 3D out-of-view object visualization techniques.
Challenges and Opportunities for Collaborative Immersive Analytics with Hybrid User Interfaces
2023-10-16, Zagermann, Johannes, Hubenschmid, Sebastian, Fink, Daniel I., Wieland, Jonathan, Reiterer, Harald, Feuchtner, Tiare
Over the past years, we have seen an increase in the number of user studies involving mixed reality interfaces. As these environments usually exceed standardized user study settings that only measure time and error, we developed, designed, and evaluated a mixed-immersion evaluation framework called RELIVE. Its combination of in-situ and ex-situ analysis approaches allows for the holistic and malleable analysis and exploration of mixed reality user study data by an individual analyst in a step-by-step approach that we previously described as an asynchronous hybrid user interface. Yet, collaboration has been identified as a key aspect of visual and immersive analytics – potentially allowing multiple analysts to synchronously explore mixed reality user study data from different but complementary angles of evaluation using hybrid user interfaces. This leads to a variety of fundamental challenges and opportunities for the research and design of hybrid user interfaces regarding, e.g., the allocation of tasks, the interplay between views, user representations, and collaborative coupling, which we outline in this position paper.
Re-locations : Augmenting Personal and Shared Workspaces to Support Remote Collaboration in Incongruent Spaces
2022, Fink, Daniel I., Zagermann, Johannes, Reiterer, Harald, Jetter, Hans-Christian
Augmented reality (AR) can create the illusion of being virtually co-located during remote collaboration, e.g., by visualizing remote co-workers as avatars. However, spatial awareness of each other’s activities is limited, as physical spaces, including the position of physical devices, are often incongruent. Therefore, alignment methods are needed to support activities on physical devices. In this paper, we present Re-locations, a method for enabling remote collaboration with augmented reality in incongruent spaces. The core idea is to enrich remote collaboration activities on multiple physical devices with attributes of co-located collaboration, such as spatial awareness and spatial referencing, by locally relocating remote user representations to user-defined workspaces. We evaluated the Re-locations concept in an explorative user study with dyads using an authentic, collaborative task. Our findings indicate that Re-locations introduce attributes of co-located collaboration like spatial awareness and social presence. Based on our findings, we provide implications for future research and design of remote collaboration systems using AR.
Separation, Composition, or Hybrid? : Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality
2021, Wieland, Jonathan, Zagermann, Johannes, Müller, Jens, Reiterer, Harald
Augmented Reality (AR) supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: either separate or compose users’ inputs. However, their experimental comparison using handheld AR displays is still missing. We, therefore, conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i.e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.
ARound the Smartphone : Investigating the Effects of Virtually-Extended Display Size on Spatial Memory
2023-04, Hubenschmid, Sebastian, Zagermann, Johannes, Leicht, Daniel, Reiterer, Harald, Feuchtner, Tiare
Smartphones conveniently place large information spaces in the palms of our hands. While research has shown that larger screens positively affect spatial memory, workload, and user experience, smartphones remain fairly compact for the sake of device ergonomics and portability. Thus, we investigate the use of hybrid user interfaces to virtually increase the available display size by complementing the smartphone with an augmented reality head-worn display. We thereby combine the benefits of familiar touch interaction with the near-infinite visual display space afforded by augmented reality. To better understand the potential of virtually-extended displays and the possible issues of splitting the user’s visual attention between two screens (real and virtual), we conducted a within-subjects experiment with 24 participants completing navigation tasks using different virtually-augmented display sizes. Our findings reveal that a desktop monitor size represents a “sweet spot” for extending smartphones with augmented reality, informing the design of hybrid user interfaces.
ReLive : Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies
2022, Hubenschmid, Sebastian, Wieland, Jonathan, Fink, Daniel I., Batch, Andrea, Zagermann, Johannes, Elmqvist, Niklas, Reiterer, Harald
The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study.
KiTT - The Kinaesthetics Transfer Teacher : Design and Evaluation of a Tablet-based System to Promote the Learning of Ergonomic Patient Transfers
2021, Dürr, Maximilian, Borowski, Marcel, Gröschel, Carla, Pfeil, Ulrike, Müller, Jens, Reiterer, Harald
Nurses frequently transfer patients as part of their daily work. However, manual patient transfers pose a major risk to nurses’ health. Although the Kinaesthetics care conception can help address this issue, existing support for learning the concept is limited. We present KiTT, a tablet-based system, to promote the learning of ergonomic patient transfers based on the Kinaesthetics care conception. KiTT supports the training of Kinaesthetics-based patient transfers by two nurses. The nurses are guided through the phases of (i) interactive instructions, (ii) training of transfer conduct, and (iii) feedback and reflection. We evaluated KiTT with 26 nursing-care students in a nursing-care school. Our results indicate that KiTT provides good subjective support for the learning of Kinaesthetics. Our results also suggest that KiTT can promote the ergonomically correct conduct of patient transfers while providing a good user experience adequate to the nursing-school context, and they reveal how KiTT can extend existing practices.