Exploring the Benefits and Barriers of Using Computational Notebooks for Collaborative Programming Assignments
2020, Borowski, Marcel, Zagermann, Johannes, Klokmose, Clemens N., Reiterer, Harald, Rädle, Roman
Programming assignments in computer science courses are often completed in pairs or groups of students. While working together, students face several shortcomings in today's software: The lack of real-time collaboration capabilities, the setup time of the development environment, and the use of different devices or operating systems can hamper students when working together on assignments. Text processing platforms like Google Docs solve these problems for the writing process of prose text, and computational notebooks like Google Colaboratory for data analysis tasks. However, none of these platforms allows users to implement interactive applications. We deployed a web-based literate programming system for three months during an introductory course on application development to explore how collaborative programming practices unfold and how the structure of computational notebooks affects development. During the course, pairs of students solved weekly programming assignments. We analyzed data from weekly questionnaires, three focus groups with students and teaching assistants, and keystroke-level log data to facilitate the understanding of the subtleties of collaborative programming with computational notebooks. Findings reveal that there are distinct collaboration patterns; the preferred collaboration pattern varied between pairs and even varied within pairs over the course of three months. Recognizing these distinct collaboration patterns can help to design future computational notebooks for collaborative programming assignments.
Remote Collaboration With Mixed Reality Displays: How Shared Virtual Landmarks Facilitate Spatial Referencing
2017, Müller, Jens, Rädle, Roman, Reiterer, Harald
HCI research has demonstrated Mixed Reality (MR) as being beneficial for co-located collaborative work. For remote collaboration, however, the collaborators' visual contexts do not coincide due to their individual physical environments. The problem becomes apparent when collaborators refer to physical landmarks in their individual environments to guide each other's attention. In an experimental study with 16 dyads, we investigated how the provisioning of shared virtual landmarks (SVLs) influences communication behavior and user experience. A quantitative analysis revealed that participants used significantly less ambiguous spatial expressions and reported an improved user experience when SVLs were provided. Based on these findings and a qualitative video analysis we provide implications for the design of MRs to facilitate remote collaboration.
Deployable Cross-Device Experiences: Proposing Additional Web Standards
2015, Schreiner, Mario, Rädle, Roman, O’Hara, Kenton, Reiterer, Harald
Cross-device interaction is rarely observed in everyday life and outside of research facilities. In this position paper we explore potential reasons for this shortcoming and discuss why the web is a promising enabling technology for cross-device interactions. We propose a concept for new, cross-device centric web standards that would allow developers to build, deploy, and use cross-device applications in everyday life.
An Experimental Comparison of Vertical and Horizontal Dynamic Peephole Navigation
2015, Müller, Jens, Rädle, Roman, Jetter, Hans-Christian, Reiterer, Harald
Dynamic peephole navigation represents a technique for navigating large information spaces in an egocentric way. Studies have shown cognitive benefits for a vertical peephole orientation, when compared to non-egocentric interaction styles. To see how the aspect of canvas orientation affects user performance, we conducted a study (N=16) which revealed that canvas orientation has no significant effect on either navigation performance or spatial memory. We also found a significantly lower physical demand and a higher mental demand in the horizontal orientation. For short-term activities we therefore propose a vertical orientation, while for long-term activities horizontal dynamic peephole navigation is more suitable.
PolarTrack: Optical Outside-In Device Tracking that Exploits Display Polarization
2018, Rädle, Roman, Jetter, Hans-Christian, Fischer, Jonathan, Gabriel, Inti, Klokmose, Clemens N., Reiterer, Harald, Holz, Christian
PolarTrack is a novel camera-based approach to detecting and tracking mobile devices inside the capture volume. In PolarTrack, a polarization filter continuously rotates in front of an off-the-shelf color camera, which causes the displays of observed devices to periodically blink in the camera feed. The periodic blinking results from the physical characteristics of current displays, which shine polarized light either through an LC overlay to produce images or through a polarizer to reduce light reflections on OLED displays. PolarTrack runs a simple detection algorithm on the camera feed to segment displays and track their locations and orientations, which makes PolarTrack particularly suitable as a tracking system for cross-device interaction with mobile devices. Our evaluation of PolarTrack's tracking quality and comparison with state-of-the-art camera-based multi-device tracking showed a better tracking accuracy and precision with similar tracking reliability. PolarTrack works as standalone multi-device tracking but is also compatible with existing camera-based tracking systems and can complement them to compensate for their limitations.
Virtual Objects as Spatial Cues in Collaborative Mixed Reality Environments: How They Shape Communication Behavior and User Task Load
2016, Müller, Jens, Rädle, Roman, Reiterer, Harald
In collaborative activities, collaborators can use physical objects in their shared environment as spatial cues to guide each other's attention. Collaborative mixed reality environments (MREs) include both physical and digital objects. To study how virtual objects influence collaboration and whether they are used as spatial cues, we conducted a controlled lab experiment with 16 dyads. Results of our study show that collaborators favored the digital objects as spatial cues over the physical environment and the physical objects: Collaborators used significantly fewer deictic gestures in favor of less ambiguous verbal references, and reported a decreased subjective workload when virtual objects were present. This suggests adding additional virtual objects as spatial cues to MREs to improve user experience during collaborative mixed reality tasks.
Reporting Experiences on Group Activities in Cross-Device Settings
2015, Zagermann, Johannes, Pfeil, Ulrike, Schreiner, Mario, Rädle, Roman, Jetter, Hans-Christian, Reiterer, Harald
Even though mobile devices are ubiquitous and users often own several of them, using them in concert to achieve a common goal is not well supported and remains a challenge for HCI. In this paper, we report on our observations of cross-device usage within groups when they engaged in a dyadic collaborative sensemaking task. Based on our findings, we discuss limitations of a state-of-the-art cross-device setting and present a set of design recommendations. We then propose an alternative design that aims for greater flexibility when using mobile devices to enable a free configuration of workspaces depending on users’ current activity.
Is Two Enough?!: Studying Benefits, Barriers, and Biases of Multi-Tablet Use for Collaborative Visualization
2017, Plank, Thomas, Jetter, Hans-Christian, Rädle, Roman, Klokmose, Clemens N., Luger, Thomas, Reiterer, Harald
A sizable part of HCI research on cross-device interaction is driven by the vision of users conducting complex knowledge work seamlessly across multiple mobile devices. This is based on the Weiserian assumption that people will be inclined to distribute their work across multiple "pads" if such are available. We observed that this is not the reality today, even when devices were in abundance. We present a study with 24 participants in 12 dyads completing a collaborative visualization task with up to six tablets. They could choose between three different visualization types to answer questions about economic data. Tasks were designed to afford simultaneous use of tablets, either with linked or independent views. We found that users typically utilized only one tablet per user. A quantitative and qualitative analysis revealed a "legacy bias" that introduced barriers for using more tablets and reduced the overall benefit of multi-device visualization.
When Tablets meet Tabletops: The Effect of Tabletop Size on Around-the-Table Collaboration with Personal Tablets
2016, Zagermann, Johannes, Pfeil, Ulrike, Rädle, Roman, Jetter, Hans-Christian, Klokmose, Clemens, Reiterer, Harald
Cross-device collaboration with tablets is an increasingly popular topic in HCI. Previous work has shown that tablet-only collaboration can be improved by an additional shared workspace on an interactive tabletop. However, large tabletops are costly and need space, raising the question to what extent the physical size of shared horizontal surfaces really pays off. In order to analyse the suitability of smaller-than-tabletop devices (e.g. tablets) as a low-cost alternative, we studied the effect of the size of a shared horizontal interactive workspace on users' attention, awareness, and efficiency during cross-device collaboration. In our study, 15 groups of two users executed a sensemaking task with two personal tablets (9.7") and a horizontal shared display of varying sizes (10.6", 27", and 55"). Our findings show that different sizes lead to differences in participants' interaction with the tabletop and in the groups' communication styles. To our own surprise we found that larger tabletops do not necessarily improve collaboration or sensemaking results, because they can divert users' attention away from their collaborators and towards the shared display.
Spatially-aware or spatially-agnostic?: Elicitation and Evaluation of User-Defined Cross-Device Interactions
2015, Rädle, Roman, Jetter, Hans-Christian, Schreiner, Mario, Lu, Zhihao, Reiterer, Harald, Rogers, Yvonne
Cross-device interaction between multiple mobile devices is a popular field of research in HCI. However, the appropriate design of this interaction is still an open question, with competing approaches such as spatially-aware vs. spatially-agnostic techniques. In this paper, we present the results of a two-phase user study that explores this design space: In phase 1, we elicited gestures for typical mobile cross-device tasks from 4 focus groups (N=17). The results show that 71% of the elicited gestures were spatially-aware and that participants strongly associated cross-device tasks with interacting and thinking in space. In phase 2, we implemented one spatially-agnostic and two spatially-aware techniques from phase 1 and compared them in a controlled experiment (N=12). The results indicate that spatially-aware techniques are preferred by users and can decrease mental demand, effort, and frustration, but only when they are designed with great care. We conclude with a summary of findings to inform the design of future cross-device interactions.