Immersive Analytics with Abstract 3D Visualizations: A Survey

After a long period of scepticism, more and more publications describe not only basic research but also practical approaches to how abstract data can be presented in immersive environments for effective and efficient data understanding. Central aspects of this important research question in immersive analytics research concern the use of 3D for visualization, the embedding in the immersive space, the combination with spatial data, suitable interaction paradigms and the evaluation of use cases. We provide a characterization that facilitates the comparison and categorization of published works and present a survey of publications that gives an overview of the state of the art, current trends, and gaps and challenges in current research.


Introduction
Immersive analytics (IA) is a field of research concerned with the design and application of engaging analysis tools to support data understanding and decision making [DMI*18]. It combines efforts from scientific visualization (SciVis), information visualization (InfoVis), visual analytics (VA), human-computer interaction (HCI) and related fields to examine which immersive technologies can be used, and how, to improve data analysis and communication [CCC*15, DMI*18, SPO*19]. Thus, it extends the scope of VA [TC06], for example, by employing technologies along the virtuality continuum. As defined by Milgram and Kishino [MK94], the continuum extends from the real environment via augmented reality (AR) to augmented virtuality (AV) and to virtual reality (VR). Following this definition, unlike VA, IA examines the impact of the technology used to remove the barriers between the analyst and the data for the exploration, interpretation and understanding of data-driven problems [LBDM19]. In addition, it is not limited to visual representations of data but can use various stimuli, including sound and haptics.
In SciVis-related areas, the use of virtual environments for the visualization of spatial data has been common for decades: the first CAVEs in the early 90s were already often dedicated to SciVis [Bry96, CNSD93]. IA employs many methods of SciVis and is used in various application areas such as archaeology [KF12, SKD*13], geosciences [HOJ*07] and the life sciences [CHK*18, SBH*14]. In industry, too, CAVEs and other AR/VR environments are steadily gaining in popularity in various sectors, such as the healthcare, aviation or automotive industries [vir21]. However, many challenges remain for IA to be effectively deployed across a range of application areas [EBC*21]. Especially applications with abstract data representations in 3D are considered problematic and are often criticized in the visualization community [Mun14]. Thus, further fundamental research is required to investigate the potential of immersive visualizations for data exploration and analysis [KKF*21].

Can novel technologies and methodologies help address previous criticism of abstract data visualization in 3D? How can visualizations such as network layouts, scatterplots and parallel coordinates in immersive environments be designed to improve upon classic desktop setups? Are there convincing examples of such approaches?
In this survey, we evaluated the publications of 83 proceedings of eight conferences related to IA between 1990 and 2020 to find answers to these and related questions.
We placed special emphasis on visualization and excluded research that focuses on other, non-visual stimuli such as sonification [YBL04, MBMW20] or olfaction [PBE19, BNL20]. Our focus is on visualizations of abstract data, that is, data without a natural physical or spatial representation, in immersive 3D environments, and we reviewed scientific literature that uses stereoscopic 3D for the inspection and analysis of visualizations. Hence, 3D visualizations that are exclusively inspected on 2D screens are excluded from our survey. Even with these restrictions, we could identify a significant increase in the number of publications over the last few years, indicating the need for and the interest in this research area.
Besides providing an overview of the field, our analysis revealed several interesting findings. While much literature on abstract visualizations and immersive environments exists separately, we found relatively few papers (58) that combine both fields by deploying immersive environments for abstract 3D visualizations. A number of interesting trends emerged, also with regard to the technology typically used. While our analysis found mainly CAVEs in the 1990s and early 2000s, HMD-related technologies have dominated the publications since 2017, with a focus on VR, although AR is also an interesting area for IA research. The reason for this could be the still limited technical sophistication of AR (e.g. small field of view, limited interaction possibilities). Based on our analysis, we discuss the potential benefits of stereoscopic 3D visualizations, opportunities and challenges with regard to navigation and interaction in immersive environments, and the potential of immersive environments for collaborative analysis procedures in the context of abstract data analysis.
The structure of this paper is depicted in Figure 1. In the following section, we will first provide an overview of related surveys before describing our methodology and classification scheme in Section 3. The subdivision of this section provides the structure for all of the following parts. We then turn to the core part in Section 4, in which we work through all analysed dimensions. Each subsection is structured similarly. First, an overview of the distribution of all analysed papers with regard to the respective dimension is provided. Subsequently, findings for each class of the dimension are presented while similarities and differences between approaches are highlighted. After presenting our results, high-level implications for IA are discussed in Section 5. In the final discussion (Section 6), we reflect on our findings and discuss various facets of IA with regard to abstract 3D visualization.

Related work
In this section, we give an overview of related surveys that structure IA techniques according to various criteria and do not focus on individual application areas. The aim of this section is to provide the reader with a meta-analysis on IA and to illustrate the relevance of a systematic literature review on this topic.
Brooks was one of the first researchers to discuss positive and negative aspects of VR based on various applications [Bro99]. In his early literature review, he came to the conclusion that despite the high cost, low resolution and limited range of trackers, VR really works for specific domains such as flight simulators, automotive engineering or astronaut training. However, some key features, like interacting with the virtual worlds or better modelling of the real world, remain challenging. One year later, van Dam et al. [VFL*00] highlighted VR applications for SciVis. Examples that benefit from the integration of VR include archaeology, for a better perception of ancient structures, and the medical field, for a better understanding of the 3D geometry of blood vessels. Both works focus on the integration of VR to replicate real-world scenarios or to display SciVis. In our survey, we focus on abstract data visualizations for IA.
Laha and Bowman reviewed VR techniques for visualizing volume data [LB12]. In their literature review, they concluded that more controlled experiments are needed to explore the benefits of individual components of immersion. As a starting point, they proposed a task taxonomy that can later be used in user studies to generalize the results. Our survey is not limited to volumetric data sets but also includes relational and multidimensional data. Reda et al. [RFK*13] specifically focused on summarizing research for hybrid reality environments like the CAVE2. Besides the advantage of a high-resolution screen in combination with optional stereoscopic depth, the authors emphasize the possibility of collaborative data analysis. In our survey, we do not limit ourselves to hybrid reality environments but cover all technologies that make use of stereoscopic depth (e.g. CAVE, CAVE2, volumetric displays, HMDs).
Brath [Bra15] collected evidence in the form of application examples that 3D visualizations offer advantages beyond 2D. Although the author does not focus on stereoscopic 3D, he mentions the benefits of an immersive interface. In contrast to his work, we explicitly restrict ourselves to immersive displays and not 2D screens.
The literature review by McIntire and Liggett [ML15] is closely related to the current one. The authors discussed the possible utility of stereoscopic 3D displays for InfoVis. They focused on abstract data visualization and presented experiments in favour of and against stereoscopic 3D. We build on this work and additionally include application and evaluation papers from different domains.
García-Hernández et al. [GHAWK16] focused on using VR environments for visual data mining. Similar to our work, research on the representation of abstract visualizations like 3D scatterplots or 3D parallel coordinates in VR environments is investigated. As opposed to their work, we do not limit ourselves to VR environments but also collect research in the field of AR.
Sommer et al. [SBB*17] presented current research projects developed in collaboration between Monash University and the University of Konstanz. They concluded that stereoscopic 3D is advantageous in various application domains. While the authors presented seven research projects that make use of stereoscopic 3D, we do not restrict ourselves to specific projects but give a more comprehensive overview of research in the field of IA.
Recently, Fonnet and Prié [FP19] surveyed 177 publications in the domain of IA and provided an excellent overview of the different rendering technologies, data, sensory mappings and interaction means that have been used to build IA systems. The biggest difference from the current work is that we restrict ourselves to abstract data visualizations and analyse the corresponding papers in more detail by focusing on visualization types, analysis tasks and the discussion of the applicability of abstract data visualization in immersive environments.
In summary, we are not aware of any previous work in which IA of abstract data for InfoVis was systematically reviewed.

Methodology
In our systematic literature review, we focused on the selection of papers according to two different characteristics: immersive environments and abstract data visualizations. In order to be included in the survey, papers must use hardware that enables an immersive experience and employ techniques for visualizing abstract data. We consider all papers that meet both criteria but evaluate them in terms of their relation to IA. Although we made every effort to be accurate in our characterization of papers, there could, of course, be papers that are somehow related but still not included in our collection process (see Section 3.3). Given the current interest in IA research and presuming that this trend will continue, we can expect that further relevant papers will be published in the future. We, therefore, provide a customized online interface that allows extending our current survey by adding new publications on this topic and also provides interactive access to our collection: https://iasurvey.dbvis.de.

Immersive environments: sampling characteristics
A prerequisite is that relevant papers are located in Milgram et al.'s virtuality continuum [MTUK95], which encompasses the entire range between the two extremes of the real and the virtual environment. The area between these two poles is called mixed reality, including AR (where the virtual augments the real) and AV (where the real augments the virtual). Therefore, the papers discussed in this work must be located in the domain of immersive environments, leading to a mixed reality experience. This means that abstract 3D visualizations must be presented in a mixed or VR environment in which the hardware and user interact closely: The hardware monitors human behaviour and reacts by stimulating human perception [LO94]. Thus, a paper that uses abstract 3D visualizations presented on a 3D projector is excluded by this restriction since no immersive environment in the classical sense is created, and there is no interaction between observer and system. This means that head and body movements of the user have no influence on the perceived visualization, and the visualization is not fixed to a certain location in the visually perceived (real/virtual) environment of the user. Similarly, we excluded AR approaches in which AR is purely created by handheld devices, as head movements do not have any impact on the image perceived by the user; that is, a virtual object depicted on the screen of a handheld device does not change perspective when the user looks at the display from different angles. This strict criterion also leads to the exclusion of approaches that create immersion with powerwalls.
In order to search for all candidate papers, we created a keyword list with terms related to immersive environments, such as '3D', 'VR', 'AR' and 'immersion' (see the complete list in Appendix II). These terms were compiled from the experience of the authors and from the typical jargon of previously known prominent literature in this domain, like the work of Slater and Wilbur [SW97]. All keywords were preprocessed using standard natural language processing algorithms such as stemming to increase the chance of a positive match. As a result, we collected a first keyword list with 38 keywords covering the concept of immersive environments.
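As an illustration, the keyword normalization can be sketched as follows; the `normalize` helper and its crude suffix-stripping rules are our own simplified stand-in for a proper stemming algorithm (e.g. Porter's), not the implementation actually used for the survey.

```python
import re

# Crude illustrative stemmer: lower-case, strip punctuation, and remove a
# few common English suffixes so that morphological variants of a keyword
# (e.g. 'immersion' vs. 'immersive') reduce to the same stem.
SUFFIXES = ("ization", "isation", "ation", "ings", "ing", "ions", "ion",
            "ive", "ed", "es", "s")

def normalize(term: str) -> str:
    term = re.sub(r"[^a-z0-9 ]", "", term.lower())
    stems = []
    for word in term.split():
        for suffix in SUFFIXES:
            # Only strip if a reasonably long stem remains.
            if word.endswith(suffix) and len(word) - len(suffix) >= 4:
                word = word[: len(word) - len(suffix)]
                break
        stems.append(word)
    return " ".join(stems)

# Preprocessing the keyword list once, up front:
keywords = ["3D", "VR", "AR", "immersion"]
stemmed_keywords = {normalize(k) for k in keywords}
```

With this normalization, 'immersion' and 'immersive' map to the same stem, so either surface form in a paper's full text would match the keyword.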

Abstract 3D visualizations: sampling characteristics
Abstract data can be defined as data that has no inherent spatial structure or physical representation [Eic95]. Abstraction in visualization is achieved through the use of colour and shapes that are not directly related to the object in question [PBLH02]. We investigate publications dealing with abstract data visualizations. Therefore, we introduce three inclusion criteria that were used to filter the large number of IA papers:
1. Visualizations of abstract data (i.e. data that has no inherent mapping to space, including all visualizations of abstract data).
2. Visualizations of abstract data in a spatial context that are situated representations [WJD17] (i.e. the abstract visualization element displays data in proximity to data references but does not spatially coincide with them).
3. Embedded visualizations of abstract data in a spatial context that encode more than one attribute with visual variables (e.g. glyphs, space-time cubes).
At the same time, we exclude papers with visualizations from the following categories:
1. Pure non-abstract data visualizations (e.g. a 3D visualization of a brain, an engine or a map).
2. Embedded visualizations [WJD17] of abstract data in a spatial context that use only a single visual variable (i.e. visualizations in which the displayed data match data references, such as coloured blood vessels in a 3D brain model, text labels associated with a 3D engine visualization, or a 3D map with coloured dots representing specific locations).
Literature that introduces abstract data visualizations must specify either the data type as abstract or the visualization technique used to display the respective data. Therefore, we focused on both aspects when we created a second keyword list for filtering candidate papers, which contained keywords such as 'scatterplot', 'abstract data', 'high-dimensional' and 'PCP' (see the full list in Appendix III). Again, all keywords were preprocessed to avoid the influence of affixes on matching terms. As a result, we collected a second keyword list containing 79 keywords covering the concept of abstract data visualizations.

Paper sampling
We parsed the proceedings of the most important conferences in the field (i.e. BDVA, CHI, ERVR, EuroVis, IEEE VIS, IEEE VR, SD&A, UIST; see Appendix I) and applied a full-text keyword search. The keywords were chosen to be descriptive for immersive environments and abstract data visualizations (see Sections 3.1 and 3.2; lists of keywords in Appendices II and III). Whenever a combination of keywords from both categories was found in the text, the respective paper was selected as a potential candidate. After this automatic parsing and matching process, we created a candidate corpus of 256 papers. The initial selection of papers was intentionally subject to weak constraints in order not to exclude relevant papers and to create an extensive pool of candidates. Due to these weak constraints, the pool contained many false positives that did not meet our predefined criteria. Therefore, the set of 256 papers was reviewed and filtered in a manual screening process based on our strict definition of immersive environments (3.1) and abstract data visualizations (3.2). We also identified surveys and state-of-the-art reports and excluded them from further analysis. Such reports would distort a detailed analysis because they do not focus on a single new approach but describe several techniques in a single paper. The overall sampling process is depicted in Figure 2.
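The co-occurrence rule of the matching step (at least one keyword from each of the two lists must appear in the full text) can be sketched as follows; the function name and the toy corpus are our own illustration, and the real pipeline additionally applies the same preprocessing to the parsed full texts.

```python
def is_candidate(full_text: str, env_keywords: set, vis_keywords: set) -> bool:
    """A paper becomes a candidate only if its full text contains at least
    one keyword from BOTH lists (immersive environment AND abstract vis)."""
    text = full_text.lower()
    return (any(k in text for k in env_keywords)
            and any(k in text for k in vis_keywords))

env_keywords = {"virtual reality", "head-mounted display", "immersive"}
vis_keywords = {"scatterplot", "parallel coordinates", "abstract data"}

papers = {
    "paper_a": "We render scatterplots in an immersive virtual reality setting.",
    "paper_b": "We study scatterplot axes on a conventional 2D desktop.",
}
candidates = [pid for pid, text in papers.items()
              if is_candidate(text, env_keywords, vis_keywords)]
# paper_b matches a visualization keyword only, so it is filtered out.
```

Such a weakly constrained conjunction deliberately over-selects; the false positives are then removed in the manual screening step.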
Figure 2: The sampling process of our survey is based on parsing and keyword filtering of a large set of papers in PDF format. After a subsequent manual filtering step, the initial set of included papers is generated, which is then used as a starting point for the final expansion by manually parsing the reference lists of included papers (snowballing).

To give an example: the title of the paper 'Objective and subjective assessment of stereoscopically separated labels in AR' [PAE08] sounds promising in the context of our review. However, after reviewing the paper, it turned out that (a) it does not apply IA principles and (b) the labels mentioned in the title identify airplanes rendered as simple 3D objects, thus 'representing embedded visualization of abstract data in a spatial context that only uses a single visual variable'. Therefore, this paper was excluded. Another example is the work of Greffard et al. [GPK15]. Although the abstract data visualization criteria are very well suited for this work, as it evaluates graph visualization, it does not fit the second criterion, which concerns the immersive environment: although a stereoscopic screen is used for the visualization, the paper uses a static visualization, and no head-tracking is involved; hence, the degree of immersion is relatively low. Our manual screening procedure resulted in a set of 35 papers (our basic set) and served as a starting point for further paper acquisition.
To broaden the scope of our initial semi-automatic sampling strategy, we used a snowball sampling technique [Woh14]. More precisely, we recursively scanned the references of all papers in our basic set and checked them for relevance. Using this approach, we collected another 23 papers in two iterations, so that a total of 58 papers were subjected to our detailed review process. With this approach, we were also able to cover a wide range of journal papers in addition to the originally parsed set of conference proceedings. Some papers found during the recursive parsing procedure were not detected during the semi-automatic sampling because they were not included in the paper pool (different venues, excluded years), because parsing errors led to mismatching keywords, or because no or only one keyword was used in the paper. We carefully tried to optimize the PDF parsing process and did our best to identify papers with parsing errors in order to scan them manually for relevance. However, with a set of over 20,700 papers in total, it is impossible to guarantee that not a single paper with parsing errors was overlooked.
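The snowballing procedure is essentially a bounded breadth-first traversal over reference lists; the toy citation graph and the relevance predicate below are hypothetical stand-ins for the real bibliographies and the manual relevance check.

```python
from collections import deque

def snowball(seed_papers, references, is_relevant, max_iterations=2):
    """Expand a seed set by recursively following references for a fixed
    number of iterations, keeping only papers that pass the relevance check."""
    included = set(seed_papers)
    frontier = deque(seed_papers)
    for _ in range(max_iterations):
        next_frontier = deque()
        while frontier:
            paper = frontier.popleft()
            for ref in references.get(paper, []):
                if ref not in included and is_relevant(ref):
                    included.add(ref)
                    next_frontier.append(ref)  # follow its references next round
        frontier = next_frontier
    return included

# Toy citation graph: the seed cites A and B; A cites C; B cites D.
references = {"seed": ["A", "B"], "A": ["C"], "B": ["D"]}
relevant = {"seed", "A", "B", "C"}  # D fails the relevance check
result = snowball(["seed"], references, relevant.__contains__)
# result == {"seed", "A", "B", "C"} after two iterations
```

Two iterations correspond to the two snowballing rounds described above; deeper recursion would simply raise `max_iterations`.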

Analysed characteristics
In the following, we will introduce the classification used to group and organize the set of inspected papers. We classified the papers according to six characteristics: paper type, technology, environment type, data type, visualization technique and analysis task. The classifications of paper type and data type were adopted from Munzner [Mun08], whereas the classifications of the other characteristics were derived in a bottom-up approach from the inspected literature. All papers were assigned to one or more classes per characteristic; only for paper type was each paper assigned to exactly one class.

Visualization technique
Starting from categories found in VA-related literature like in Munzner [Mun14], we identified seven types of visualization techniques used in the investigated set of papers. Additionally, we grouped techniques appearing only once into the supplementary category 'others'.

Analysis task
Based on Andrienko and Andrienko's task taxonomy [AA06], we classified the papers into higher-level (synoptic) tasks and elementary tasks. For a more detailed analysis, we further distinguished seven categories of analysis tasks, assigning a paper to a category if the respective task was mentioned as a valid analysis task for the proposed technique. Papers that are not assignable to any of the seven classes or do not explicitly state an analysis task are categorized as 'other' or 'not specified', respectively.

Paper type
All papers were assigned to exactly one of the following paper types (adopted from Munzner [Mun08]).
1. Technique: Papers presenting novel algorithms and techniques.
2. Evaluation: Papers with a focus on the assessment of an application, approach or technique.
3. System: Papers describing the architecture of a framework.
4. Model: Papers providing a theoretical view of things.
5. Design study/Application: Papers presenting the application of existing techniques to solve a certain problem in a certain domain.

Technology
Immersive technologies were categorized into three groups, which cover all technologies deployed in the set of papers under consideration.
1. Monitor/Projector: 3D monitors or projectors used to create semi-immersive environments.
2. CAVE: Video-wall environments of different shapes.
3. HMD: Head-mounted displays worn by a user to enter AR or VR environments.

Environment type
Since a prerequisite for the inclusion of papers was the embedding of the approach on the virtuality continuum, and we are not aware of any paper making use of AV under the given conditions, the resulting corpus contains only AR and VR papers.
1. AR: Augmented reality, in which virtual objects are embedded into the real environment.
2. VR: Virtual reality, in which a purely virtual environment is perceived.

Data type
The datasets used were categorized into four classes (adopted from Munzner [Mun08]).

1. Table: Items in a table refer to individual data points, whereas attributes or dimensions refer to the data dimensions of the data points. The combination of an item and an attribute is reflected in a single cell containing a value.
2. Field: In a continuous domain, fields represent attribute values associated with cells. The resolution depends on how densely the measures are sampled: measures that are closer together lead to a higher resolution, while measures that are further apart yield a coarser grid.
3. Geometry: Geometric items can be points, lines, curves, 2D surfaces or 3D shapes with an explicit spatial position. Geometric datasets can come with or without additional attributes; the former makes their visualization a challenging task.
4. Network: Data points that have a relationship to each other can be specified in the abstract concept of a network with nodes and links. These nodes and links can be associated with attributes specified in tables. In this paper, we do not distinguish between networks with cycles and hierarchical structures.

Literature Review
In the following subsections, we present our results for each of the six dimensions considered, in the same order as introduced in the previous section. In each subsection (dimension), we will first provide an overview of the dimension itself and its development over time and then take a close look at all its classes. The classes are categorical and, therefore, presented without a specific order. Papers within each class are discussed in semantic, then chronological order. For an overview of all analysed papers, we provide a link to an online browser that offers advanced search, filtering and comparison options: https://iasurvey.dbvis.de (see Figure 3). The online platform allows not only the overview to be expanded with future work but also missing papers or even missing dimensions to be added. The overall distribution of papers considered in this survey is shown in Figure 4. While research interest was high in the late 1990s and early 2000s, we observed a decline in research papers in the late 2000s and early 2010s. This could be explained by increased research efforts regarding the application of immersive environments for non-abstract immersive visualizations and a generally declining research interest in abstract data. In recent years, we have noticed a strong research trend towards abstract 3D visualizations in immersive environments. This could be due to the steady progress of technology and its improved availability.

Visualization techniques
While a variety of visualization techniques has been developed to meet different requirements regarding data types, visualization aims and tasks, these techniques also have their own requirements, affordances and restrictions regarding the environment in which they are used and the associated interaction operations. As a result, the potential design space for their integration, and the effectiveness and efficiency of the techniques, can vary depending on the immersive environments and devices. Related aspects such as field of view and field of regard, resolution, screen size and computational power may affect the suitability accordingly. A considerable number of papers employ multiple visualization techniques, often with different levels of support and description, so multiple entries are possible (see Table 1).
Our analysis of these visualization techniques shows an unbalanced distribution, with a clear focus on two types of techniques under investigation: node-link graph and scatterplot, which comprise 39 out of 58 papers. As can be seen in Figure 5, there has been a sustained interest in these techniques for many years, while the investigation of geographic visualizations has recently experienced a boom. While text visualization is a natural component of many visualization approaches, e.g. for labelling, and there are approaches to analyse text corpora, we have not found any paper that focuses on a text visualization technique for IA approaches. This may be related to the fact that for many immersive technologies, such as VR and AR HMDs, current devices have some shortcomings in terms of dynamic text rendering due to relatively low resolution and the additional impact of the distance and perspective of 3D objects, which makes it difficult to create high-quality text visualizations. Various papers contained volume and geographic visualizations but were excluded due to insufficient levels of abstraction. For example, papers were excluded that only visualize 3D models of blood vessels or present 3D geo-maps without encoding additional information. A few papers did not fit into a clear-cut classification or would form a single-element class of their own, such as a SOM-based visualization approach for multidimensional data [WLM11] or heightmap visualizations [KBS*19], and were therefore subsumed under 'other'. In the following, we take a close look at each class of visualization technique.

Node-link graphs
Since they lack predefined axes, dimensions and directions, network visualizations offer much freedom in creating visual representations. However, this also means that there is usually a less unified user experience, and many design decisions can distort the validity and effectiveness of solutions. In addition, networks created from application data exhibit a variety of characteristics, including scale but also structural features. Techniques might be suitable only for very specific subclasses of networks, and the impact of the immersive environment might further strongly influence usability. Therefore, the practical evaluation of techniques in user studies is of utmost importance. On the other hand, the potential of visual network analysis in immersive environments has already been demonstrated in a number of studies that focus on aspects such as improved collaboration and interaction, better perception of network features through stereoscopic views and visual scalability (see the following examples). With the ever-increasing size and complexity of datasets, the question of how to support the human mental map for navigation in network visualizations is also gaining importance in current research.
Perception & human factors: In their seminal work on stereoscopic 3D perception of networks, Ware and Franck [WF96] evaluated the influence of stereoscopic 3D visualization and motion cues compared to 2D visualization under various conditions, including head tracking, in a setup with shutter glasses and fish tank VR. Expanding on their earlier report [WF94], they found clear improvements when the head-coupled stereo condition was applied, but also argued that the type of motion applied, for example, automatic rotation, should depend on the application and the required interactions. In view of the special setup and the relatively small size of the random networks used, the results must be checked for more general evidence. Belcher et al. [BBHS03] investigated the use of AR for the analysis of complex networks and provided a user study based on artificially created networks. They reproduced the classical experiments by Ware and Franck [WF96] and compared AR with 2D and 3D screen settings using simple node-link visualizations. They conclude that the limitations of the AR technology, for example, regarding colour and contrast, might still hamper its effectiveness in task performance. Given the improvement in technology in recent years, this hypothesis could be reevaluated with current technology. While some investigate human factors and perception on network visualizations in order to improve them or associated tasks, others simply make use of graph visualizations as a means for general assessments of perception or human factors. For instance, Krekhov [CDK*17] compared the task performance in graph analysis tasks (triangle counting and shortest path finding) for a collaborative setting with two participants in a team.
While participants were faster in the HMD setting and movement differed between team members in the CAVE setting, no other significant differences were found in the collaborative task solving, which shows the potential of the VR HMD technology for such setups. Using VR HMDs with hand-held controllers, Drogemuller et al. [DCW*18] evaluated the task effectiveness of four navigation techniques for graph analysis; one- and two-handed flying was perceived by participants as faster and was preferred over teleportation in search tasks (see Figure 6). Slay et al. [SPVT01] used AR with fiducial marker-based interaction for the visual analysis of trees and graphs to demonstrate new object manipulation techniques.
Graph layout: Kwon et al. [KMLM16] investigated the use of a spherical layout to improve network perception and interaction in VR and performed a comparative analysis of 2D and 3D graph layouts. They found that in their setup with networks of up to 297 nodes and 2359 edges, participants solved tasks with the spherical layout and the corresponding interaction technique significantly faster and with a significant increase in correct answers for larger graphs.
Analytic provenance & processes as graphs -Besides such typical approaches that are concerned with the visualization of network or hierarchical data, other visualizations exist that fall into the same category of 'node-link graphs'. For instance, Hayatpur et al. [HXW20] present a visualization approach for analytic provenance graphs, which are generated throughout analysis procedures. Even though this is not a typical graph visualization problem, it still is, in principle, a node-link graph visualization of abstract data. Similarly, the approach by Zenner et al. [ZMK*20], in which abstract process models are transformed into interactive 3D environments, resembles node-link visualizations.
Domains -Since biology is one of the most important application areas for network analysis and visualization, a major research focus is on techniques that take into account the specifics of the corresponding datasets, tasks and notations. In particular, the flood of data resulting from high-throughput 'omics' technologies, for example, for proteomics and genomics analysis, and the resulting need for methods that can cope with the scale and complexity are a driving factor for current research. Ferey et al. [FGHG05] described the Genome3DExplorer for the investigation of genome data. Networks are used to model binary relations between genomic entities, such as yeast gene coexpression, and an interactive 3D visualization based on a force-directed layout is provided. Maes et al. [MMD*18] presented MinOmics, an analysis pipeline for multi-omics data, and discussed several scenarios for using interactive, immersive environments such as stereoscopic display walls and VR HMDs while describing implementation work in progress in this direction. Stolk et al. [SAK*02] presented an approach to mine genomics data in which relationships between entities are represented as node-link representations in 3D VR without focusing on a specific environment or device. Due to the density of the resulting networks, they resort to edge filtering based on similarity values. As abstract data are often associated with spatial data, representation and navigation in such cases must take both data types into account. Sommer et al. [SWX*15] investigated a combined 2D and 3D approach for navigation and demonstrated it with an application on cytological network exploration that links network visualizations to 3D cell model rendering.
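Force-directed layouts, as used in Genome3DExplorer, position nodes via simulated physical forces: all node pairs repel while connected nodes attract. A minimal, self-contained 3D sketch in the style of Fruchterman-Reingold (an illustration, not the authors' implementation; the example node names are hypothetical):

```python
import math, random

def force_layout_3d(nodes, edges, iters=200, k=1.0, step=0.05, seed=42):
    """Minimal 3D force-directed layout: all pairs repel,
    connected nodes attract; displacements are capped per step."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1, 1) for _ in range(3)] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0, 0.0] for v in nodes}
        for i, u in enumerate(nodes):          # pairwise repulsion
            for v in nodes[i + 1:]:
                d = [pu - pv for pu, pv in zip(pos[u], pos[v])]
                dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
                f = k * k / dist
                for a in range(3):
                    disp[u][a] += d[a] / dist * f
                    disp[v][a] -= d[a] / dist * f
        for u, v in edges:                     # attraction along edges
            d = [pu - pv for pu, pv in zip(pos[u], pos[v])]
            dist = max(math.sqrt(sum(c * c for c in d)), 1e-6)
            f = dist * dist / k
            for a in range(3):
                disp[u][a] -= d[a] / dist * f
                disp[v][a] += d[a] / dist * f
        for v in nodes:                        # apply capped displacement
            for a in range(3):
                pos[v][a] += max(-step, min(step, disp[v][a]))
    return pos

# hypothetical gene-interaction toy graph
layout = force_layout_3d(["p53", "mdm2", "atm"],
                         [("p53", "mdm2"), ("p53", "atm")])
```

Production systems typically add cooling schedules and spatial acceleration structures; edge filtering, as applied by Stolk et al., simply reduces the edge list before layout.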

Scatterplots
Scatterplots and scatterplot matrices are well-known visualization techniques for the analysis of high-dimensional data in 2D. Starting in the late 90s, researchers investigated their performance in VR environments. Since then, the visualization technique has gained more and more popularity - also as a secondary tool for evaluating human factors or user experience with largely independent interaction techniques or hardware. In total, we analysed 24 papers that make use of scatterplot visualizations.
Immersion, visual perception & user experience -Although many use cases were introduced, experimental evidence demonstrating the benefits of 3D scatterplots in VR is rare. Arns et al. [ACCN99] and Nelson et al. [NCCN99] investigated participants during a cluster identification task in a VRE and on a desktop monitor. Results showed that participants in the VRE performed almost twice as well but needed a little more time to become familiar with the interaction possibilities. The authors attribute this to the display of 'true' three-dimensionality and the improved perception of structures in the VRE. However, the question of whether the benefits are due to the additional third dimension or to the fact that analysts are more immersed in the analysis is still pending. To study the effect of physical engagement in an immersive environment, Raja et al. [RBLN04] conducted an experiment using a CAVE with one and four walls. Participants had to analyse data in a 3D scatterplot. Additionally, the authors enabled head tracking for one group of participants to increase the level of immersion. Results suggest that participants are more efficient in a highly physically immersed environment such as a CAVE with four walls and head tracking. The VR-Miner tool [APGV06] makes use of scatterplots for interactive mining of multimedia data in VR and maps visual variables to display different types of data in an abstracted form. The authors argue for the benefit of the deployed VR environment due to the easier perception of the presented data and direct interaction capabilities. However, the authors did not directly compare the performance or usability of their VR solution with a conventional screen-based setup. Filho et al. [WRFN17] compared screen-based setups with HMDs for visual exploration tasks on multidimensional data represented as scatterplots.
Their study results showed beneficial effects of immersion in terms of distance perception and outlier identification tasks. In addition, they demonstrated higher accuracy and engagement scores when participants were in the immersive environment. Similarly, Whitlock et al. [WSS20] used scatterplots to investigate a possible difference in the perception of visual variables between different media (AR, VR, screen).
Orientation & navigation -Etemadpour et al. [EML13] compared different segregation and precision tasks performed in VREs and non-immersive 2D environments on the screen. The three-dimensional data were displayed as 3D scatterplots. Correctness, timing and confidence were higher for several tasks when participants were in the immersive environment. Among other things, they were able to show that participants were able to approximate distances better when they were in the virtual environment. However, they reported a loss of orientation when participants were in the immersed environment. This loss of orientation was also identified by Kraus et al. [KWO*20] in their controlled user study. They propose not to surround users with data points but to provide a restricted area in the VRE where data are displayed. Such an overview prevented the aforementioned loss of orientation during a cluster identification task. In comparison to more abstract representations on 2D screens, the VRE increases memory and orientation capabilities by providing more natural navigation in the data space.
Interaction -First prototypes of 3D scatterplots were introduced by Symanzik et al. [SSC*93] and van Teylingen et al. [VRV97]. Both approaches made use of similar interaction techniques, such as rotating and moving the visualization space and interactive menus to select or filter data. However, there was no evidence as to whether the third dimension really improves the analysis of scatterplots. In the following years, the design and interaction space of 3D scatterplots was extended with scatterplot matrices [NGM01], more sophisticated interaction techniques like details-on-demand [NGBV08], and authoring approaches such as ImAxes [CCD*17], which represents data dimensions in the environment as axes. Thus, analysts could easily add, rotate or combine axes to create scatterplots, scatterplot matrices or other kinds of visualization techniques. Likewise, Sicat et al. [SLC*19] implemented an approach to create immersive visualizations like 3D scatterplots. Their toolkit comprises a simple grammar and provides reusable templates for easy creation of unique 3D visualizations.

Parallel coordinates
PCPs are an established technique that strives to overcome the limitations of scatterplots for high-dimensional data and to support traceability across all dimensions for a data point. While 3D might help to alleviate the problem of finding the correct order of dimensions, as only correlations between adjacent dimensions are clearly perceptible, there is only very limited work on PCPs for IA, mainly describing techniques that are available in software implementations without a deeper analysis of the benefits and the potential in the IA design space.
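At its core, a PCP maps each data record to a polyline across a set of axes; in an immersive 3D variant, the axes can additionally be anchored at arbitrary positions in space, for example, in a row or a circle. A minimal sketch (the dataset fields and axis positions are illustrative assumptions):

```python
def pcp_polyline(record, axes):
    """Map one data record to a polyline through PCP axes.
    Each axis is (field, (x, z), lo, hi): an anchor position on the
    floor plane plus the value range; height (y) encodes the value."""
    verts = []
    for field, (x, z), lo, hi in axes:
        y = (record[field] - lo) / (hi - lo)  # normalize value to [0, 1]
        verts.append((x, y, z))
    return verts

# Hypothetical car dataset: three axes placed in a row on the floor.
axes = [("mpg", (0.0, 0.0), 9, 47),
        ("hp", (1.0, 0.0), 45, 230),
        ("weight", (2.0, 0.0), 1500, 5000)]
line = pcp_polyline({"mpg": 28, "hp": 90, "weight": 2500}, axes)
```

Reordering or repositioning the axis anchors, as systems like ImAxes allow interactively, only changes the `axes` list; the per-record mapping stays the same.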
Dynamic PCP layouts -The previously mentioned visualization authoring toolkit ImAxes [CCD*17] allows the arbitrary arrangement of axes and, with that, the creation of PCPs and 3D variants thereof - such as circularly arranged ones or 3D PCPs in which 2D scatterplots are connected with lines. Similarly, GeoVisor, a system presented by Billow et al. [BC17], allows users to visualize data in various ways - including as PCPs. However, unlike ImAxes, GeoVisor is not optimized for interactively authoring new visualizations but provides a sample framework for evaluation. Along with the proposed system, they demonstrated its evaluation based on heuristics. In the ART framework, 3D PCPs are displayed in AR above an interactive tabletop (see Figure 8); its authors concluded that the approach facilitates immersion in the data, fluid analysis processes and collaboration. They argue that the combination of AR with touch input could improve usability due to familiar, precise and physically undemanding touch interactions compared to the gesture-based interaction capabilities typically provided in AR.

Figure 9: Immersive glyphs created with Gistualizer [Bel17]. The appearance of the virtual environment itself represents one single data point; hence, the created visualization resembles a glyph representation of abstract data. Image courtesy of Martin Bellgardt.

Glyphs, icons & symbols
Mirroring the relatively small number of general research papers on techniques based on glyphs, icons and symbols, only few papers discuss these techniques in the context of IA. A particular challenge here is how these simplified representations can be lifted into the immersive design space so that they benefit from the extended possibilities, for example, stereoscopic 3D vision, without losing the advantage of simplicity.
Glyphs in VR -An early example of how this could be accomplished is the 'Virtual Data Visualizer' presented by van Teylingen et al. [VRV97], an immersive VR environment for visualizing data points as customizable glyphs. The system deploys traditional menus within the VR environment in combination with direct icon manipulations in 3D to allow users to create and customize glyph visualizations by manipulating mappings between variables and glyph elements.
Virtual environments as glyphs -More recently, Bellgardt et al. [Bel17] presented 'Gistualizer', a tool for visualizing single, multidimensional data points as 'immersive glyphs'. A landscape is automatically generated from the properties of the data point, with different dimensions defining the appearance of the landscape. For instance, one attribute defines the number of houses, another the height of mountains. Figure 9 depicts immersive glyphs created with Gistualizer, where one dimension varies between the five data points, affecting the depicted time of the year in the respective glyphs. The visualizations of abstract process models in the form of interactive 3D environments presented by Zenner et al. [ZMK*20] can also be seen as large glyphs. Similar to the approach in 'Gistualizer', the environment is automatically generated based on an abstract data foundation and can be explored by the user.
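The essence of such environment-as-glyph approaches is a mapping from normalized data dimensions to scene parameters. A minimal sketch of this idea - the attribute names, ranges and scene parameters here are hypothetical, not Gistualizer's actual mapping:

```python
def data_point_to_scene(point, ranges):
    """Map one multidimensional data point to scene parameters:
    each data dimension drives one visual property of the landscape."""
    def norm(name):
        lo, hi = ranges[name]
        return (point[name] - lo) / (hi - lo)
    return {
        "num_houses": round(1 + norm("population") * 19),   # 1..20 houses
        "mountain_height": 50 + norm("elevation") * 450,    # metres
        "season": ["winter", "spring", "summer", "autumn"][
            min(3, int(norm("month") * 4))],
    }

ranges = {"population": (0, 100), "elevation": (0, 1), "month": (1, 13)}
scene = data_point_to_scene(
    {"population": 40, "elevation": 0.5, "month": 7}, ranges)
```

A scene generator would then instantiate the returned parameters; comparing several data points amounts to walking between the generated environments.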

Geographic
Geographic visualizations are among the earliest data representations due to their use in exploration, navigation, urban planning and agriculture. They involve geospatial referencing information and therefore often use map-based representations that are complemented by other abstract data representations such as availability or consumption of resources [DMT08]. In particular, these visualizations can take advantage of immersive environments in terms of available space and navigation capabilities compared to traditional desktop settings. Since the benefit of specific data representation concepts, especially including stereoscopic 3D representations and interaction, can vary greatly between classic desktop settings and immersive environments, many concepts are revisited to explore the new possibilities.

Climate visualization -A promising application area for which the use of stereoscopic 3D visualizations in immersive environments can be further explored is the investigation and presentation of climate data. Important questions include how data dimensions such as temperature, heat flux, precipitation or humidity can be presented in a 3D visualization of the environmental context, how they can be combined without visual occlusion and overload, and how dynamics can be integrated. Helbig et al. [HBR*14] presented a workflow for the integration of heterogeneous data from simulation model variables and observed data with topographic features and other data about the environment in which they are situated. The workflow is designed for a VR environment and demonstrated in a projection-based stereoscopic virtual environment. Baltabayev et al. [BGB*18] addressed a similar problem, the visualization of collected environmental data from sensors deployed in the environment, and demonstrated a concept based on the reconstruction of the real environment in VR.

Space-time cubes (STCs) -
The STC is one of the standard visualization approaches for spatio-temporal geographic data in which lines are drawn within a cube and their position encodes geolocation (x/z) and time (y/height). The technique was also ported to immersive environments. For instance, Saenz et al. [SBHP17] investigated the utility of immersive 3D visualizations for geographic data. They deployed AR holograms of STCs and proposed a study design for future investigations. Similarly, Wagner et al. [WSN19] presented a user study for an immersive STC implementation using gestures and tangible controls for interaction and a desk-based metaphor instead of flying or physical walking (see Figure 10). According to them, their study results indicate clear qualitative benefits for the exploration of trajectories with immersive STCs.
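The STC coordinate mapping itself is straightforward: geolocation is normalized onto the cube's floor plane and time onto its height. A minimal sketch (a generic normalization, not any particular system's implementation):

```python
def trajectory_to_stc(samples, bounds, t0, t1, cube=1.0):
    """Map (lon, lat, t) samples into space-time-cube coordinates:
    x/z encode geolocation, y (height) encodes time."""
    lon0, lat0, lon1, lat1 = bounds
    verts = []
    for lon, lat, t in samples:
        x = (lon - lon0) / (lon1 - lon0) * cube
        z = (lat - lat0) / (lat1 - lat0) * cube
        y = (t - t0) / (t1 - t0) * cube   # older samples sit lower
        verts.append((x, y, z))
    return verts

# toy trajectory: three samples across a 20x20 degree region, 100 s span
path = trajectory_to_stc([(0, 0, 0), (10, 10, 50), (20, 20, 100)],
                         bounds=(0, 0, 20, 20), t0=0, t1=100)
```

Consecutive vertices are connected into a polyline; a stationary object thus appears as a vertical line, and steeper segments indicate slower movement.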

Volume
The visualization of 3D volume data has attracted much attention in SciVis research. We identified seven papers that investigate the use of volume visualization with abstract visualization elements within IA approaches. Günther et al. [GPG*19] introduced a Java framework for VR/AR biovisualization that can process mesh and large volumetric data with multiple views, points in time and colour channels using the OpenGL and Vulkan rendering APIs. This work presented a simulation of 10,000 agents that together form a sphere and an out-of-core 500 GiB multi-timepoint embryo dataset. DXR [SLC*19] is a Unity-based toolkit for creating immersive visualizations using a concise declarative visualization grammar and an in-situ GUI. DXR's visualization pipeline supports templates and customizable graphical marks, which can be used to specify unique and engaging visualizations. DXR infers missing parameters to reasonable defaults and uses the inferred specifications to construct a 3D visualization that can be placed in a VR scene. A main focus of DXR is the visualization of abstract data by using 3D flow fields and streamlines, bar charts and scatterplots in combination with graphical marks and visual encoding parameters. These abstract data elements can be embedded in concrete virtual environments, such as a virtual basketball court or airplanes.
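The declarative idea behind such grammars can be illustrated with a small sketch: a specification maps data fields to visual channels, and an interpreter turns data rows into concrete mark instances. Note that the spec structure and field names below are illustrative, not DXR's actual schema:

```python
# A hypothetical declarative spec in the spirit of DXR-style grammars.
spec = {
    "mark": "sphere",
    "encoding": {
        "x": {"field": "gdp", "range": (0.0, 1.0)},
        "y": {"field": "life_expectancy", "range": (0.0, 1.0)},
        "size": {"field": "population", "range": (0.01, 0.1)},
    },
}

def compile_marks(spec, data, domains):
    """Turn a declarative spec plus data rows into mark instances by
    linearly rescaling each field from its domain to the channel range."""
    marks = []
    for row in data:
        mark = {"type": spec["mark"]}
        for channel, enc in spec["encoding"].items():
            lo, hi = domains[enc["field"]]
            t = (row[enc["field"]] - lo) / (hi - lo)
            r0, r1 = enc["range"]
            mark[channel] = r0 + t * (r1 - r0)
        marks.append(mark)
    return marks

marks = compile_marks(
    spec,
    [{"gdp": 50, "life_expectancy": 70, "population": 500}],
    {"gdp": (0, 100), "life_expectancy": (40, 100), "population": (0, 1000)})
```

Inferring missing parameters, as DXR does, would amount to filling in default domains and ranges before this compilation step; the resulting marks are then instantiated as scene objects.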
Interaction & user experience -Various studies and experiments were conducted on interaction and user experience in immersive volume visualizations. Some of them also satisfy our criteria for abstract visualization elements. For instance, van Teylingen et al. [VRV97] presented a tool for virtual heterogeneous data exploration and analysis. The internal data are hierarchically organized in customizable classes. Therefore, abstract data can be the basis for such a class. The system was demonstrated by means of visualizing molecular dynamic simulations of biochemical structures and the fluid dynamic simulation of a tilting rotor blade in hovering mode. In the context of this review, it is interesting that vector glyphs are used to depict the velocity field near the tip of the rotor. Here, already in 1997, different glyphs and menus provided various interaction methods. Similarly concerned with interaction modalities, Hyde et al. [HHC18] discussed an approach that offers a number of features for viewing and interacting with geological models in VR using the Oculus Rift. It offers human-centric navigation and manipulation, implicit surface editing and visual conditioning. Volumetric grid data, including cross-sections, can be visualized and explored, and uncertainty data can be mapped to abstract and geological surfaces, for example, in the context of drill hole planning.

Flow
Our analysis revealed four papers concerned with flow visualizations in a broader sense. While two demonstrate visualization approaches for flow data, the remaining papers focus on the design and evaluation of interaction methods and user experience by means of flow visualizations.
Trajectories & movement -In an application paper, Hurter et al. [HRD*19] introduced FiberClay, an immersive multidimensional visualization system to visualize and analyse huge amounts of 3D trajectories in VR (see Figure 11). They demonstrated the applicability and usefulness of their approach by means of use cases and expert evaluations from the domains of air traffic control and neurology. Similarly, Homps et al. [HBV20] present an approach for the interactive analysis of 3D trajectories in immersive environments but set the focus on the comparison of different selection modes in which different basic 3D shapes are deployed.
User perception -Barrie et al. [BCC05] focused on the evaluation of user performance when working with flow visualizations in VREs. They presented a study on the impact of immersion on the users' ability to analyse particle flows in a virtual environment. They concluded that an increased field of regard and a high degree of immersion can lead to better comprehension scores for the interpretation of particle flows. A further approach deploys interactive lenses for flow exploration in VR. In this approach, two categories of lenses - 3D and Caecal - are merged to enable seamless analytical exploration of multi-geometry data using the focus+context paradigm in VR. As application cases, the aerodynamics of wind turbines was visualized with flow lines, and for the exploration of an aneurysm, the lens patch provided depth information to improve the perception of surface shape and vessel-blood flow relations.

Other
In this category, we grouped papers that apply rare visualization techniques that could not be assigned to any of the other groups. Wijayasekara et al. [WLM11] aimed to improve the usability of self-organizing maps (SOMs) for multidimensional data by providing an interactive neuron map visualization of SOMs as a 3D cube. They stated that the interactive 3D visualization helps to gain insight into the topology and relationships in the data. The visualization toolkit DXR of Sicat et al. [SLC*19] allows the visualization designer to create all kinds of custom visualizations, such as 3D bar charts, and is not limited to a certain set of visualization techniques. Schroeder et al. [SAHC20] visualized data as a combination of a bubble chart and a bee-swarm plot. Kraus et al. [KAB*20] conducted a study on 3D heightmap visualizations for comparative analysis tasks and compared them with juxtaposed 2D heatmaps. Their results indicate a potential benefit of immersive environments for certain comparative tasks, such as estimating the relative offset of given locations in heatmaps.
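Visualizing a SOM as a 3D cube presupposes that the neurons themselves are arranged on a 3D grid. A minimal, self-contained SOM training sketch with a cubic neuron lattice (a conceptual illustration of the technique, not the CAVE-SOM implementation):

```python
import math, random

def train_som_3d(data, grid=4, iters=500, seed=0):
    """Minimal self-organizing map with neurons on a grid x grid x grid
    cube: the best-matching unit and its neighbours are pulled toward
    each training sample, with decaying rate and neighbourhood."""
    rng = random.Random(seed)
    dim = len(data[0])
    coords = [(x, y, z) for x in range(grid)
              for y in range(grid) for z in range(grid)]
    weights = {c: [rng.random() for _ in range(dim)] for c in coords}
    for step in range(iters):
        lr = 0.5 * (1 - step / iters)                  # learning rate decay
        radius = 1e-9 + grid / 2 * (1 - step / iters)  # neighbourhood decay
        sample = rng.choice(data)
        bmu = min(coords, key=lambda c: sum(
            (w - s) ** 2 for w, s in zip(weights[c], sample)))
        for c in coords:
            d2 = sum((a - b) ** 2 for a, b in zip(c, bmu))
            h = math.exp(-d2 / (2 * radius * radius))  # Gaussian neighbourhood
            for i in range(dim):
                weights[c][i] += lr * h * (sample[i] - weights[c][i])
    return weights

# toy 2D data; each neuron's grid coordinate becomes its cube position
weights = train_som_3d([[0.1, 0.2], [0.8, 0.9], [0.2, 0.8]], grid=3, iters=100)
```

In a cube visualization, each neuron is drawn at its grid coordinate, with colour or size encoding, for example, its hit count or its distance to neighbouring neurons.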

Analysis task
In the previous section, we have seen that the number of studies and their reported successes strongly depend on the visualization method considered. In this section, we shift our focus to analysis tasks and explore how different tasks were investigated in IA solutions (see Figure 12). In a bottom-up approach, we identified seven types of analysis tasks and assigned each paper to one or more of them. Several publications present frameworks [SSL*00, CCD*17, SLC*19] or applications of visualization techniques [NGM01, SBHP17] without explicitly specifying concrete tasks and are, therefore, not considered in this section. Dominating classes are 'Visual Search' and 'Overview & Details on Demand'. Table 2 gives an overview of the classification of tasks used along with the cited papers. In the following, we take a close look at each class of analysis task and summarize our findings.

Clustering/classification
Most papers that deploy clustering or classification tasks use them as a tool to compare differences in perception and analysis efficiency between different media. There are many works that make use of clustering or classification as typical visual analysis tasks to evaluate different types of CAVEs [NCCN99, ACCN99], to conduct cross-comparisons between multiple media (2D screen, HMD VR, CAVE) [RBLN04, EML13], or to compare HMD VR environments with 2D screen environments [WRFN17, KWO*20]. For instance, Etemadpour et al. [EML13] compared the CAVE environment to a conventional 2D screen setup and measured the task performance of users completing various clustering tasks, such as counting clusters, finding the cluster closest to a given cluster, and detecting the densest cluster. We identified one paper that presents a cluster identification task as a use case without quantitative evaluation [APGV06], and one paper with a qualitative expert evaluation in which cluster identification is treated as a task that can be easily solved with the presented technique [BHM*18]. While all previously mentioned works make use of scatterplot visualizations, the latter is the only one that uses clustering or classification tasks on another type of visualization - namely, a 3D PCP.

Anomaly detection
Even though anomaly detection is an essential component of data analysis, it is rarely investigated in the context of IA. In our set of papers, only two explicitly deploy an outlier detection task in their user studies [RBLN04,WRFN17]. In addition, Arns et al. [ACCN99] did not specify the anomaly completely but let study participants search for 'outstanding characteristics' in statistical data.

Pattern analysis
Similarly to anomaly detection, pattern analysis is a popular data analysis task. Van Teylingen et al. [VRV97] presented abstracted volume visualizations of molecules and argued for their capability to convey the structure of the molecules and give an overview of the explored data space. Similarly, Wijayasekara et al. [WLM11] stated that visualizing SOM neurons in a 3D cube helps analysts to understand the topology of the network and to get an overview of relationships in the high-dimensional data.
Apart from overview capabilities on certain topological data types, Bellgardt et al. [Bel17] explicitly elaborated on their details-on-demand approach in the visualization and exploration of high-dimensional data. They presented a visualization technique for the inspection of a single data point as an immersive landscape glyph. Moreover, Hayatpur et al. [HXW20] try to exploit improved spatial memory capabilities by laying out a user's analytic provenance graph in virtual space. Results of their qualitative user study indicate beneficial effects of the provided spatial layout of the workflow for data exploration and data understanding.

Comparative analysis
We identified six papers that describe comparative analysis tasks with different visualization objectives, such as protein structures.

Data enrichment
Label placement is a frequent and essential task in AR. The required dynamic positioning of text snippets brings together both the classic challenges of label placement in static 2D drawings and the issues arising from viewpoint movement, distance changes and occlusion in 3D. However, label placement was rarely investigated in the context of abstract 3D visualization. One rare example is presented by Azuma and Furmanski [AF03], who evaluated label placement algorithms, including both cognitive and perceptual issues, and found no clear relation between their label movement metrics and the users' performance. However, they did find indicators that label overlap is a critical factor in readability, and therefore the choice of the right placement algorithm depends on the use case (much vs. little change in user viewpoint). In addition to label placement, two other works present techniques for data enrichment - in graph visualizations as interactive selection and manipulation of nodes [SPVT01] and in scatterplot visualizations in terms of point selections and custom annotations [RBLN04, RFD20].
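Since label overlap emerged as a critical readability factor, a simple axis-aligned overlap test over candidate screen-space placements can already drive a greedy placement heuristic. A generic sketch of that idea (not Azuma and Furmanski's evaluated algorithms; offsets and label size are illustrative):

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test; rectangles are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_labels(anchors, size=(10, 4)):
    """Greedily pick, per anchor, the first candidate offset whose label
    rectangle does not overlap any already placed label."""
    offsets = [(2, 2), (2, -6), (-12, 2), (-12, -6)]  # right/left, above/below
    placed = []
    for ax_, ay_ in anchors:
        for dx, dy in offsets:
            rect = (ax_ + dx, ay_ + dy, size[0], size[1])
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
        else:  # no overlap-free candidate: fall back to the default offset
            placed.append((ax_ + offsets[0][0], ay_ + offsets[0][1], *size))
    return placed

labels = place_labels([(0, 0), (1, 1)])
```

In an AR setting, such a step would run per frame in screen space after projecting the 3D anchor points, trading placement stability against overlap avoidance.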

Paper types
In this section, we group and analyse papers based on their paper type (see Table 3). As shown in Figure 13, papers of the type 'Evaluation' dominate the considered spectrum of papers and are relatively evenly distributed over time, whereas most papers of the type 'Technique' were presented in recent years. In the following, we take a close look at each class of paper type and summarize our findings.

Technique
In the analysed corpus, we identified only a small number of technique papers. To be classified as a technique paper in the scope of In their work, the authors investigate novel interaction techniques tailored to the hybrid environment. In their framework (ART), it is possible to interact with the PCPs via gestures and touch interactions on the table. Also taking distance from pure XR environments, Cordeil et al. [CBC*20] use tangible 'embodied axes' as a controller to interact with abstract 3D visualizations. Qualitative expert feedback, and quantitative results of a controlled user study, indicate that their introduced interaction modality increases the accuracy in selection tasks, for instance, in scatterplots.

Evaluation
Evaluation publications considered in this survey can be divided into three high-level categories: (i) papers presenting the adoption of an existing visualization technique in immersive environments and evaluating its applicability for IA, (ii) papers that deal with the evaluation of fundamental human factors in visualization applications and (iii) papers comparing the conventional screen medium with novel MR/AR/VR media for observing abstract visualizations.

Human factors -The second category of evaluation papers is concerned with the evaluation of fundamental human factors inherent to immersive environments, with the focus being on assessing the impact of immersion on the observation of abstract visualizations. Early on, Ware et al. [WF94] compared the performance of users when inspecting 3D networks in 2D, in stereo 3D and in VR. In their follow-up work [WF96], the authors elaborated their evaluation and conducted an exhaustive user study, comparing 2D with 3D media for inspecting 3D networks and investigating the impact of depth cues on data understanding in 3D network visualizations.

Assessing the suitability of AR/VR -
Focusing on visual perception, Krekhov et al. [KCWK19] made use of stereoscopic vision in VR for highlighting. In their proposed 'deadeye' technique, highlighting is achieved by displaying highlighted objects to one eye only, making their appearance more dominant to the observer. Similarly, Whitlock et al. [WSS20] compared the time and accuracy of information conveyed over five different visual channels when observed on a conventional screen, in AR, or in VR.

Comparing media -The third category of evaluation papers includes evaluations that compare the conventional screen medium with immersive VR/AR media for inspecting abstract visualizations. The visualization most often evaluated in this sense is the scatterplot. For example, many works compare conventional monitor screens to immersive environments (e.g. CAVE [ACCN99, EML13], CAVE + HMD [RBLN04], HMD [DDC*15, WRFN17]). Thereby, measuring user performance in typical scatterplot analysis tasks is a popular choice, such as distance estimation, cluster segregation and outlier detection.
Besides scatterplots, graph visualizations are also quite popular for comparing different media. For instance, Belcher et al. [BBHS03] compared the screen medium with an AR HMD for path tracing performance on graph visualizations. The focus was on evaluating the benefit of stereo vision for path tracing in 3D graphs, which were displayed as 2D projections on a screen, in 3D on a screen, and in 3D in AR. Their results indicate that 3D outperforms 2D in terms of performance, usability, and understandability of the graph structure. However, they could not identify any benefits for using AR, as participants performed equally well in the screen 3D condition. Similarly, Kwon et al. [KMLM16] evaluated user performance for graph visualizations observed with different media (screen and VR HMD), but focused on different visualization variants, graph sizes, and task types. They found that participants were faster, used fewer interactions, and gave more correct answers for large graphs when in VR.
Of course, inter-media comparisons are not restricted to scatterplots and graphs, and there are various comparisons that make use of different visualizations. For instance, Wagner et al. [WSN19] evaluated the implementation of an STC in a virtual environment (HMD) and compared it with a monitor screen setup. Their qualitative evaluation included human factors such as usability, required learning curve, mental workload, and simulator sickness. Even rarer techniques were used by Schroeder et al. [SAHC20], who deployed bubble charts and bee-swarm plots to investigate differences in user perception between AR HMDs and monitor screens.

System
We identified several system papers that present platforms for the development of abstract visualizations in immersive environments. Most of them present their platforms within a specific visualization demonstration or use case but describe the extensibility and broad applicability of their system.

Research prototypes for CAVEs -Sawant et al. [SSL*00] provide an overview of a whole collection of visualization systems for CAVE VR environments and propose the 'Tele-Immersive Data Explorer', a system with a distributed architecture for collaborative, interactive visualizations that combines interactive desks with a CAVE VRE. Similarly, Nagel et al. [NGBV08] presented a system for creating dynamic scatterplot visualizations with sound cues that offers a list of audio-visual tools. Their system builds on the modular system architecture of a previously developed approach [NGM01] with a similar scope. Wijayasekara et al. [WLM11] proposed 'CAVE-SOM', a framework that allows the visualization of SOMs as 3D cubes in CAVE environments.

Model
In the considered corpus of papers, we identified only one paper as a model paper. Billow et al. [BC17] reflected on how a system can be evaluated in the domain of IA and presented a heuristic for evaluating IA systems. Their heuristic comprises ten points for assessing IA systems.

Design study/application
The second most common paper type in our corpus comprises papers with applications of known techniques and approaches that have not previously been used in immersive environments or in different constellations. For instance, Azzag et al. [APGV06] demonstrated the usage of VR for the interactive exploration of multimedia databases. Similarly, for flow visualizations, different applications investigated different directions. For example, they deal with the display and interactive inspection of fluid dynamics [VRV97], vector fields [KTS00] or 3D trajectories [HRD*19, HBV20]. Another example is presented by Reipschläger et al. [RFD20], who demonstrate the creativity in application papers that goes far beyond a simple adaptation of existing screen-based techniques to immersive environments. They combined powerwall displays and AR headsets: 2D visualizations presented on the large powerwall display are extended and connected to each other with AR visuals.

Technology
In the context of abstract data visualization, we identified three different categories based on technology: monitor, CAVE and HMDs (see Table 4). Figure 14 shows the distribution of papers over time with regard to the Technology category. Although this data sample is relatively small, it reflects historical developments of immersive technologies and their research applications. Around the year 2000, CAVE-related papers were very popular. Although the initial CAVE had been invented some years earlier by Cruz-Neira et al. [CNSD*92], research institutions started around the year 2000 to acquire CAVEs and use them in various research projects. Interestingly, only a few papers are from around the year 2010, indicating that VR was a rather unpopular topic at that time. It is not surprising that with their commercial success and increasing affordability, HMDs were used extensively for various research projects in the last few years. In the following, we take a close look at each type of technology and summarize our findings.

Monitor
We identified few papers that make use of monitors to create IA environments for abstract 3D visualizations that meet our requirements (Section 3). Various approaches use a 3D monitor in combination with shutter glasses [WF94, KTS00, APGV06, SWX*15]. For instance, early experiments were made with high resolution/frame rate monitors in combination with stereo glasses and head tracking in the context of a path tracing task [WF94, WF96] or the perception of visual variables, such as shape, colour and texture, representing multimedia data in scatterplots [APGV06]. Recently, Sommer et al. [SWX*15] used a commercial approach, the zSpace -a passive stereoscopic monitor supporting spatial tracking of the head and a specific pen -to explore abstract variables of a car model and a biological cell. Moreover, this work employed hybrid-dimensional visualization: a 2D monitor shows a simple 2D network representation, while the zSpace is used to explore the data semi-immersively. We also found papers that experiment with large powerwall setups. For instance, Maes et al. [MMD*18] used a powerwall setup and compared it to HMDs, while Reipschläger et al. [RFD20] investigated the interplay between powerwall and AR headsets in an evaluation study of augmentation approaches for static 2D visualizations.

CAVE
For more than two decades, CAVEs were popular devices in VR-related research with a broad range of applications. However, due to high acquisition and maintenance costs, accessibility has been limited to a relatively small circle of researchers and end-users.
Hardware diversity -A wide range of different CAVE setups exists -also in the context of abstract visualization. Some papers provide a very detailed description of the CAVE setup used. To give an example of an early, well-described CAVE configuration: Symanzik et al. [SSC*93] used a CAVE of 12x12x9 ft where stereo images were projected on three walls and the floor, and shutter glasses were used in combination with a magnetic tracker, a cyberglove and a handheld wand. This 'wand' is a handheld input device for interaction with 3D objects and menus in the immersive environment and is a popular device for analytical tasks in the CAVE [KTS00, WLM11]. Most CAVE setups use passive stereoscopic glasses, but a few, especially older approaches, use shutter glasses [SSC*93, BCC05, NGBV08]. While some works describe only certain modules of their framework in detail, others limit themselves to a very shallow description of the deployed hardware. For instance, Raja et al. [RBLN04] elaborated on the value of head tracking in a CAVE during a study and described this technological component in detail while not elaborating much on the composition of the CAVE itself, and Ferey et al. [FGHG05] described a CAVE-like setup with two rear-projected orthogonal screens without further elaboration.
Domains and tasks -Especially since the commercial success of HMDs, collaboration has become a central CAVE domain. Sawant et al. [SSL*00] presented a collaborative environment with an interplay between CAVE and interactive, non-immersive desks. Various analysis tasks and techniques were explored in the CAVE context, including network analysis [SAK*02], particle flow analysis [BCC05], data mining and statistics [NCCN99,NGBV08], scatterplots [EML13] and glyphs [Bel17]. Cordeil et al. [CDK*17] compared CAVE and HMD for collaborative network analysis.

HMD
HMDs were used in the aforementioned comparative studies in which monitor- and projector-based visualizations were compared with immersive visualizations [WRFN17, MMD*18]. HMDs became very popular and were used in the aforementioned application, technique and evaluation papers due to their affordability -especially in contrast to the previously discussed CAVEs. In this survey, we avoid a direct comparison of different hardware setups with regard to their suitability for abstract 3D visualization: the enormous landscape of different devices and the very narrow field of quantitatively assessed setups make it difficult to objectively judge and generalize the contextual quality of a certain device. For comparisons of hardware specifications of state-of-the-art AR/VR HMDs, we refer to up-to-date online resources (e.g. [ben21, Roa21, Wik]). The following paragraphs break down which HMDs were predominantly deployed for the abstract 3D visualizations discussed in this paper.
Early HMDs -HMDs were already used in early works. For example, Teylingen et al. [VRV97] developed the Virtual Data Visualizer system using an SGI Indigo Elan, the Crimson Reality Engine and a standard HMD setup. Slay et al. [SPVT01] used the DSTO InVision system to evaluate different interaction modes in AR. Belcher et al. [BBHS03] used a SONY Glasstron LDI-100B HMD in combination with an ELMO mini camera and a cardboard disc with tracking markers to visualize manipulable virtual elements.
VR-HMDs -Nowadays, a number of HMDs for consumer and/or industry usage are on the market. Checa et al. [CB20] evaluated the usage of HMDs in the context of immersive serious VR games and concluded that the HTC VIVE and Oculus Rift seem to be the most popular VR-HMDs. This view also seems to be supported by our selection of papers: in the work discussed in this paper, the HTC VIVE was used for PCPs [ In addition, various frameworks support the use of both AR and VR. For instance, the paper on the DXR framework demonstrates its applicability with both the Hololens and the ACER VR headset [SLC*19].

Environment type
In the following, we slightly shift the focus from the technology used to the environment created by it: we clustered the papers along the virtuality continuum [MTUK95] and distinguished between VR and AR applications (see Figure 15). Both VR and AR technologies are applied in the context of IA. As Table 5 indicates, VR has been the dominant environment throughout the considered period, but AR has seen increased attention in the past few years. Since the previous section dealt with the technology used to create immersive experiences, we do not discuss all works in depth again but only reflect on the two environment types at a higher level.

Data types
Using this last dimension, each paper was analysed with regard to the data considered in the respective visualization approach and grouped into one or more classes of data type (see Table 6). Most of the papers are concerned with the visualization of tabular data, that is, independent data items with multiple dimensions listed in a table. In addition, we found several papers on the visualization of network, field and geometry data. In the following, we take a close look at each type of data and summarize our findings.

Tables
Table data, as classified by Munzner [Mun14], comprise multidimensional data where a data point is composed of a set of attributes. This type of data is the most frequently deployed in the considered corpus (see Figure 16). In most cases, the papers do not specify the concrete meaning of the data used, but rather focus on the description of its properties and refer only to three-dimensional [FVP*18] or higher-dimensional data [NGM01, DDC*15, CCD*17, BHM*18, SLC*19, KCWK19]. In some works, it is specified more precisely that the data are statistical data [SSC*93, ACCN99], features from multimedia data [APGV06], or higher-dimensional data that has been reduced to three dimensions [EML13, WRFN17]. We also found two papers that explicitly stated that the underlying data were artificially created [RBLN04, KWO*20]. Some papers describe table data with georeference [SSL*00, BGB*18]. These data also consist of data points with multiple attributes, but one of the attributes is a geocoordinate, which can be used to position the data on a geo-map. We found six works that make use of multivariate datasets in which each data point is defined for a series of time steps, that is, for each tuple of data item and time, there are different attributes that compose a data entry in the table.
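The dimension reduction mentioned above (higher-dimensional table data projected to three dimensions [EML13, WRFN17]) can be illustrated with a minimal PCA sketch. The cited papers do not publish their reduction code, so the function and variable names below are hypothetical:

```python
import numpy as np

def project_to_3d(table: np.ndarray) -> np.ndarray:
    """Reduce an n x d table of data points to n x 3 via PCA,
    yielding positions usable in an immersive 3D scatterplot."""
    centered = table - table.mean(axis=0)
    # Principal axes come from the SVD of the centered data matrix
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # Coordinates along the top three principal components
    return centered @ vt[:3].T

# Example: 100 synthetic 8-dimensional data points (artificial data,
# in the spirit of [RBLN04, KWO*20])
rng = np.random.default_rng(0)
points = project_to_3d(rng.normal(size=(100, 8)))
```

Any other projection (e.g. MDS or t-SNE) could take the place of the PCA step; the surveyed corpus does not prescribe a particular method.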

Networks
Network data include data items that are interconnected. The relation between data entries is established with (weighted) references between data points. In several papers, the authors resort to artificially created network data to conduct controlled user studies with data sets with restricted properties [

Fields
[MRS*18] used time-variate surface information and explored flows in the data. We found one more paper that specifically focuses on flow data of particles [BCC05]. Wijayasekara et al. [WLM11] visualized the network structure of neurons in SOMs. The underlying data can be classified as field data since each value is associated with a certain location on a 3D grid.

Geometry
Geometry data refer to datasets with data entries containing information about their spatial position. As we limited the scope of this survey to abstract 3D visualizations, only papers with abstract visualization elements remained in the final paper corpus. We found papers working with geometry data from the real world, such as plant models [SWX*15]. Others focus on microscopically small volume structures, such as redox-modified cysteines [MMD*18] or fly embryos [GPG*19]. In addition to volume geometry data, several papers also use data that establish spatial positioning by means of stored geolocations, such as earth surface information [HHC18, MRS*18].

Reflection
Data types such as fields, real-world geometry and data with georeferences lend themselves more naturally to a 3D visualization, and as a consequence, they might also allow creating simple yet intuitive interactions for navigation. Where 3D coordinates do not necessarily have a natural interpretation, such as for node-link network diagrams and some dimension-reducing projections of table data, more effort is required to conceive intuitive navigation. One associated challenge is to guide the user in the choice of insight-creating viewing perspectives.
While there is an interest in using immersive technologies for the analysis of networks, the research is far from having explored a large portion of the corresponding design space. The freedom in the selection of the visualization idiom, encoding, interaction and use of space is challenging, as the efficiency and effectiveness of different combinations are not yet well investigated and evaluated in immersive environments. In particular, for the large data sets from current applications, a big challenge is to create scalable approaches, for example, by employing adaptive multi-level representations and abstractions.

Implications for Data Visualization
In this section, findings, lessons learned and guidelines for the application of immersive environments in analysis tasks on abstract 3D visualizations are synthesized and summarized.

3D structures & depth perception
Data that are visualized in 3D space (not necessarily spatial data) can profit from IA, for instance, through the analyst's improved depth perception. The degree to which improved depth perception benefits a certain analysis procedure depends mainly on the analysis task. For instance, results of quantitative user experiments revealed that distances between data points can be perceived more accurately in stereoscopic environments than on monoscopic 2D displays [EML13, WRFN17]. This fundamental finding is reflected in follow-up studies with more complex tasks like cluster identification [KWO*20, WRFN17, NCCN99] or outlier detection [WRFN17]. Hence, if distance estimation between two points in the 3D visualization is relevant for a certain analysis task, the deployment of immersive environments to observe the visualization may offer an advantage.
Frequently, the increased performance of participants working with stereoscopic settings is ascribed to improved depth perception. Researchers in various domains support the hypothesis that the improved depth perception inherent to stereoscopic displays increases task performance in various spatial tasks [GB06], like measuring the position and distance of data objects or path tracing in 3D graphs. Whitlock et al. [WSS20] even consider that the improvements in depth perception inherent to stereoscopic viewing might alleviate the stigma of 3D visualizations. Wither et al. [WH05] present techniques to further boost depth perception with spatial cues. However, there are also critical voices regarding the addition of further depth cues to the visualization, especially for large data sizes with complex structures [LBS16]. Another explanation for the improved performance of users in immersive environments could be the increased level of immersion of analysts in the data space. Arns et al. [ACCN99] came to the conclusion that the reason for the better performance of participants in the VR environment is the 'true' three-dimensionality caused by the immersion of the user. This effect can also lead to a reduced learning curve in understanding more complex data structures [BCC05], as shown for path tracing experiments in network graphs [WF96, BBHS03, KMLM16].
However, there is also research reporting different results. One drawback of AR was identified by Whitlock et al. [WSS20]. In their study, participants had difficulties with decoding colours from visualizations due to the fact that virtual elements overlapped with the real-world environment. Therefore, they advise deploying colour with care as a visual variable in AR environments.
In summary, research mainly reports positively about the use of 3D stereoscopic visualizations. However, designers have to be careful when working with low-resolution devices, large amounts of data or additional depth cues since these can have a negative effect on the analysis result.

Navigation & interaction
IA opens new ways and possibilities for the design of interaction and navigation modalities. For instance, VR environments enable more intuitive and natural interactions (e.g. movement by walking, selection by grabbing). However, this great freedom of choice, in combination with the absence of guidelines and reference work for best practices, also leads to a high degree of uncertainty and arbitrariness when designing user interaction concepts for immersive applications. Researchers report steep learning curves and poor performance of users with unfamiliar, direct interaction approaches.
IA allows users to interact with 3D objects in 3D space directly and is, therefore, more engaging. Symanzik et al. presented scatterplots of statistical data in a CAVE environment and claimed that a visualization encompassing the user is 'inviting interactions with the data' [SSC*93]. The more 'natural' and 'intuitive' interaction modalities associated with immersive environments are often cited as the reason for improved accuracy [ACCN99, SPVT01, BBHS03]. However, results regarding task completion times differ: while some showed that task performance increased with the new interaction techniques, others found the opposite effect of higher task completion times with immersive interactions (e.g. [ACCN99]).
Besides interactions for manipulating visualizations, various sources report improved navigation capabilities in immersive environments. For instance, Kwon et al. [KMLM16] found that navigation in 3D graphs in VR was more manageable than navigation in 2D graphs in screen-based environments. Immersive environments in which head movements control the perceived view on a visualization allow intuitive control of the viewport and improve navigation in 3D space. From a user study comparing the performance of users in various tasks on scatterplots in VR and in screen-based environments, Raja et al. [RBLN04] concluded that 'head tracking showed a strong trend in favour of its use'. Task completion times, users' disorientation and usefulness ratings, and personal observations led to this conclusion about the usefulness of head tracking. Similarly, Hurter et al. [HRD*19] found a general benefit in the intuitive control of a user's viewport induced by head movements. However, without some orientation support, people in VR environments might suffer from motion sickness or disorientation [PPM20].
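Head-tracked viewport control, as discussed above, essentially means rebuilding the view transform from the tracked head pose every frame instead of from mouse or controller input. The following is a minimal sketch of that idea, not code from any cited system; all names are hypothetical:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed 4x4 view matrix from a tracked head pose."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)                      # forward direction
    s = np.cross(f, np.asarray(up, float))
    s /= np.linalg.norm(s)                      # right direction
    u = np.cross(s, f)                          # true up direction
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f     # rotation rows
    m[:3, 3] = -m[:3, :3] @ np.asarray(eye, float)  # translation
    return m

# Each frame, the tracker supplies a new head position; the viewport
# follows the head without any explicit navigation input from the user.
view = look_at(eye=(0.0, 1.7, 2.0), target=(0.0, 1.0, 0.0))
```

In a real HMD or CAVE pipeline, the tracker also delivers head orientation and the per-eye offsets for stereo rendering; this sketch only captures the positional part of viewport control.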
Various sources report on difficulties inherent to immersive interaction modalities. While Wagner et al. [WSN19] found many aspects in favour of using immersive environments for the interactive analysis of STCs, such as higher usability scores, higher user preference and lower workload, users performed slightly worse when immersed. The authors attributed this finding to users' unfamiliarity with VR and the resulting interactions. Arns et al. [ACCN99] conducted a user study comparing the interaction capabilities of participants in a VR environment and in a screen-based environment. Even though they found that participants needed much more time to complete the given cluster selection task when immersed, they also found significant differences when taking the users' experience with VR into account. Users who were more familiar with VR were much faster compared to novice users. Hence, they concluded that interaction difficulties and the associated decrease in efficiency could be due to a steep learning curve and lack of familiarity with novel VR environments. Similarly, the line of argumentation of other works is that unfamiliarity is a major obstacle that makes usability comparisons of novel immersive and familiar screen-based environments difficult [NGM01].
In summary, research reports positively about the integration of 3D interaction and navigation when analysing data in VR or AR settings. However, there seems to be a steep learning curve in 3D navigation for users, which negatively affects completion time. Establishing some common grounds or guidelines for 3D user interaction and navigation might mitigate this negative effect.

Hardware
Immersive environments can be created with different media. The choice of medium can have an impact on the effectiveness and efficiency of visualizations perceived within the resulting AR or VR environment. For example, current AR HMDs, such as the Microsoft Hololens, have a very limited field of view, which affects the impression of immersion [YDJ*19]. This can be circumvented by creating the AR environment with see-through VR [SPVT01], in which cameras capture and manipulate the real environment and display it in a VR HMD.
AR and VR environments have benefits and drawbacks compared to each other. AR environments provide better contextual awareness and reduce the likelihood of simulator sickness [SPVT01], while VR environments maximize immersion and enable remote collaboration in a completely shared environment [DDC*15]. In addition to differences in perception and usability, the hardware also differs in manageability and cost: while a CAVE setup is bulky and expensive [SSC*93], head-mounted solutions are much cheaper and more common [DDC*15].
Moreover, general technical limitations of state-of-the-art AR and VR technologies must be taken into account. For example, when discussing the poor performance and high task completion times of users when comparing their immersive environment with a conventional screen-based setup, Belcher et al. [BBHS03] refer to hampering properties of their deployed device, such as low resolution, limited field of view and colour and contrast characteristics.
With regard to VR hardware, we have seen a transition from CAVEs to HMD-based VREs. While CAVE setups were very popular before the mid-2010s, the technology was displaced by more mobile and cheaper HMD solutions. The reason could be the broad range of consumer-ready VR HMDs offered by various manufacturers. With this trend of VR being used by a wider range of people, we can expect IA to become more broadly applicable and more easily accessible in the future.
In summary, there is a trend from CAVE environments to more flexible, affordable and mobile HMD devices. This transition will pave the way for an increase in IA applications. However, designers have to carefully consider the application domain and weigh the benefits of 3D against drawbacks like lower resolution or worse colour characteristics in comparison to typical 2D screen setups.

Guidelines & common practice
Especially in earlier years of IA, researchers reported difficulties in designing user interfaces and visualizations for immersive environments. The visualization space in immersive environments is large and, in contrast to conventional screen-based environments, not restricted to a certain area (i.e. the screen). This complicates the design of visualization frameworks. For instance, Symanzik et al. [SSC*93] discussed where a geo-map visualization could be optimally placed in the virtual environment. The design of user interfaces and menus in immersive environments is similarly difficult, and the adoption of conventional screen-optimized menus is not always feasible [VRV97]. Similarly, several sources report difficulties in designing interactions for their visualizations due to the absence of guidelines in the field [NCCN99, RBLN04]. According to Whitlock et al. [WSS20], this also holds true for most basic research: the authors state that we still lack empirical grounding for how to best visualize data in immersive environments. In their work, they try to counter the issue with initial studies on visual variables, comparing the effectiveness and expressiveness of variables like size, colour, orientation and depth in scatterplot visualizations. Comparing AR, VR and screen, their results indicate differences between all three media. Even though this gives us a first glimpse of medium-specific differences in the effectiveness of visual variables, exhaustive guidelines for visual variables, gestalt laws or pre-attentive perception, as they are available for traditional monitor screens, are still lacking.
In summary, designing VR or AR applications for data analysis is still a challenging task. Due to the vast design space of immersive environments and the lack of empirical research, guidelines are still rare. More research is needed to establish a common basis for future designers to rely on.

Collaboration
The use of immersive environments can offer several advantages for collaborative analysis tasks on abstract 3D visualizations. For instance, Butscher et al. [BHM*18] discussed the potential of deploying AR for the collaborative analysis of multidimensional data visualized as PCPs. In their approach, the analysts are in the same physical environment and share the same digital content, allowing natural communication and coordination between collaborators. Similarly, Cordeil et al. [CDK*17] investigated the performance of users in co-located collaborative tasks on graph visualizations and compared a CAVE setup to an HMD VR environment. Their results suggest that both compared VR platforms perform equally well in most aspects for the tasks investigated. This and the fact that CAVE VREs are much more expensive, require more maintenance and are not available to the general public speak in favour of using HMD VR devices for collaborative tasks.
In addition, IA enables the natural collaboration of remotely located collaborators. Different approaches for remote collaboration on abstract data visualizations were presented. Leading arguments for the application of VR are that sharing the same visual space leads to better collaboration in visual data exploration tasks [DDC*15], improves communication [HHC18] and makes collaboration more convenient due to direct interaction capabilities in shared visualizations [SSL*00]. Somewhat rarer, but nevertheless represented, is research on co-located collaboration in VR environments. For instance, Lee et al. [LHC*20] compared different designs for collaborative co-located VR environments and argue that the highly flexible design of the shared workspace is an advantage of VR.
In summary, immersive environments support collaboration through more natural interaction between users and a shared visual space for data exploration. These findings are largely independent of the underlying hardware, which favours HMDs, since they are less expensive and more accessible to the general public.

Discussion and Open Research Areas
There are many papers describing new techniques for immersive visualizations, evaluations of existing non-immersive approaches deployed in AR or VR, comparisons between different immersive and non-immersive media, immersive visualization systems and applications of immersive environments for abstract 3D visualizations. However, there are hardly any taxonomy or model papers that focus on the application of abstract 3D visualizations in immersive environments. More and more research deals with the assessment of differences between media on abstract 3D visualizations and the identification of potentially beneficial properties of immersive environments in restricted settings. However, there are few generalizable guidelines and recommendations as to when and where the use of immersive environments can bring benefits. Initial observations in various studies suggested that even established visualization paradigms could be overturned in immersive environments. For instance, gestalt laws or the order of visual variables according to their effectiveness may be perceived differently in such environments, which could change the way visualizations should be designed for IA applications. Therefore, more fundamental research is required to address general issues of immersive visualization and to provide general guidelines for the application of visualizations in AR/VR. Over the last decades, a large number of different immersive technologies have been evaluated as media for displaying abstract 3D visualizations and compared to conventional, non-immersive analysis environments. While research uniformly points to advantages of immersive environments, such as direct manipulation of visualization elements bypassing indirect input modalities (e.g. mouse/keyboard interactions), interaction difficulties are often cited as a hindering factor for efficient analysis procedures. High degrees of freedom and interaction constraints (e.g.
text input, coding) complicate various user interactions. The constant progress in technology leads to the continuous development of new interaction modalities for immersive environments, which have to be evaluated individually. Moreover, technological advances in immersive devices could also affect the effectiveness of certain visualizations and overturn evaluation results of previous studies with outdated technologies. As previous research suggests, the level of perceived immersion is decisively influenced by factors like multisensory stimulation, display resolution and the fidelity/photo-realism of the virtual environment. Increased levels of immersion can, in turn, influence visual analysis tasks. For instance, if the user is allowed to touch, feel or even smell data points with haptic VR gloves or HMD extensions, the illusion of actually dealing with real objects is enhanced.
Another popular explanation for poorly functioning VR/AR scenarios is that immersive environments and the accompanying input modalities are highly unfamiliar to most participants. Therefore, VR/AR environments might already increase their effectiveness if users are better trained and more familiar with the new environments. However, this could have a decreasing effect on other dimensions such as excitement and engagement, which may be heightened in new AR/VR environments precisely because of low familiarity. Novel interaction paradigms invite further assessment. For instance, virtual teleportation is a popular technique to compensate for the limited physical space in VREs and needs to be carefully evaluated against physical walking or other alternatives such as VR treadmills or redirected walking.
In short, more research is needed to assess the actual impact of technology differences (resolution, fidelity, multisensory stimulation) and user familiarization on user performance in immersive visual analysis tasks. Further, studies on outdated devices and technologies may need to be repeated on newer devices that lead to higher levels of immersion. Of course, the results of previous studies can be used as a starting point for formulating new hypotheses.
The deployment of immersive environments for data analysis is largely independent of data types; the usefulness and applicability of immersive visualizations depend on the target analysis task and the chosen visualization type. We observed that none of the investigated papers contained abstract 3D visualizations for Text data. We assume that the main obstacles are missing or inadequate input modalities for text in VR/AR and technical constraints of immersive devices, such as low resolution, which make reading text in the respective immersive environments difficult. Nevertheless, the great potential of plain text analysis in immersive environments should be considered carefully in future research.
While most IA papers focus on fundamental research on common visualizations (e.g. scatterplots, node-link graphs, geo-map visualizations), only a few make use of or present 3D adaptions of rare visualization techniques such as dense pixel visualizations, Sankey diagrams, chord diagrams, arc diagrams, cartograms, stream charts, dendrograms or complex glyph visualizations. While it is important to evaluate the basic properties of immersion and their impact on visualization efficiency and effectiveness in combination with new interaction and visualization design conditions, the assessment of more complex niche visualizations deployed in immersive environments would be highly interesting.
There is no general and uniform framework, library or programming language that can be used to quickly generate visualizations for immersive applications. Certainly, there are different frameworks that allow quick prototyping of a certain set of visualizations, but much effort is needed to create the above-mentioned types of visualizations. In addition, existing IA authoring toolkit papers frequently point out the difficulty of completing all steps of the reference model of visualization [CMS99] in order to create a visualization from scratch in immersive environments, due to restricted code/text interaction capabilities. Therefore, of the three main steps of applying data transformations, visual mappings and view transformations, support is mainly restricted to the latter two, and hardly any framework supports data transformation procedures in immersive environments. In this regard, establishing more standards for data handling and transmission might be a fruitful research direction and a help for future designers.
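The reference model of visualization [CMS99] mentioned above chains three stages. A schematic sketch (all names illustrative, not from any specific IA toolkit) makes clear why restricting support to the latter two stages forces the data transformation to happen outside the immersive environment:

```python
# Illustrative sketch of the visualization reference model [CMS99]:
# raw data -> data transformation -> visual mapping -> view transformation.
# IA toolkits typically expose only the latter two stages in-headset.

def data_transform(raw):
    """Filter/aggregate raw records into analysis-ready tuples
    (the stage that usually still requires code/text interaction)."""
    return [(r["x"], r["y"], r["value"]) for r in raw if r["value"] > 0]

def visual_mapping(tuples):
    """Map data attributes to visual variables (position, size)."""
    return [{"pos": (x, y, v), "size": 0.1 + 0.01 * v} for x, y, v in tuples]

def view_transform(marks, scale=1.0):
    """Place the visual marks in the (immersive) viewing space."""
    return [{**m, "pos": tuple(scale * c for c in m["pos"])} for m in marks]

raw = [{"x": 0, "y": 1, "value": 5}, {"x": 1, "y": 2, "value": -1}]
scene = view_transform(visual_mapping(data_transform(raw)), scale=2.0)
```

In current toolkits, the first function would typically run in a desktop preprocessing script, while only the mapping and view stages are adjustable from within the AR/VR environment.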
Several 3D visualizations have proven to work poorly on 2D monitor screens. However, influencing factors induced by immersive environments could balance out or even eliminate some of the main disadvantages of such visualizations. Therefore, it might be reasonable to consider re-evaluating visualizations with a bad reputation, such as 3D bar charts, in immersive environments. In some cases, the optimal approach might be a combination of 2D and 3D visualizations, allowing smooth transitions or links between them and taking advantage of both types.
Future research should not only focus on fundamental research but also explore the application of immersive technology for more advanced types of visualizations or the differences in collaboration when exposed to small restricted rooms in comparison to open space environments. Furthermore, there is a need for uniform development and authoring environments that facilitate the process of creating new types of device-independent immersive visualizations and make immersive visualization accessible to non-experts without advanced programming skills.
In previous research, abstract 3D visualization was mainly used in exploratory and confirmatory analysis procedures. However, we see great potential of immersive visualizations for information presentation scenarios where the only goal is to convey information to an observer [RBEMV18]. Previous research has shown that immersive environments can enhance memorability, increase engagement and even intensify emotions. Such factors could help to make information more accessible, understandable and lasting. These properties could also prove helpful with regard to gamification and gameful learning. The application of immersive environments is also studied in various other research domains, such as educational research (teaching scenarios with pupils), psychology (phobia treatment) and entertainment (game development). Future research in the field of IA should tie in with research in these domains, apply cross-domain knowledge transfer and use findings and insights from other domains as a basis for new hypotheses.
Basic research in IA has revealed certain potentially beneficial properties of immersive environments, such as improved spatial memory, direct object manipulation, natural navigation and improved depth perception through stereoscopic vision. However, most results are very task- and condition-specific and cannot be generalized. Depending on the task and condition, the visualization expert must assess whether properties that have proven useful in other cases apply to the current problem and must modify the visualization or environment to take advantage of potential benefits. Future research should try to establish general guidelines for the design of immersive visualizations to support the optimization of immersive analyses.
Although IA research has pointed out various benefits of deploying immersive environments for the analysis of abstract data, AR/VR devices are not yet established media that are widely used in industry for the visualization of abstract 3D data. The main obstacles could be (a) the lack of established end-user visualization environments (such as Tableau and others for non-immersive visualizations) for creating and exploring visualizations in VR, (b) the high effort required to present immersive visualizations to a large audience and (c) usability constraints, such as uncomfortable and tedious head-mounted displays or bulky and expensive setups.

Conclusion
We conducted a survey on publications of IA approaches for abstract data visualization. The publication selection was based on a keyword search and manual inspection. The base set was expanded by scanning the references of matching papers, resulting in a corpus of 58 papers covering a period of 27 years. A key observation from our survey is a surge in the number of publications in recent years. While this is not clear evidence that immersive environments are now accepted for abstract data analysis after years of scepticism, it does show that the design space and potential are being explored in current research projects. Furthermore, we can see that a variety of aspects are being investigated regarding data type, visualization technique and paper category. However, while CAVEs played a central role in the early years of VR-related research, research on environments based on VR HMDs clearly dominates today. This may be due to the relatively inexpensive devices, the easy, almost plug-and-play setup of such an environment, and the broad support by available content-creation software. In addition, the controllers of current HMDs allow for quite intuitive interaction that goes beyond the standard desktop setup.
Despite the diversity of research topics covered in the publications investigated, there seems to be no structured exploration of the design space. As the results from studies are often quite specific to the conditions and tasks used, better characterization and specification would help to enable replication, but also a more structured approach to evaluating the potential of IA for abstract data visualization. Similarly, there is no common code base, such as a toolkit or framework, that supports fast prototyping of general solutions, and much effort is put into developing the necessary basics for each project. In the course of our survey, we have discussed a number of toolkits that already implement a wide selection of the visualizations discussed here (e.g. [SLC*19, CCB*19, NSW*20]). Although such prototypes exist, they are not widely used in the community, and developers tend to start their projects from scratch. Thus, we see the potential for a community effort to create supporting toolkits that can be used for prototyping. Further initiatives are needed to develop common standards as a basis for general IA toolkits optimized for visualizing abstract data.
It is interesting to note that although the number of research projects using immersive technologies is increasing dramatically, the amount of work on abstract data visualization in this domain is comparatively small, as reflected in the modest number of papers matching our search criteria. Therefore, this area holds much potential for new findings.