Publication: Evaluating Mixed and Augmented Reality : A Systematic Literature Review (2009-2019)
Date
2020
Authors
Merino, Leonel; Schwarzl, Magdalena; Kraus, Matthias; Sedlmair, Michael; Schmalstieg, Dieter; Weiskopf, Daniel
ISBN
978-1-72818-508-8
Publisher
IEEE, Piscataway, NJ
DOI (citable link)
10.1109/ISMAR50242.2020.00069
Publication type
Contribution to conference proceedings
Published in
2020 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2020 : Proceedings. Piscataway, NJ: IEEE, 2020, pp. 438-451
Abstract
We present a systematic review of 458 papers that report on evaluations in mixed and augmented reality (MR/AR) published in ISMAR, CHI, IEEE VR, and UIST over a span of 11 years (2009-2019). Our goal is to provide guidance for future evaluations of MR/AR approaches. To this end, we characterize publications by paper type (e.g., technique, design study), research topic (e.g., tracking, rendering), evaluation scenario (e.g., algorithm performance, user performance), cognitive aspects (e.g., perception, emotion), and the context in which evaluations were conducted (e.g., lab vs. in-the-wild). We found a strong coupling of types, topics, and scenarios. We observe two groups: (a) technology-centric performance evaluations of algorithms that focus on improving tracking, displays, reconstruction, rendering, and calibration, and (b) human-centric studies that analyze implications of applications and design, human factors on perception, usability, decision making, emotion, and attention. Amongst the 458 papers, we identified 248 user studies that involved 5,761 participants in total, of whom only 1,619 were identified as female. We identified 43 data collection methods used to analyze 10 cognitive aspects. We found nine objective methods, and eight methods that support qualitative analysis. A majority (216/248) of user studies are conducted in a laboratory setting. Often (138/248), such studies involve participants in a static way. However, we also found a fair number (30/248) of in-the-wild studies that involve participants in a mobile fashion. We consider this paper to be relevant to academia and industry alike in presenting the state-of-the-art and guiding the steps to designing, conducting, and analyzing results of evaluations in MR/AR.
Conference
International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, Nov. 9-13, 2020
Cite
ISO 690
MERINO, Leonel, Magdalena SCHWARZL, Matthias KRAUS, Michael SEDLMAIR, Dieter SCHMALSTIEG, Daniel WEISKOPF, 2020. Evaluating Mixed and Augmented Reality : A Systematic Literature Review (2009-2019). International Symposium on Mixed and Augmented Reality (ISMAR). Porto de Galinhas, Brazil, 9. Nov. 2020 - 13. Nov. 2020. In: 2020 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2020 : Proceedings. Piscataway, NJ: IEEE, 2020, pp. 438-451. ISBN 978-1-72818-508-8. Available under: doi: 10.1109/ISMAR50242.2020.00069
BibTeX
@inproceedings{Merino2020Evalu-55967,
  author    = {Merino, Leonel and Schwarzl, Magdalena and Kraus, Matthias and Sedlmair, Michael and Schmalstieg, Dieter and Weiskopf, Daniel},
  year      = {2020},
  title     = {Evaluating Mixed and Augmented Reality : A Systematic Literature Review (2009-2019)},
  booktitle = {2020 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2020 : Proceedings},
  pages     = {438--451},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  isbn      = {978-1-72818-508-8},
  doi       = {10.1109/ISMAR50242.2020.00069}
}
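For readers who want to reuse this entry programmatically rather than only in a LaTeX bibliography, the following is a minimal, purely illustrative Python sketch. It assumes the third-party bibtexparser package (v1.x), which is not part of this record, and simply reads the entry shown above.

# Minimal illustrative sketch: load the BibTeX entry above and read a few fields.
# Assumption: the third-party package bibtexparser (v1.x) is installed (pip install bibtexparser).
import bibtexparser

BIBTEX = r"""
@inproceedings{Merino2020Evalu-55967,
  author    = {Merino, Leonel and Schwarzl, Magdalena and Kraus, Matthias and Sedlmair, Michael and Schmalstieg, Dieter and Weiskopf, Daniel},
  year      = {2020},
  title     = {Evaluating Mixed and Augmented Reality : A Systematic Literature Review (2009-2019)},
  booktitle = {2020 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2020 : Proceedings},
  pages     = {438--451},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  isbn      = {978-1-72818-508-8},
  doi       = {10.1109/ISMAR50242.2020.00069}
}
"""

entry = bibtexparser.loads(BIBTEX).entries[0]  # the single entry, as a plain dict
print(entry["ID"])      # citation key: Merino2020Evalu-55967
print(entry["title"])   # paper title
print(entry["doi"])     # 10.1109/ISMAR50242.2020.00069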
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/55967">
    <dcterms:title>Evaluating Mixed and Augmented Reality : A Systematic Literature Review (2009-2019)</dcterms:title>
    <dc:creator>Merino, Leonel</dc:creator>
    <dc:creator>Schwarzl, Magdalena</dc:creator>
    <dc:creator>Kraus, Matthias</dc:creator>
    <dc:creator>Sedlmair, Michael</dc:creator>
    <dc:creator>Schmalstieg, Dieter</dc:creator>
    <dc:creator>Weiskopf, Daniel</dc:creator>
    <dc:contributor>Merino, Leonel</dc:contributor>
    <dc:contributor>Schwarzl, Magdalena</dc:contributor>
    <dc:contributor>Kraus, Matthias</dc:contributor>
    <dc:contributor>Sedlmair, Michael</dc:contributor>
    <dc:contributor>Schmalstieg, Dieter</dc:contributor>
    <dc:contributor>Weiskopf, Daniel</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:issued>2020</dcterms:issued>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-12-21T14:25:06Z</dcterms:available>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2021-12-21T14:25:06Z</dc:date>
    <dcterms:abstract xml:lang="eng">We present a systematic review of 458 papers that report on evaluations in mixed and augmented reality (MR/AR) published in ISMAR, CHI, IEEE VR, and UIST over a span of 11 years (2009-2019). Our goal is to provide guidance for future evaluations of MR/AR approaches. To this end, we characterize publications by paper type (e.g., technique, design study), research topic (e.g., tracking, rendering), evaluation scenario (e.g., algorithm performance, user performance), cognitive aspects (e.g., perception, emotion), and the context in which evaluations were conducted (e.g., lab vs. in-the-wild). We found a strong coupling of types, topics, and scenarios. We observe two groups: (a) technology-centric performance evaluations of algorithms that focus on improving tracking, displays, reconstruction, rendering, and calibration, and (b) human-centric studies that analyze implications of applications and design, human factors on perception, usability, decision making, emotion, and attention. Amongst the 458 papers, we identified 248 user studies that involved 5,761 participants in total, of whom only 1,619 were identified as female. We identified 43 data collection methods used to analyze 10 cognitive aspects. We found nine objective methods, and eight methods that support qualitative analysis. A majority (216/248) of user studies are conducted in a laboratory setting. Often (138/248), such studies involve participants in a static way. However, we also found a fair number (30/248) of in-the-wild studies that involve participants in a mobile fashion. We consider this paper to be relevant to academia and industry alike in presenting the state-of-the-art and guiding the steps to designing, conducting, and analyzing results of evaluations in MR/AR.</dcterms:abstract>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/55967"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
  </rdf:Description>
</rdf:RDF>
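The RDF/XML export can likewise be consumed with standard tooling. The snippet below is a minimal sketch, not part of the record: it assumes the third-party rdflib package and that the RDF shown above has been saved locally under the hypothetical file name record.rdf.

# Minimal illustrative sketch: parse the Dublin Core RDF/XML export of this record.
# Assumptions: the third-party rdflib package is installed, and the RDF above has been
# saved to a local file named "record.rdf" (hypothetical file name).
from rdflib import Graph
from rdflib.namespace import DC, DCTERMS

g = Graph()
g.parse("record.rdf", format="xml")

# The record is the single rdf:Description node carrying a dcterms:title.
record = next(g.subjects(DCTERMS.title, None))
print(g.value(record, DCTERMS.title))          # Evaluating Mixed and Augmented Reality ...
print(g.value(record, DCTERMS.issued))         # 2020
for creator in g.objects(record, DC.creator):  # dc:creator values (order not guaranteed in RDF)
    print(creator)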