Publication: From virtual to physical environments when judging action opportunities : are diagnostics and trainings transferable?
Research funding
Deutsche Forschungsgemeinschaft (DFG): 438470816
European Union (EU): 291784
Abstract
Properly evaluating whether our bodily capabilities and the properties of the environment allow a particular action is indispensable for pertinent decisions, so-called affordance judgments. These judgments can be impaired by older age or brain damage. Virtual Environments (VEs) may provide an efficient way to offer training. But do people make affordance judgments in VEs the same way they do in Physical Environments (PEs)? And can these decisions be trained using VEs? We investigated 24 healthy young adults' performance in judging whether they could fit their hand into a given aperture. Participants were presented with a set of opening increments and indicated their judgments by pressing a yes- or no-button. The stimuli were presented in the PE using an aperture apparatus and in the VE displayed via Oculus Rift goggles. Our results demonstrated that the level of equivalence was specific to the variable: while we found equivalence between VE and PE for the accuracy parameter, results were uncertain for perceptual sensitivity and non-equivalent for judgment tendency. After training in the VE, judgment accuracy improved significantly when tested subsequently within the VE; in the PE, improvement was detectable only on a descriptive level. Furthermore, post-training equivalence testing revealed that perceptual sensitivity performance in the VE approached the PE level. Promisingly, the VE training approach appeared applicable and efficacious within the VE. Future studies need to specify factors that enhance equivalence for detection theory variables and that facilitate transfer from VEs to PEs when judging action opportunities.
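The abstract distinguishes accuracy, perceptual sensitivity, and judgment tendency — the standard signal detection theory (SDT) measures d′ and criterion c. As a minimal illustrative sketch (not the study's actual analysis code; the function name, example counts, and the log-linear correction are assumptions), these measures can be computed from yes/no judgment counts as follows:

```python
# Illustrative SDT measures for a yes/no affordance-judgment task.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return accuracy, perceptual sensitivity (d') and judgment tendency (criterion c)."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Log-linear correction keeps z-scores finite when a rate is 0 or 1.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf                      # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)            # perceptual sensitivity
    criterion = -(z(hit_rate) + z(fa_rate)) / 2   # judgment tendency (response bias)
    accuracy = (hits + correct_rejections) / (n_signal + n_noise)
    return accuracy, d_prime, criterion
```

A positive criterion would indicate a conservative tendency (more "no" judgments), a negative one a liberal tendency; d′ near zero means the aperture sizes were not discriminated.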
Cite
ISO 690
GÖLZ, Milena S., Lisa FINKEL, Rebecca KEHLBECK, Anne HERSCHBACH, Isabel BAUER, Jean P.P. SCHEIB, Oliver DEUSSEN, Jennifer RANDERATH, 2023. From virtual to physical environments when judging action opportunities : are diagnostics and trainings transferable? In: Virtual Reality. Springer. 2023, 27(3), pp. 1697-1715. ISSN 1359-4338. eISSN 1434-9957. Available under: doi: 10.1007/s10055-023-00765-4
BibTeX
@article{Golz2023-02-15virtu-66148,
  author  = {Gölz, Milena S. and Finkel, Lisa and Kehlbeck, Rebecca and Herschbach, Anne and Bauer, Isabel and Scheib, Jean P.P. and Deussen, Oliver and Randerath, Jennifer},
  title   = {From virtual to physical environments when judging action opportunities : are diagnostics and trainings transferable?},
  journal = {Virtual Reality},
  year    = {2023},
  volume  = {27},
  number  = {3},
  pages   = {1697--1715},
  issn    = {1359-4338},
  doi     = {10.1007/s10055-023-00765-4}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/66148">
    <dcterms:title>From virtual to physical environments when judging action opportunities : are diagnostics and trainings transferable?</dcterms:title>
    <dcterms:issued>2023-02-15</dcterms:issued>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-02-21T08:43:58Z</dcterms:available>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-02-21T08:43:58Z</dc:date>
    <dc:language>eng</dc:language>
    <dc:creator>Gölz, Milena S.</dc:creator>
    <dc:creator>Finkel, Lisa</dc:creator>
    <dc:creator>Kehlbeck, Rebecca</dc:creator>
    <dc:creator>Herschbach, Anne</dc:creator>
    <dc:creator>Bauer, Isabel</dc:creator>
    <dc:creator>Scheib, Jean P.P.</dc:creator>
    <dc:creator>Deussen, Oliver</dc:creator>
    <dc:creator>Randerath, Jennifer</dc:creator>
    <dc:contributor>Gölz, Milena S.</dc:contributor>
    <dc:contributor>Finkel, Lisa</dc:contributor>
    <dc:contributor>Kehlbeck, Rebecca</dc:contributor>
    <dc:contributor>Herschbach, Anne</dc:contributor>
    <dc:contributor>Bauer, Isabel</dc:contributor>
    <dc:contributor>Scheib, Jean P.P.</dc:contributor>
    <dc:contributor>Deussen, Oliver</dc:contributor>
    <dc:contributor>Randerath, Jennifer</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/66148"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/66148/1/Goelz_2-1a8anfpvuczeh8.pdf"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/66148/1/Goelz_2-1a8anfpvuczeh8.pdf"/>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
    <dc:rights>Attribution 4.0 International</dc:rights>
  </rdf:Description>
</rdf:RDF>