Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones
Date: 2022-10-19
Authors: Babic, Teo; Reiterer, Harald; Haller, Michael
Publisher: MDPI
Electronic ISSN: 2414-4088
DOI (citable link): 10.3390/mti6100094
URI (citable link): https://kops.uni-konstanz.de/handle/123456789/58898.3
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Publication type: Journal article
Published in: Multimodal Technologies and Interaction ; 6 (2022), 10. - Article 94

Abstract
For decades, researchers have developed complex in-lab systems that track multiple body parts of the user to enable richer and more powerful 2D/3D interaction with a distant display. In this work, we introduce a novel smartphone-based tracking approach that eliminates the need for such complex tracking systems. Relying on the simultaneous use of the front and rear smartphone cameras, our solution enables rich spatial interactions with distant displays by combining touch input with hand-gesture input, body and head motion, as well as eye-gaze input. In this paper, we first present a taxonomy for classifying distant-display interactions, providing an overview of enabling technologies, input modalities, and interaction techniques, spanning from 2D to 3D interactions. We then describe our implementation, which uses only off-the-shelf smartphones. Finally, we validate our system in a user study with a variety of 2D and 3D multimodal interaction techniques, including input refinement.
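The following is a minimal sketch of the idea described in the abstract, not the authors' implementation: it assumes a desktop setup in which two OpenCV capture devices stand in for the phone's front and rear cameras, and it uses MediaPipe's face-mesh and hand models to obtain the head/eye and hand landmarks on which such multimodal distant-display input could be built. On an actual device, the same structure would map onto the platform camera APIs, provided they allow concurrent front/rear capture.

# Illustrative sketch only (assumptions: two webcams at indices 0 and 1,
# MediaPipe installed); not the published system.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
hands = mp.solutions.hands.Hands(max_num_hands=1)

front_cam = cv2.VideoCapture(0)  # stand-in for the front (user-facing) camera
rear_cam = cv2.VideoCapture(1)   # stand-in for the rear (world-facing) camera

while front_cam.isOpened() and rear_cam.isOpened():
    ok_front, front = front_cam.read()
    ok_rear, rear = rear_cam.read()
    if not (ok_front and ok_rear):
        break

    # Front stream: face landmarks for head-motion and gaze-style input.
    face = face_mesh.process(cv2.cvtColor(front, cv2.COLOR_BGR2RGB))
    # Rear stream: hand landmarks for mid-air gesture input.
    hand = hands.process(cv2.cvtColor(rear, cv2.COLOR_BGR2RGB))

    if face.multi_face_landmarks and hand.multi_hand_landmarks:
        nose = face.multi_face_landmarks[0].landmark[1]        # approx. nose tip
        index_tip = hand.multi_hand_landmarks[0].landmark[8]   # index fingertip
        # A full system would fuse these (plus on-screen touch events) into a
        # cursor or 3D pointer and stream it to the distant display.
        print(f"head ({nose.x:.2f}, {nose.y:.2f})  finger ({index_tip.x:.2f}, {index_tip.y:.2f})")

    cv2.imshow("front camera", front)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break

front_cam.release()
rear_cam.release()
cv2.destroyAllWindows()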
Cite as
ISO 690
BABIC, Teo, Harald REITERER, Michael HALLER, 2022. Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones. In: Multimodal Technologies and Interaction. MDPI. 2022, 6(10), 94. eISSN 2414-4088. Available under: doi: 10.3390/mti6100094

BibTeX
@article{Babic2022-10-19Under-58898.3,
  author  = {Babic, Teo and Reiterer, Harald and Haller, Michael},
  title   = {Understanding and Creating Spatial Interactions with Distant Displays Enabled by Unmodified Off-The-Shelf Smartphones},
  journal = {Multimodal Technologies and Interaction},
  year    = {2022},
  volume  = {6},
  number  = {10},
  doi     = {10.3390/mti6100094},
  note    = {Article Number: 94}
}