Publication:

Combining Unity with machine vision to create low latency, flexible and simple virtual realities


Files

No files are associated with this document.

Date

2025

Authors

Ogawa, Yuri
Aoukar, Raymond
Leibbrandt, Richard
Manger, Jake S.
Bagheri, Zahra M.
Turnbull, Luke
Johnston, Chris
Kaushik, Pavan
Mitchell, Jaxon
Nordström, Karin


Open Access publication
Open Access Gold

Publication type
Journal article
Publication status
Published

Published in

Methods in Ecology and Evolution. Wiley. 2025, 16(1), pp. 126-144. ISSN 2041-2096. eISSN 2041-210X. Available at: doi: 10.1111/2041-210x.14449

Abstract

  1. In recent years, virtual reality arenas have become increasingly popular for quantifying visual behaviours. By using the actions of a constrained animal to control the visual scenery, the animal perceives that it is moving through a virtual world. Importantly, as the animal is constrained in space, behavioural quantification is facilitated. Furthermore, using computer-generated visual scenery allows for identification of visual triggers of behaviour.

  2. We created a novel virtual reality arena combining machine vision with the gaming engine Unity. For tethered flight, we enhanced an existing multi-modal virtual reality arena, MultiMoVR, but tracked wing movements using DeepLabCut-live (DLC-live). For tethered walking animals, we used FicTrac to track the motion of a trackball. In both cases, real-time tracking was interfaced with Unity to control the location and rotation of the tethered animal's avatar in the virtual world. We developed a user-friendly Unity Editor interface, CAVE, to simplify experimental design and data storage without the need for coding.

  3. We show that both the DLC-live-Unity and the FicTrac-Unity configurations close the feedback loop effectively and quickly. We show that closed-loop feedback reduces behavioural artefacts exhibited by walking crabs in open-loop scenarios, and that flying Eristalis tenax hoverflies navigate towards virtual flowers in closed loop. We show examples of how the CAVE interface can enable experimental sequencing control including use of avatar proximity to virtual objects of interest.

  4. Our results show that combining Unity with machine vision tools provides an easy and flexible virtual reality environment that can be readily adjusted to new experiments and species. This can be implemented programmatically in Unity, or by using our new tool CAVE, which allows users to design new experiments without additional programming. We provide resources for replicating experiments and our interface CAVE via GitHub, together with user manuals and instruction videos, for sharing with the wider scientific community.
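Point 2 of the abstract describes the core of the setup: real-time tracking output (wing pose from DLC-live, or ball motion from FicTrac) is interfaced with the game engine to drive the location and rotation of the tethered animal's avatar. A minimal sketch of such a tracker-to-engine bridge is given below in Python. It is an illustration only, not the authors' code: the dead-reckoning step, the JSON message format, and the UDP port are assumptions; consult the authors' GitHub resources for the actual CAVE/MultiMoVR protocol.

```python
# Hypothetical sketch of the machine-vision -> game-engine bridge: one
# tracker frame (forward step and yaw change, as a FicTrac-like tracker
# might report) is integrated into a planar avatar pose and streamed to
# the engine over UDP. Message format and port are assumptions.

import json
import math
import socket


def integrate_pose(x, y, heading_rad, forward_step, turn_rad):
    """Dead-reckon the avatar's planar pose from one tracker frame.

    forward_step: distance moved along the current heading this frame.
    turn_rad: change in heading (yaw) this frame, in radians.
    """
    heading_rad = (heading_rad + turn_rad) % (2 * math.pi)
    x += forward_step * math.cos(heading_rad)
    y += forward_step * math.sin(heading_rad)
    return x, y, heading_rad


def send_pose(sock, addr, x, y, heading_rad):
    """Send the pose as a small JSON datagram for the engine to consume."""
    msg = json.dumps({"x": x, "y": y, "heading": heading_rad})
    sock.sendto(msg.encode("utf-8"), addr)


if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    x, y, heading = 0.0, 0.0, 0.0
    # One fake tracker frame: half a unit forward after a 90-degree left turn.
    x, y, heading = integrate_pose(x, y, heading, 0.5, math.pi / 2)
    send_pose(sock, ("127.0.0.1", 9000), x, y, heading)
```

On the engine side, a listener would read each datagram per frame and apply it to the avatar's transform; keeping the per-frame message this small is one way to keep the closed-loop latency low.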

Subject (DDC)
570 Life sciences, biology

Keywords

arthropod vision, closed loop, gain, motion vision, naturalistic stimuli, navigation, open loop

Cite

ISO 690
OGAWA, Yuri, Raymond AOUKAR, Richard LEIBBRANDT, Jake S. MANGER, Zahra M. BAGHERI, Luke TURNBULL, Chris JOHNSTON, Pavan KAUSHIK, Jaxon MITCHELL, Karin NORDSTRÖM, 2025. Combining Unity with machine vision to create low latency, flexible and simple virtual realities. In: Methods in Ecology and Evolution. Wiley. 2025, 16(1), pp. 126-144. ISSN 2041-2096. eISSN 2041-210X. Available at: doi: 10.1111/2041-210x.14449
BibTex
@article{Ogawa2025-01Combi-71639,
  title={Combining Unity with machine vision to create low latency, flexible and simple virtual realities},
  year={2025},
  doi={10.1111/2041-210x.14449},
  number={1},
  volume={16},
  issn={2041-2096},
  journal={Methods in Ecology and Evolution},
  pages={126--144},
  author={Ogawa, Yuri and Aoukar, Raymond and Leibbrandt, Richard and Manger, Jake S. and Bagheri, Zahra M. and Turnbull, Luke and Johnston, Chris and Kaushik, Pavan and Mitchell, Jaxon and Nordström, Karin}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/71639">
    <dc:contributor>Bagheri, Zahra M.</dc:contributor>
    <dc:contributor>Leibbrandt, Richard</dc:contributor>
    <dc:creator>Manger, Jake S.</dc:creator>
    <dc:creator>Kaushik, Pavan</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-12-11T08:55:48Z</dcterms:available>
    <dc:contributor>Mitchell, Jaxon</dc:contributor>
    <dcterms:abstract>1. In recent years, virtual reality arenas have become increasingly popular for quantifying visual behaviours. By using the actions of a constrained animal to control the visual scenery, the animal perceives that it is moving through a virtual world. Importantly, as the animal is constrained in space, behavioural quantification is facilitated. Furthermore, using computer-generated visual scenery allows for identification of visual triggers of behaviour.

2. We created a novel virtual reality arena combining machine vision with the gaming engine Unity. For tethered flight, we enhanced an existing multi-modal virtual reality arena, MultiMoVR, but tracked wing movements using DeepLabCut-live (DLC-live). For tethered walking animals, we used FicTrac to track the motion of a trackball. In both cases, real-time tracking was interfaced with Unity to control the location and rotation of the tethered animal's avatar in the virtual world. We developed a user-friendly Unity Editor interface, CAVE, to simplify experimental design and data storage without the need for coding.

3. We show that both the DLC-live-Unity and the FicTrac-Unity configurations close the feedback loop effectively and quickly. We show that closed-loop feedback reduces behavioural artefacts exhibited by walking crabs in open-loop scenarios, and that flying Eristalis tenax hoverflies navigate towards virtual flowers in closed loop. We show examples of how the CAVE interface can enable experimental sequencing control including use of avatar proximity to virtual objects of interest.

4. Our results show that combining Unity with machine vision tools provides an easy and flexible virtual reality environment that can be readily adjusted to new experiments and species. This can be implemented programmatically in Unity, or by using our new tool CAVE, which allows users to design new experiments without additional programming. We provide resources for replicating experiments and our interface CAVE via GitHub, together with user manuals and instruction videos, for sharing with the wider scientific community.</dcterms:abstract>
    <dc:creator>Ogawa, Yuri</dc:creator>
    <dc:creator>Johnston, Chris</dc:creator>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/71639"/>
    <dc:contributor>Aoukar, Raymond</dc:contributor>
    <dc:creator>Nordström, Karin</dc:creator>
    <dc:creator>Bagheri, Zahra M.</dc:creator>
    <dc:creator>Aoukar, Raymond</dc:creator>
    <dc:contributor>Manger, Jake S.</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-12-11T08:55:48Z</dc:date>
    <dc:contributor>Kaushik, Pavan</dc:contributor>
    <dc:contributor>Ogawa, Yuri</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dc:rights>Attribution-NonCommercial-NoDerivatives 4.0 International</dc:rights>
    <dc:contributor>Nordström, Karin</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:title>Combining Unity with machine vision to create low latency, flexible and simple virtual realities</dcterms:title>
    <dc:creator>Turnbull, Luke</dc:creator>
    <dc:creator>Leibbrandt, Richard</dc:creator>
    <dc:contributor>Johnston, Chris</dc:contributor>
    <dcterms:issued>2025-01</dcterms:issued>
    <dc:contributor>Turnbull, Luke</dc:contributor>
    <dc:creator>Mitchell, Jaxon</dc:creator>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by-nc-nd/4.0/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
  </rdf:Description>
</rdf:RDF>

Alliance licence
Corresponding authors from the University of Konstanz present
International co-authors
University bibliography
No
Peer-reviewed
Yes