Publication:

3D-POP : An Automated Annotation Approach to Facilitate Markerless 2D-3D Tracking of Freely Moving Birds with Marker-Based Motion Capture


Files

Naik_2-1sddlvjqt9nu76.PDF (size: 855.53 KB, downloads: 59)

Date

2023

Authors

Naik, Hemal
Chan, Hoi Hang
Yang, Junran
Delacoux, Mathilde
Couzin, Iain D.
Kano, Fumihiro
Nagy, Mate

Open Access publication
Open Access Green
Core Facility of the University of Konstanz

Publication type
Conference proceedings contribution
Publication status
Published

Published in

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE, 2023, pp. 21274-21284. ISBN 979-8-3503-0129-8. Available at: doi: 10.1109/cvpr52729.2023.02038

Abstract

Recent advances in machine learning and computer vision are revolutionizing the field of animal behavior by enabling researchers to track the poses and locations of freely moving animals without any marker attachment. However, large datasets of annotated images of animals for markerless pose tracking, especially high-resolution images taken from multiple angles with accurate 3D annotations, are still scant. Here, we propose a method that uses a motion capture (mo-cap) system to obtain a large amount of annotated data on animal movement and posture (2D and 3D) in a semi-automatic manner. Our method is novel in that it extracts the 3D positions of morphological keypoints (e.g., eyes, beak, tail) in reference to the positions of markers attached to the animals. Using this method, we obtained, and offer here, a new dataset, 3D-POP, with approximately 300k annotated frames (4 million instances) in the form of videos of groups of one to ten freely moving birds from 4 different camera views in a 3.6 m × 4.2 m area. 3D-POP is the first dataset of flocking birds with accurate keypoint annotations in 2D and 3D along with bounding boxes and individual identities, and it will facilitate the development of solutions for problems of 2D to 3D markerless pose estimation, trajectory tracking, and identification in birds.
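The core idea in the abstract — deriving keypoint annotations from marker positions — can be sketched in a few lines. This is a hypothetical illustration, not the authors' released code: the function names, the per-frame rigid pose (R, t) recovered from the mo-cap markers, and the camera matrix P are all assumptions. It only shows how fixed body-frame keypoint offsets combined with a marker-derived rigid-body pose yield 3D keypoints, which a pinhole projection then turns into 2D labels for each camera view.

```python
# Hypothetical sketch of marker-referenced keypoint annotation (assumed names).
import numpy as np

def keypoints_3d(R, t, offsets):
    """World-frame keypoints from a marker-derived rigid pose.

    R: (3, 3) rotation and t: (3,) translation of the bird's marker rigid
    body in a given frame; offsets: (K, 3) keypoint positions in the rigid
    body's local frame (measured once per individual).
    """
    return offsets @ R.T + t  # (K, 3) world coordinates

def project(P, X):
    """Pinhole projection of 3D points X (K, 3) with a 3x4 camera matrix P."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])  # homogeneous coordinates
    x = Xh @ P.T                                   # (K, 3) image-plane points
    return x[:, :2] / x[:, 2:3]                    # (K, 2) pixel coordinates
```

Because the offsets are fixed in the body frame, only the rigid pose per frame is needed to annotate every frame and every view automatically, which is what makes the approach semi-automatic at this scale.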

Subject area (DDC)
570 Life sciences, biology

Conference

2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 17, 2023 - June 24, 2023, Vancouver, BC, Canada

Related datasets in KOPS

Dataset
3D-POP - An automated annotation approach to facilitate markerless 2D-3D tracking of freely moving birds with marker-based motion capture
(V6, 2023) Naik, Hemal; Chan, Hoi Hang; Yang, Junran; Delacoux, Mathilde; Couzin, Iain D.; Kano, Fumihiro; Nagy, Mate

Cite

ISO 690
NAIK, Hemal, Hoi Hang CHAN, Junran YANG, Mathilde DELACOUX, Iain D. COUZIN, Fumihiro KANO, Mate NAGY, 2023. 3D-POP : An Automated Annotation Approach to Facilitate Markerless 2D-3D Tracking of Freely Moving Birds with Marker-Based Motion Capture. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, BC, Canada, June 17, 2023 - June 24, 2023. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE, 2023, pp. 21274-21284. ISBN 979-8-3503-0129-8. Available at: doi: 10.1109/cvpr52729.2023.02038
BibTex
@inproceedings{Naik2023-063DPOP-67783,
  title={3D-POP : An Automated Annotation Approach to Facilitate Markerless 2D-3D Tracking of Freely Moving Birds with Marker-Based Motion Capture},
  year={2023},
  doi={10.1109/cvpr52729.2023.02038},
  isbn={979-8-3503-0129-8},
  address={Piscataway, NJ},
  publisher={IEEE},
  booktitle={2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={21274--21284},
  author={Naik, Hemal and Chan, Hoi Hang and Yang, Junran and Delacoux, Mathilde and Couzin, Iain D. and Kano, Fumihiro and Nagy, Mate}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/67783">
    <dcterms:issued>2023-06</dcterms:issued>
    <dc:creator>Yang, Junran</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/67783"/>
    <dc:creator>Couzin, Iain D.</dc:creator>
    <dc:language>eng</dc:language>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/67783/1/Naik_2-1sddlvjqt9nu76.PDF"/>
    <dc:contributor>Naik, Hemal</dc:contributor>
    <dc:contributor>Delacoux, Mathilde</dc:contributor>
    <dc:contributor>Yang, Junran</dc:contributor>
    <dc:creator>Chan, Hoi Hang</dc:creator>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-09-13T14:09:56Z</dcterms:available>
    <dc:contributor>Chan, Hoi Hang</dc:contributor>
    <dc:creator>Kano, Fumihiro</dc:creator>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/67783/1/Naik_2-1sddlvjqt9nu76.PDF"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dc:contributor>Couzin, Iain D.</dc:contributor>
    <dc:creator>Nagy, Mate</dc:creator>
    <dc:contributor>Kano, Fumihiro</dc:contributor>
    <dcterms:abstract>Recent advances in machine learning and computer vision are revolutionizing the field of animal behavior by enabling researchers to track the poses and locations of freely moving animals without any marker attachment. However, large datasets of annotated images of animals for markerless pose tracking, especially high-resolution images taken from multiple angles with accurate 3D annotations, are still scant. Here, we propose a method that uses a motion capture (mo-cap) system to obtain a large amount of annotated data on animal movement and posture (2D and 3D) in a semi-automatic manner. Our method is novel in that it extracts the 3D positions of morphological keypoints (e.g eyes, beak, tail) in reference to the positions of markers attached to the animals. Using this method, we obtained, and offer here, a new dataset - 3D-POP with approximately 300k annotated frames (4 million instances) in the form of videos having groups of one to ten freely moving birds from 4 different camera views in a 3.6m x 4.2m area. 3D-POP is the first dataset of flocking birds with accurate keypoint annotations in 2D and 3D along with bounding box and individual identities and will facilitate the development of solutions for problems of 2D to 3D markerless pose, trajectory tracking, and identification in birds.</dcterms:abstract>
    <dc:creator>Delacoux, Mathilde</dc:creator>
    <dc:contributor>Nagy, Mate</dc:contributor>
    <dcterms:title>3D-POP : An Automated Annotation Approach to Facilitate Markerless 2D-3D Tracking of Freely Moving Birds with Marker-Based Motion Capture</dcterms:title>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-09-13T14:09:56Z</dc:date>
    <dc:creator>Naik, Hemal</dc:creator>
  </rdf:Description>
</rdf:RDF>

Alliance license
Corresponding authors from the University of Konstanz present
International co-authors
University bibliography
Yes
Peer-reviewed