DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning
Abstract
Quantitative behavioral measurements are important for answering questions across scientific disciplines, from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal's body parts directly from images or videos. However, currently available animal pose estimation methods have limitations in speed and robustness. Here we introduce a new easy-to-use software toolkit, DeepPoseKit, that addresses these problems using an efficient multi-scale deep-learning model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoint locations with subpixel precision. These advances improve processing speed more than 2× with no loss in accuracy compared to currently available methods. We demonstrate the versatility of our methods with multiple challenging animal pose estimation tasks in laboratory and field settings, including groups of interacting individuals. Our work reduces barriers to using advanced tools for measuring behavior and has broad applicability across the behavioral sciences.
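The abstract mentions a fast GPU-based peak detector that recovers keypoint locations from model confidence maps with subpixel precision. As a minimal, CPU-only sketch of the underlying idea — three-point quadratic interpolation around the integer argmax of a confidence map — the following illustrates how subpixel refinement works in principle. This is an illustration of the general technique, not DeepPoseKit's actual implementation:

```python
def subpixel_peak(conf):
    """Locate the maximum of a 2D confidence map with subpixel precision.

    Finds the integer argmax, then refines each axis independently by
    fitting a quadratic through the peak value and its two neighbours
    (standard three-point interpolation). Illustrative only; the real
    toolkit runs this kind of operation on the GPU.
    """
    rows, cols = len(conf), len(conf[0])
    # Integer-valued argmax over the confidence map.
    r, c = max(((i, j) for i in range(rows) for j in range(cols)),
               key=lambda rc: conf[rc[0]][rc[1]])

    def refine(prev, peak, nxt):
        # Vertex offset of the parabola through three equally spaced samples.
        denom = prev - 2.0 * peak + nxt
        return 0.0 if denom == 0 else 0.5 * (prev - nxt) / denom

    # Skip refinement when the peak sits on the map border.
    dr = refine(conf[r - 1][c], conf[r][c], conf[r + 1][c]) if 0 < r < rows - 1 else 0.0
    dc = refine(conf[r][c - 1], conf[r][c], conf[r][c + 1]) if 0 < c < cols - 1 else 0.0
    return r + dr, c + dc
```

For a confidence map whose values fall off quadratically from the true peak, this refinement recovers the peak coordinates exactly; for smooth peaked maps in general it reduces the half-pixel quantization error of the plain argmax.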
Cite
ISO 690
GRAVING, Jacob M., Daniel CHAE, Hemal NAIK, Liang LI, Benjamin KOGER, Blair R. COSTELLOE, Iain D. COUZIN, 2019. DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning. In: eLife. eLife Sciences Publications. 2019, 8, e47994. eISSN 2050-084X. Available under: doi: 10.7554/eLife.47994
BibTeX
@article{Graving2019-10-01DeepP-47135,
  year    = {2019},
  doi     = {10.7554/eLife.47994},
  title   = {DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
  volume  = {8},
  journal = {eLife},
  author  = {Graving, Jacob M. and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R. and Couzin, Iain D.},
  note    = {Article Number: e47994}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:void="http://rdfs.org/ns/void#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/47135"> <dc:creator>Naik, Hemal</dc:creator> <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-10-08T11:39:15Z</dc:date> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/> <dcterms:title>DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning</dcterms:title> <dc:creator>Chae, Daniel</dc:creator> <foaf:homepage rdf:resource="http://localhost:8080/"/> <dc:rights>Attribution 4.0 International</dc:rights> <dc:contributor>Naik, Hemal</dc:contributor> <dcterms:abstract xml:lang="eng">Quantitative behavioral measurements are important for answering questions across scientific disciplines-from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal's body parts directly from images or videos. However, currently-available animal pose estimation methods have limitations in speed and robustness. Here we introduce a new easy-to-use software toolkit, DeepPoseKit, that addresses these problems using an efficient multi-scale deep-learning model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoint locations with subpixel precision. These advances improve processing speed >2× with no loss in accuracy compared to currently-available methods. 
We demonstrate the versatility of our methods with multiple challenging animal pose estimation tasks in laboratory and field settings-including groups of interacting individuals. Our work reduces barriers to using advanced tools for measuring behavior and has broad applicability across the behavioral sciences.</dcterms:abstract> <dc:creator>Costelloe, Blair R.</dc:creator> <dc:contributor>Couzin, Iain D.</dc:contributor> <dc:contributor>Graving, Jacob M.</dc:contributor> <dc:contributor>Koger, Benjamin</dc:contributor> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/> <dc:creator>Koger, Benjamin</dc:creator> <dc:contributor>Li, Liang</dc:contributor> <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/47135/3/Graving_2-1u19syoqeazrh4.pdf"/> <dc:contributor>Costelloe, Blair R.</dc:contributor> <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/> <dc:contributor>Chae, Daniel</dc:contributor> <dcterms:issued>2019-10-01</dcterms:issued> <dc:language>eng</dc:language> <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/> <dc:creator>Li, Liang</dc:creator> <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2019-10-08T11:39:15Z</dcterms:available> <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/47135"/> <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/> <dc:creator>Graving, Jacob M.</dc:creator> <dc:creator>Couzin, Iain D.</dc:creator> <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/47135/3/Graving_2-1u19syoqeazrh4.pdf"/> <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/> </rdf:Description> </rdf:RDF>