Publication:

Comparison of manual, machine learning, and hybrid methods for video annotation to extract parental care data

Files

Chan_2-1xyhnr480mo6u0.pdf (size: 1.05 MB)

Date

2024

Authors

Chan, Hoi Hang
Liu, Jingqi
Burke, Terry
Pearse, William D.
Schroeder, Julia

DOI (citable link)
https://doi.org/10.1111/jav.03167

Open Access publication
Open Access Gold

Publication type
Journal article
Publication status
Published

Published in

Journal of Avian Biology. Wiley. 2024, 2024(3-4), e03167. ISSN 0908-8857. eISSN 1600-048X. Available at: doi: 10.1111/jav.03167

Abstract

Measuring parental care behaviour in the wild is central to the study of animal ecology and evolution, but it is often labour- and time-intensive. Efficient open-source tools have recently emerged that allow animal behaviour to be quantified from videos using machine learning and computer vision techniques, but there is limited appraisal of how these tools perform compared to traditional methods. To gain insight into how different methods perform in extracting data from videos taken in the field, we compared estimates of the parental provisioning rate of wild house sparrows Passer domesticus from video recordings. We compared four methods: manual annotation by experts, crowd-sourcing, automatic detection based on the open-source software DeepMeerkat, and a hybrid annotation method. We found that the data collected by the automatic method correlated with expert annotation (r = 0.62) and further show that these data are biologically meaningful as they predict brood survival. However, the automatic method produced largely biased estimates due to the detection of non-visitation events, while the crowd-sourcing and hybrid annotation produced estimates that are equivalent to expert annotation. The hybrid annotation method takes approximately 20% of annotation time compared to manual annotation, making it a more cost-effective way to collect data from videos. We provide a successful case study of how different approaches can be adopted and evaluated with a pre-existing dataset, to make informed decisions on the best way to process video datasets. If pre-existing frameworks produce biased estimates, we encourage researchers to adopt a hybrid approach of first using machine learning frameworks to preprocess videos, and then to do manual annotation to save annotation time. As open-source machine learning tools are becoming more accessible, we encourage biologists to make use of these tools to cut annotation time but still get equally accurate results without the need to develop novel algorithms from scratch.
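
The method comparison described in the abstract can be illustrated with a minimal Python sketch. It computes Pearson's r and the mean bias between provisioning-rate estimates obtained by two annotation methods for the same set of videos; the values and variable names below are hypothetical and do not reproduce the published analysis.

# Minimal sketch with hypothetical values: agreement and bias between two
# annotation methods that estimated provisioning rates for the same videos.
from scipy.stats import pearsonr

expert    = [4.0, 6.5, 3.0, 8.0, 5.5]   # visits per hour, manual expert annotation
automatic = [5.0, 7.0, 4.5, 9.5, 6.0]   # visits per hour, automatic detection

r, p = pearsonr(expert, automatic)                                  # correlation between methods
bias = sum(a - e for a, e in zip(automatic, expert)) / len(expert)  # mean over-/underestimation

print(f"Pearson r = {r:.2f} (p = {p:.3f})")
print(f"Mean bias (automatic - expert) = {bias:+.2f} visits per hour")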

Subject (DDC)
570 Life sciences, biology

Cite

ISO 690
CHAN, Hoi Hang, Jingqi LIU, Terry BURKE, William D. PEARSE, Julia SCHROEDER, 2024. Comparison of manual, machine learning, and hybrid methods for video annotation to extract parental care data. In: Journal of Avian Biology. Wiley. 2024, 2024(3-4), e03167. ISSN 0908-8857. eISSN 1600-048X. Available at: doi: 10.1111/jav.03167
BibTeX
@article{Chan2024-03Compa-68940,
  year={2024},
  doi={10.1111/jav.03167},
  title={Comparison of manual, machine learning, and hybrid methods for video annotation to extract parental care data},
  number={3-4},
  volume={2024},
  issn={0908-8857},
  journal={Journal of Avian Biology},
  author={Chan, Hoi Hang and Liu, Jingqi and Burke, Terry and Pearse, William D. and Schroeder, Julia},
  note={Article Number: e03167}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/68940">
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-01-05T08:42:42Z</dcterms:available>
    <dc:creator>Schroeder, Julia</dc:creator>
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/3.0/"/>
    <dc:contributor>Burke, Terry</dc:contributor>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/68940"/>
    <dc:contributor>Pearse, William D.</dc:contributor>
    <dc:contributor>Liu, Jingqi</dc:contributor>
    <dc:contributor>Chan, Hoi Hang</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dc:creator>Burke, Terry</dc:creator>
    <dc:contributor>Schroeder, Julia</dc:contributor>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/68940/1/Chan_2-1xyhnr480mo6u0.pdf"/>
    <dcterms:abstract>Measuring parental care behaviour in the wild is central to the study of animal ecology and evolution, but it is often labour- and time-intensive. Efficient open-source tools have recently emerged that allow animal behaviour to be quantified from videos using machine learning and computer vision techniques, but there is limited appraisal of how these tools perform compared to traditional methods. To gain insight into how different methods perform in extracting data from videos taken in the field, we compared estimates of the parental provisioning rate of wild house sparrows Passer domesticus from video recordings. We compared four methods: manual annotation by experts, crowd-sourcing, automatic detection based on the open-source software DeepMeerkat, and a hybrid annotation method. We found that the data collected by the automatic method correlated with expert annotation (r = 0.62) and further show that these data are biologically meaningful as they predict brood survival. However, the automatic method produced largely biased estimates due to the detection of non-visitation events, while the crowd-sourcing and hybrid annotation produced estimates that are equivalent to expert annotation. The hybrid annotation method takes approximately 20% of annotation time compared to manual annotation, making it a more cost-effective way to collect data from videos. We provide a successful case study of how different approaches can be adopted and evaluated with a pre-existing dataset, to make informed decisions on the best way to process video datasets. If pre-existing frameworks produce biased estimates, we encourage researchers to adopt a hybrid approach of first using machine learning frameworks to preprocess videos, and then to do manual annotation to save annotation time. As open-source machine learning tools are becoming more accessible, we encourage biologists to make use of these tools to cut annotation time but still get equally accurate results without the need to develop novel algorithms from scratch.</dcterms:abstract>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-01-05T08:42:42Z</dc:date>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/68940/1/Chan_2-1xyhnr480mo6u0.pdf"/>
    <dc:creator>Chan, Hoi Hang</dc:creator>
    <dc:creator>Pearse, William D.</dc:creator>
    <dcterms:issued>2024-03</dcterms:issued>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:language>eng</dc:language>
    <dc:creator>Liu, Jingqi</dc:creator>
    <dc:rights>Attribution 3.0 Unported</dc:rights>
    <dcterms:title>Comparison of manual, machine learning, and hybrid methods for video annotation to extract parental care data</dcterms:title>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes
Peer-reviewed
Yes