Publication:

Learning to detect an animal sound from five examples

Files

Nolasco_2-1hylp3mwpbrne2.pdf
Size: 4.25 MB, Downloads: 32

Date

2023

Authors

Nolasco, Ines
Singh, Shubhr
Morfi, Veronica
Lostanlen, Vincent
Strandburg-Peshkin, Ariana
Vidaña-Vila, Ester
Gill, Lisa
Pamuła, Hanna
Grout, Emily
Stowell, Dan

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

arXiv ID

International patent number

Link to license

Research funding information

Project

Open Access publication
Open Access Hybrid
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Journal article
Publication status
Published

Published in

Ecological Informatics. Elsevier. 2023, 77, 102258. ISSN 1574-9541. eISSN 1878-0512. Available under: doi: 10.1016/j.ecoinf.2023.102258

Abstract

Automatic detection and classification of animal sounds has many applications in biodiversity monitoring and animal behaviour. In the past twenty years, the volume of digitised wildlife sound available has massively increased, and automatic classification through deep learning now shows strong results. However, bioacoustics is not a single task but a vast range of small-scale tasks (such as individual ID, call type, emotional indication) with wide variety in data characteristics, and most bioacoustic tasks do not come with strongly-labelled training data. The standard paradigm of supervised learning, focussed on a single large-scale dataset and/or a generic pre-trained algorithm, is insufficient. In this work we recast bioacoustic sound event detection within the AI framework of few-shot learning. We adapt this framework to sound event detection, such that a system can be given the annotated start/end times of as few as 5 events, and can then detect events in long-duration audio, even when the sound category was not known at the time of algorithm training. We introduce a collection of open datasets designed to strongly test a system's ability to perform few-shot sound event detection, and we present the results of a public contest to address the task. Our analysis shows that prototypical networks are a very commonly used strategy, and that they perform well when enhanced with adaptations for the general characteristics of animal sounds. However, systems with high time resolution perform best in this challenge. We demonstrate that widely varying sound event durations are an important factor in performance, as is non-stationarity, i.e. gradual changes in conditions throughout the duration of a recording. For fine-grained bioacoustic recognition tasks without massive annotated training data, our analysis demonstrates that few-shot sound event detection is a powerful new method, strongly outperforming traditional signal-processing detection methods in the fully automated scenario.
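The few-shot setup the abstract describes can be illustrated with a short sketch of the nearest-prototype idea behind prototypical networks. This is a minimal toy example, not the paper's challenge baseline: the log-spectrum "embedding", the frame sizes, and all names are illustrative assumptions, and a real system would use a trained neural network as the embedding.

# Toy sketch of few-shot sound event detection with a nearest-prototype
# rule. All names and the log-spectrum "embedding" are illustrative
# assumptions; this is not the paper's actual challenge baseline.
import numpy as np

N_SHOTS = 5        # number of annotated example events (the "five examples")
FRAME_LEN = 1024   # samples per analysis frame
HOP = 512          # hop between frame starts

def embed(frame):
    """Toy frame embedding: log-magnitude spectrum (stand-in for a trained network)."""
    return np.log1p(np.abs(np.fft.rfft(frame * np.hanning(len(frame)))))

def frames(audio):
    """Slice a long recording into overlapping fixed-length frames."""
    for start in range(0, len(audio) - FRAME_LEN + 1, HOP):
        yield start, audio[start:start + FRAME_LEN]

def detect(audio, support_events, sr):
    """Return frame start times (in seconds) judged to contain the target sound.

    support_events: (start_s, end_s) pairs for the N_SHOTS annotated events.
    """
    def inside(s):
        return any(a * sr <= s < b * sr for a, b in support_events)
    # Positive prototype: mean embedding of frames inside the annotated events;
    # negative prototype: mean embedding of everything else.
    p_pos = np.mean([embed(f) for s, f in frames(audio) if inside(s)], axis=0)
    p_neg = np.mean([embed(f) for s, f in frames(audio) if not inside(s)], axis=0)
    # Nearest-prototype rule: flag frames closer to the positive prototype.
    return [s / sr for s, f in frames(audio)
            if np.linalg.norm(embed(f) - p_pos) < np.linalg.norm(embed(f) - p_neg)]

# Tiny demonstration on synthetic audio: five 2 kHz tone bursts in noise.
# Here the support events are the only events, so the detector simply
# re-finds them; in the real task the recording continues with further,
# unannotated events to detect.
sr = 16_000
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(10 * sr)
events = [(1.0, 1.2), (2.5, 2.7), (4.0, 4.2), (6.0, 6.2), (8.0, 8.2)]
t = np.arange(int(0.2 * sr)) / sr
for a, b in events:
    audio[int(a * sr):int(a * sr) + len(t)] += np.sin(2 * np.pi * 2000 * t)
print(detect(audio, events[:N_SHOTS], sr))

Only the five annotated events are needed to define the positive class, with no retraining; this is what lets the approach handle sound categories unseen during algorithm training.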

Abstract in another language

Subject (DDC)
570 Life sciences, biology

Keywords

Conference

Review

Research project

Organisational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
NOLASCO, Ines, Shubhr SINGH, Veronica MORFI, Vincent LOSTANLEN, Ariana STRANDBURG-PESHKIN, Ester VIDAÑA-VILA, Lisa GILL, Hanna PAMUŁA, Emily GROUT, Dan STOWELL, 2023. Learning to detect an animal sound from five examples. In: Ecological Informatics. Elsevier. 2023, 77, 102258. ISSN 1574-9541. eISSN 1878-0512. Available under: doi: 10.1016/j.ecoinf.2023.102258
BibTeX
@article{Nolasco2023Learn-68579,
  year={2023},
  doi={10.1016/j.ecoinf.2023.102258},
  title={Learning to detect an animal sound from five examples},
  volume={77},
  issn={1574-9541},
  journal={Ecological Informatics},
  author={Nolasco, Ines and Singh, Shubhr and Morfi, Veronica and Lostanlen, Vincent and Strandburg-Peshkin, Ariana and Vidaña-Vila, Ester and Gill, Lisa and Pamuła, Hanna and Grout, Emily and Stowell, Dan},
  note={Article Number: 102258}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/68579">
    <dcterms:rights rdf:resource="http://creativecommons.org/licenses/by/4.0/"/>
    <dc:contributor>Vidaña-Vila, Ester</dc:contributor>
    <dc:contributor>Grout, Emily</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
    <dc:contributor>Stowell, Dan</dc:contributor>
    <dc:creator>Gill, Lisa</dc:creator>
    <dc:contributor>Strandburg-Peshkin, Ariana</dc:contributor>
    <dc:creator>Stowell, Dan</dc:creator>
    <dc:creator>Morfi, Veronica</dc:creator>
    <dc:creator>Grout, Emily</dc:creator>
    <dcterms:hasPart rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/68579/1/Nolasco_2-1hylp3mwpbrne2.pdf"/>
    <dc:contributor>Lostanlen, Vincent</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/68579"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:contributor>Singh, Shubhr</dc:contributor>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-12-05T09:50:30Z</dc:date>
    <dcterms:issued>2023</dcterms:issued>
    <dcterms:title>Learning to detect an animal sound from five examples</dcterms:title>
    <dc:contributor>Morfi, Veronica</dc:contributor>
    <dc:creator>Singh, Shubhr</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dc:creator>Strandburg-Peshkin, Ariana</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/28"/>
    <dcterms:abstract>Automatic detection and classification of animal sounds has many applications in biodiversity monitoring and animal behavior. In the past twenty years, the volume of digitised wildlife sound available has massively increased, and automatic classification through deep learning now shows strong results. However, bioacoustics is not a single task but a vast range of small-scale tasks (such as individual ID, call type, emotional indication) with wide variety in data characteristics, and most bioacoustic tasks do not come with strongly-labelled training data. The standard paradigm of supervised learning, focussed on a single large-scale dataset and/or a generic pre-trained algorithm, is insufficient. In this work we recast bioacoustic sound event detection within the AI framework of few-shot learning. We adapt this framework to sound event detection, such that a system can be given the annotated start/end times of as few as 5 events, and can then detect events in long-duration audio—even when the sound category was not known at the time of algorithm training. We introduce a collection of open datasets designed to strongly test a system's ability to perform few-shot sound event detections, and we present the results of a public contest to address the task. Our analysis shows that prototypical networks are a very common used strategy and they perform well when enhanced with adaptations for general characteristics of animal sounds. However, systems with high time resolution capabilities perform the best in this challenge. We demonstrate that widely-varying sound event durations are an important factor in performance, as well as non-stationarity, i.e. gradual changes in conditions throughout the duration of a recording. For fine-grained bioacoustic recognition tasks without massive annotated training data, our analysis demonstrate that few-shot sound event detection is a powerful new method, strongly outperforming traditional signal-processing detection methods in the fully automated scenario.</dcterms:abstract>
    <dc:creator>Vidaña-Vila, Ester</dc:creator>
    <dc:language>eng</dc:language>
    <dc:rights>Attribution 4.0 International</dc:rights>
    <dc:contributor>Nolasco, Ines</dc:contributor>
    <dc:contributor>Gill, Lisa</dc:contributor>
    <dc:creator>Pamuła, Hanna</dc:creator>
    <dc:creator>Nolasco, Ines</dc:creator>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/43615"/>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2023-12-05T09:50:30Z</dcterms:available>
    <dspace:hasBitstream rdf:resource="https://kops.uni-konstanz.de/bitstream/123456789/68579/1/Nolasco_2-1hylp3mwpbrne2.pdf"/>
    <dc:creator>Lostanlen, Vincent</dc:creator>
    <dc:contributor>Pamuła, Hanna</dc:contributor>
  </rdf:Description>
</rdf:RDF>
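
The RDF/XML record above can also be consumed programmatically. A minimal sketch, assuming the snippet has been saved to a local file named record.rdf (a hypothetical filename) and using rdflib as one possible RDF/XML parser:

# Minimal sketch: reading the RDF/XML record with rdflib.
# "record.rdf" is a hypothetical local copy of the snippet above.
from rdflib import Graph, URIRef
from rdflib.namespace import DC, DCTERMS

RECORD = URIRef("https://kops.uni-konstanz.de/server/rdf/resource/123456789/68579")

g = Graph()
g.parse("record.rdf", format="xml")

print(g.value(RECORD, DCTERMS.title))           # the dcterms:title literal
for creator in g.objects(RECORD, DC.creator):   # dc:creator entries, unordered
    print(creator)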

Internal note


Contact
URL of the original publication

Date URL last checked

Examination date of the dissertation

Funding type

Comment on the publication

Alliance license
Corresponding authors at the University of Konstanz
International co-authors
University bibliography
Yes
Peer reviewed
Yes