Publication:

BABEL: Bodies, Action and Behavior with English Labels


Files

There are no files associated with this document.

Date

2021

Authors

Punnakkal, Abhinanda R.
Chandrasekaran, Arjun
Athanasiou, Nikos
Quirós-Ramírez, M. Alejandra
Black, Michael J.

Editors

Contact

Journal ISSN

Electronic ISSN

ISBN

Bibliographic data

Publisher

Series

Edition

URI (citable link)
ArXiv ID

International patent number

Research funding information

Project

Open Access publication
Core Facility of the University of Konstanz

Embargoed until

Title in another language

Publication type
Contribution to a conference proceedings
Publication status
Published

Published in

Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021, pp. 722-731. ISBN 978-1-66544-509-2. Available under: doi: 10.1109/CVPR46437.2021.00078

Abstract

Understanding the semantics of human movement – the what, how and why of the movement – is an important problem that requires datasets of human actions with semantic labels. Existing datasets take one of two approaches. Large-scale video datasets contain many action labels but do not contain ground-truth 3D human motion. Alternatively, motion-capture (mocap) datasets have precise body motions but are limited to a small number of actions. To address this, we present BABEL, a large dataset with language labels describing the actions being performed in mocap sequences. BABEL consists of language labels for over 43 hours of mocap sequences from AMASS, containing over 250 unique actions. Each action label in BABEL is precisely aligned with the duration of the corresponding action in the mocap sequence. BABEL also allows overlap of multiple actions, which may each span different durations. This results in a total of over 66,000 action segments. The dense annotations can be leveraged for tasks like action recognition, temporal localization, motion synthesis, etc. To demonstrate the value of BABEL as a benchmark, we evaluate the performance of models on 3D action recognition. We demonstrate that BABEL poses interesting learning challenges that are applicable to real-world scenarios, and can serve as a useful benchmark for progress in 3D action recognition. The dataset, baseline methods, and evaluation code are available and supported for academic research purposes at https://babel.is.tue.mpg.de/.
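
The structural point of the abstract is that every label is aligned to a time span within a mocap sequence and that several labeled actions may be active at once. The following minimal Python sketch illustrates one way such duration-aligned, possibly overlapping segments could be represented and queried; the class and field names (ActionSegment, action, start, end) are illustrative assumptions, not the actual BABEL file format or loader.

from dataclasses import dataclass
from typing import List

@dataclass
class ActionSegment:
    """One labeled span of a mocap sequence, with times in seconds."""
    action: str   # English action label, e.g. "walk"
    start: float  # start of the labeled span within the sequence
    end: float    # end of the labeled span within the sequence

def actions_at(segments: List[ActionSegment], t: float) -> List[str]:
    """Return all action labels active at time t; segments may overlap."""
    return [s.action for s in segments if s.start <= t < s.end]

# Toy example: "wave" overlaps with part of "walk".
seq = [ActionSegment("walk", 0.0, 4.0),
       ActionSegment("wave", 2.5, 3.5),
       ActionSegment("stand", 4.0, 6.0)]
print(actions_at(seq, 3.0))  # ['walk', 'wave']
print(actions_at(seq, 5.0))  # ['stand']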

Abstract in another language

Subject (DDC)
004 Computer Science

Keywords

Conference

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 20 June 2021 - 25 June 2021, Nashville, TN, USA
Review

Research project

Organizational units

Journal issue

Related datasets in KOPS

Cite

ISO 690
PUNNAKKAL, Abhinanda R., Arjun CHANDRASEKARAN, Nikos ATHANASIOU, M. Alejandra QUIRÓS-RAMÍREZ, Michael J. BLACK, 2021. BABEL: Bodies, Action and Behavior with English Labels. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA, 20 June 2021 - 25 June 2021. In: Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2021, pp. 722-731. ISBN 978-1-66544-509-2. Available under: doi: 10.1109/CVPR46437.2021.00078
BibTeX
@inproceedings{Punnakkal2021BABEL-56520,
  year={2021},
  doi={10.1109/CVPR46437.2021.00078},
  title={BABEL: Bodies, Action and Behavior with English Labels},
  isbn={978-1-66544-509-2},
  publisher={IEEE},
  address={Piscataway, NJ},
  booktitle={Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={722--731},
  author={Punnakkal, Abhinanda R. and Chandrasekaran, Arjun and Athanasiou, Nikos and Quirós-Ramírez, M. Alejandra and Black, Michael J.}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/56520">
    <dcterms:title>BABEL: Bodies, Action and Behavior with English Labels</dcterms:title>
    <dc:creator>Punnakkal, Abhinanda R.</dc:creator>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:creator>Black, Michael J.</dc:creator>
    <dcterms:abstract xml:lang="eng">Understanding the semantics of human movement – the what, how and why of the movement – is an important problem that requires datasets of human actions with semantic labels. Existing datasets take one of two approaches. Large-scale video datasets contain many action labels but do not contain ground-truth 3D human motion. Alternatively, motion-capture (mocap) datasets have precise body motions but are limited to a small number of actions. To address this, we present BABEL, a large dataset with language labels describing the actions being performed in mocap sequences. BABEL consists of language labels for over 43 hours of mocap sequences from AMASS, containing over 250 unique actions. Each action label in BABEL is precisely aligned with the duration of the corresponding action in the mocap sequence. BABEL also allows overlap of multiple actions, which may each span different durations. This results in a total of over 66,000 action segments. The dense annotations can be leveraged for tasks like action recognition, temporal localization, motion synthesis, etc. To demonstrate the value of BABEL as a benchmark, we evaluate the performance of models on 3D action recognition. We demonstrate that BABEL poses interesting learning challenges that are applicable to real-world scenarios, and can serve as a useful benchmark for progress in 3D action recognition. The dataset, baseline methods, and evaluation code are available and supported for academic research purposes at https://babel.is.tue.mpg.de/.</dcterms:abstract>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-02-10T13:33:17Z</dc:date>
    <dc:contributor>Athanasiou, Nikos</dc:contributor>
    <dc:creator>Athanasiou, Nikos</dc:creator>
    <dc:creator>Quirós-Ramírez, M. Alejandra</dc:creator>
    <dc:contributor>Punnakkal, Abhinanda R.</dc:contributor>
    <dc:creator>Chandrasekaran, Arjun</dc:creator>
    <dc:contributor>Chandrasekaran, Arjun</dc:contributor>
    <dc:contributor>Quirós-Ramírez, M. Alejandra</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-02-10T13:33:17Z</dcterms:available>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/56520"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Black, Michael J.</dc:contributor>
    <dcterms:issued>2021</dcterms:issued>
    <dc:language>eng</dc:language>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
  </rdf:Description>
</rdf:RDF>

Internal note


Contact
URL of the original publication

Date URL was checked

Date of dissertation examination

Type of funding

Comment on the publication

Alliance license
Corresponding authors at the University of Konstanz present
International co-authors
University bibliography
Yes
Peer reviewed
Unknown