AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations

Date
2021
Publication type
Contribution to a conference proceedings
Publication status
Published
Published in
2021 International Conference on 3D Vision (3DV 2021): virtual conference, 1-3 December 2021: proceedings. Piscataway: IEEE, 2021, pp. 1054-1064. ISBN 978-1-66542-688-6. Available under: doi: 10.1109/3DV53792.2021.00113
Abstract

This paper introduces Attentive Implicit Representation Networks (AIR-Nets), a simple but highly effective architecture for 3D reconstruction from point clouds. Since representing 3D shapes in a local and modular fashion increases generalization and reconstruction quality, AIR-Nets encode an input point cloud into a set of local latent vectors anchored in 3D space, which locally describe the object's geometry, as well as a global latent description, enforcing global consistency. Our model is the first grid-free, encoder-based approach that locally describes an implicit function. The vector attention mechanism from [62] serves as the main point cloud processing module and allows for permutation invariance and translation equivariance. When queried with a 3D coordinate, our decoder gathers information from the global and nearby local latent vectors in order to predict an occupancy value. Experiments on the ShapeNet dataset [7] show that AIR-Nets significantly outperform previous state-of-the-art encoder-based, implicit shape learning methods and especially dominate in the sparse setting. Furthermore, our model generalizes well to the FAUST dataset [1] in a zero-shot setting. Finally, since AIR-Nets use a sparse latent representation and follow a simple operating scheme, the model offers several exciting avenues for future work. Our code is available at https://github.com/SimonGiebenhain/AIR-Nets.
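The decoding scheme the abstract describes (query a 3D coordinate, gather the global latent and the nearby local latents, predict occupancy) can be illustrated with a minimal, hypothetical sketch. Everything here is a toy stand-in for the learned modules in the paper: the distance-based weights substitute for the learned attention, and the final sigmoid over a feature sum substitutes for the decoder MLP; function and variable names are illustrative, not from the authors' code.

```python
import numpy as np

def query_occupancy(query, anchors, local_latents, global_latent, k=3):
    """Toy sketch of locally conditioned implicit decoding:
    aggregate the k local latent vectors anchored nearest to the
    query point, combine with the global latent, and map the result
    to an occupancy probability in (0, 1)."""
    d = np.linalg.norm(anchors - query, axis=1)       # distance to each anchor
    idx = np.argsort(d)[:k]                           # k nearest anchors
    w = np.exp(-d[idx])                               # softmax-style weights
    w /= w.sum()
    local = (w[:, None] * local_latents[idx]).sum(0)  # aggregated local code
    feat = np.concatenate([local, global_latent])     # local + global context
    logit = feat.sum()                                # placeholder "decoder"
    return 1.0 / (1.0 + np.exp(-logit))               # occupancy probability
```

Because conditioning depends only on anchor-relative distances, moving the query and all anchors by the same offset leaves the prediction unchanged, which mirrors the translation-equivariance property the abstract attributes to the vector attention encoder.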

Subject area (DDC)
004 Computer Science
Keywords
Implicit Functions, Local Shape Representation, 3D Reconstruction
Conference
International Conference on 3D Vision (3DV 2021), 1-3 December 2021, online
Cite
ISO 690: GIEBENHAIN, Simon, Bastian GOLDLÜCKE, 2021. AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations. International Conference on 3D Vision (3DV 2021), online, 1-3 December 2021. In: 2021 International Conference on 3D Vision (3DV 2021): virtual conference, 1-3 December 2021: proceedings. Piscataway: IEEE, 2021, pp. 1054-1064. ISBN 978-1-66542-688-6. Available under: doi: 10.1109/3DV53792.2021.00113
BibTex
@inproceedings{Giebenhain2021AIRNe-57559,
  year={2021},
  doi={10.1109/3DV53792.2021.00113},
  title={AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations},
  isbn={978-1-66542-688-6},
  publisher={IEEE},
  address={Piscataway},
  booktitle={2021 International Conference on 3D Vision (3DV 2021): virtual conference, 1-3 December 2021: proceedings},
  pages={1054--1064},
  author={Giebenhain, Simon and Goldlücke, Bastian}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/57559">
    <dc:contributor>Giebenhain, Simon</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2021</dcterms:issued>
    <dc:language>eng</dc:language>
    <dcterms:title>AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations</dcterms:title>
    <dcterms:abstract xml:lang="eng">This paper introduces Attentive Implicit Representation Networks (AIR-Nets), a simple but highly effective architecture for 3D reconstruction from point clouds. Since representing 3D shapes in a local and modular fashion increases generalization and reconstruction quality, AIR-Nets encode an input point cloud into a set of local latent vectors anchored in 3D space, which locally describe the object's geometry, as well as a global latent description, enforcing global consistency. Our model is the first grid-free, encoder-based approach that locally describes an implicit function. The vector attention mechanism from [62] serves as the main point cloud processing module and allows for permutation invariance and translation equivariance. When queried with a 3D coordinate, our decoder gathers information from the global and nearby local latent vectors in order to predict an occupancy value. Experiments on the ShapeNet dataset [7] show that AIR-Nets significantly outperform previous state-of-the-art encoder-based, implicit shape learning methods and especially dominate in the sparse setting. Furthermore, our model generalizes well to the FAUST dataset [1] in a zero-shot setting. Finally, since AIR-Nets use a sparse latent representation and follow a simple operating scheme, the model offers several exciting avenues for future work. Our code is available at https://github.com/SimonGiebenhain/AIR-Nets.</dcterms:abstract>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/57559"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-05-17T10:51:33Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-05-17T10:51:33Z</dcterms:available>
    <dc:creator>Giebenhain, Simon</dc:creator>
  </rdf:Description>
</rdf:RDF>
University bibliography
Yes
Peer-reviewed
Yes