AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations

dc.contributor.authorGiebenhain, Simon
dc.contributor.authorGoldlücke, Bastian
dc.date.accessioned2022-05-17T10:51:33Z
dc.date.available2022-05-17T10:51:33Z
dc.date.issued2021eng
dc.description.abstractThis paper introduces Attentive Implicit Representation Networks (AIR-Nets), a simple, but highly effective architecture for 3D reconstruction from point clouds. Since representing 3D shapes in a local and modular fashion increases generalization and reconstruction quality, AIR-Nets encode an input point cloud into a set of local latent vectors anchored in 3D space, which locally describe the object’s geometry, as well as a global latent description, enforcing global consistency. Our model is the first grid-free, encoder-based approach that locally describes an implicit function. The vector attention mechanism from [62] serves as the main point cloud processing module and allows for permutation invariance and translation equivariance. When queried with a 3D coordinate, our decoder gathers information from the global and nearby local latent vectors in order to predict an occupancy value. Experiments on the ShapeNet dataset [7] show that AIR-Nets significantly outperform previous state-of-the-art encoder-based, implicit shape learning methods and especially dominate in the sparse setting. Furthermore, our model generalizes well to the FAUST dataset [1] in a zero-shot setting. Finally, since AIR-Nets use a sparse latent representation and follow a simple operating scheme, the model offers several exciting avenues for future work. Our code is available at https://github.com/SimonGiebenhain/AIR-Nets.eng
dc.description.versionpublishedde
dc.identifier.doi10.1109/3DV53792.2021.00113eng
dc.identifier.urihttps://kops.uni-konstanz.de/handle/123456789/57559
dc.language.isoengeng
dc.subjectImplicit Functions, Local Shape Representation, 3D Reconstructioneng
dc.subject.ddc004eng
dc.titleAIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representationseng
dc.typeINPROCEEDINGSde
dspace.entity.typePublication
kops.citation.bibtex
@inproceedings{Giebenhain2021AIRNe-57559,
  year={2021},
  doi={10.1109/3DV53792.2021.00113},
  title={AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations},
  isbn={978-1-66542-688-6},
  publisher={IEEE},
  address={Piscataway},
  booktitle={2021 International Conference on 3D Vision, 3DV 2021 : virtual conference ; 1-3 December 2021 : proceedings},
  pages={1054--1064},
  author={Giebenhain, Simon and Goldlücke, Bastian}
}
kops.citation.iso690GIEBENHAIN, Simon, Bastian GOLDLÜCKE, 2021. AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations. International Conference on 3D Vision : 3DV 2021. Online, Dec 1, 2021 - Dec 3, 2021. In: 2021 International Conference on 3D Vision, 3DV 2021 : virtual conference ; 1-3 December 2021 : proceedings. Piscataway: IEEE, 2021, pp. 1054-1064. ISBN 978-1-66542-688-6. Available under: doi: 10.1109/3DV53792.2021.00113eng
kops.citation.rdf
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/57559">
    <dc:contributor>Giebenhain, Simon</dc:contributor>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2021</dcterms:issued>
    <dc:language>eng</dc:language>
    <dcterms:title>AIR-Nets : An Attention-Based Framework for Locally Conditioned Implicit Representations</dcterms:title>
    <dcterms:abstract xml:lang="eng">This paper introduces Attentive Implicit Representation Networks (AIR-Nets), a simple, but highly effective architecture for 3D reconstruction from point clouds. Since representing 3D shapes in a local and modular fashion increases generalization and reconstruction quality, AIR-Nets encode an input point cloud into a set of local latent vectors anchored in 3D space, which locally describe the object’s geometry, as well as a global latent description, enforcing global consistency. Our model is the first grid-free, encoder-based approach that locally describes an implicit function. The vector attention mechanism from [62] serves as the main point cloud processing module and allows for permutation invariance and translation equivariance. When queried with a 3D coordinate, our decoder gathers information from the global and nearby local latent vectors in order to predict an occupancy value. Experiments on the ShapeNet dataset [7] show that AIR-Nets significantly outperform previous state-of-the-art encoder-based, implicit shape learning methods and especially dominate in the sparse setting. Furthermore, our model generalizes well to the FAUST dataset [1] in a zero-shot setting. Finally, since AIR-Nets use a sparse latent representation and follow a simple operating scheme, the model offers several exciting avenues for future work. Our code is available at https://github.com/SimonGiebenhain/AIR-Nets.</dcterms:abstract>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/57559"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-05-17T10:51:33Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-05-17T10:51:33Z</dcterms:available>
    <dc:creator>Giebenhain, Simon</dc:creator>
  </rdf:Description>
</rdf:RDF>
kops.conferencefieldInternational Conference on 3D Vision : 3DV 2021, Dec 1, 2021 - Dec 3, 2021, Onlineeng
kops.date.conferenceEnd2021-12-03eng
kops.date.conferenceStart2021-12-01eng
kops.flag.isPeerReviewedtrueeng
kops.flag.knbibliographytrue
kops.location.conferenceOnlineeng
kops.sourcefield<i>2021 International Conference on 3D Vision, 3DV 2021 : virtual conference ; 1-3 December 2021 : proceedings</i>. Piscataway: IEEE, 2021, pp. 1054-1064. ISBN 978-1-66542-688-6. Available under: doi: 10.1109/3DV53792.2021.00113eng
kops.sourcefield.plain2021 International Conference on 3D Vision, 3DV 2021 : virtual conference ; 1-3 December 2021 : proceedings. Piscataway: IEEE, 2021, pp. 1054-1064. ISBN 978-1-66542-688-6. Available under: doi: 10.1109/3DV53792.2021.00113eng
kops.title.conferenceInternational Conference on 3D Vision : 3DV 2021eng
relation.isAuthorOfPublication4b1153da-7732-4d56-a7c8-76b40cb9c0c1
relation.isAuthorOfPublicationc4ecb499-9c85-4481-832e-af061f18cbdc
relation.isAuthorOfPublication.latestForDiscovery4b1153da-7732-4d56-a7c8-76b40cb9c0c1
source.bibliographicInfo.fromPage1054eng
source.bibliographicInfo.toPage1064eng
source.identifier.isbn978-1-66542-688-6eng
source.publisherIEEEeng
source.publisher.locationPiscatawayeng
source.title2021 International Conference on 3D Vision, 3DV 2021 : virtual conference ; 1-3 December 2021 : proceedingseng