Style Agnostic 3D Reconstruction via Adversarial Style Transfer

Date
2022
Publication type
Contribution to a conference collection
Publication status
Published
Published in
2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii. - Piscataway : IEEE, 2022. - pp. 2273-2282. - ISBN 978-1-66540-915-5
Abstract
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables differentiable-rendering-based learning of 3D objects from images with backgrounds without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that translates the input image domain into the rendered image domain. This allows us to directly compare a translated image with the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and outperforms constrained methods for single-view 3D object reconstruction on this task.
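The training objective described in the abstract can be sketched as a toy computation. Everything below is an illustrative stand-in I have invented for exposition: `style_transfer` plays the role of the adversarially trained image translator, `reconstruct_and_render` stands in for the reconstruction network followed by a differentiable renderer, and the linear weights replace actual network parameters. The key point it illustrates is that, once both images live in the rendered-image domain, a direct pixel loss is possible without silhouette supervision.

```python
import numpy as np

rng = np.random.default_rng(0)

def style_transfer(image, w):
    # Stand-in for the adversarial translator:
    # maps the input-image domain to the rendered-image domain.
    return np.tanh(image @ w)

def reconstruct_and_render(image, v):
    # Stand-in for the 3D reconstruction network
    # followed by a differentiable renderer.
    return np.tanh(image @ v)

def reconstruction_loss(image, w, v):
    # Both outputs live in the rendered-image domain, so a plain
    # pixel-wise loss between them is well-defined; this is the
    # quantity that would drive the reconstruction network's training.
    translated = style_transfer(image, w)
    rendered = reconstruct_and_render(image, v)
    return float(np.mean((translated - rendered) ** 2))

image = rng.normal(size=(8, 4))  # toy "input image with background"
w = rng.normal(size=(4, 4))      # toy translator parameters
v = rng.normal(size=(4, 4))      # toy reconstruction/renderer parameters
loss = reconstruction_loss(image, w, v)
```

In the paper, the same comparison is made between high-dimensional images, with the translator trained adversarially and gradients flowing through the differentiable renderer back into the reconstruction network; the sketch only mirrors the shape of the loss.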
Subject (DDC)
004 Computer Science
Keywords
Geometry, Training, Computer vision, Visualization, Three-dimensional displays, Pipelines, Reconstruction algorithms
Conference
2022 IEEE Winter Conference on Applications of Computer Vision, Jan 4, 2022 - Jan 8, 2022, Waikoloa, Hawaii
Cite This
ISO 690
PETERSEN, Felix, Bastian GOLDLÜCKE, Oliver DEUSSEN, Hilde KUEHNE, 2022. Style Agnostic 3D Reconstruction via Adversarial Style Transfer. 2022 IEEE Winter Conference on Applications of Computer Vision. Waikoloa, Hawaii, Jan 4, 2022 - Jan 8, 2022. In: 2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii. Piscataway : IEEE, pp. 2273-2282. ISBN 978-1-66540-915-5. Available under: doi: 10.1109/WACV51458.2022.00233
BibTex
@inproceedings{Petersen2022Style-58150,
  year={2022},
  doi={10.1109/WACV51458.2022.00233},
  title={Style Agnostic 3D Reconstruction via Adversarial Style Transfer},
  isbn={978-1-66540-915-5},
  publisher={IEEE},
  address={Piscataway},
  booktitle={2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii},
  pages={2273--2282},
  author={Petersen, Felix and Goldlücke, Bastian and Deussen, Oliver and Kuehne, Hilde}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/58150">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/58150"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Deussen, Oliver</dc:contributor>
    <dcterms:title>Style Agnostic 3D Reconstruction via Adversarial Style Transfer</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-26T09:28:47Z</dc:date>
    <dcterms:issued>2022</dcterms:issued>
    <dc:creator>Deussen, Oliver</dc:creator>
    <dc:creator>Petersen, Felix</dc:creator>
    <dc:contributor>Petersen, Felix</dc:contributor>
    <dc:contributor>Kuehne, Hilde</dc:contributor>
    <dcterms:abstract xml:lang="eng">Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables differentiable-rendering-based learning of 3D objects from images with backgrounds without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that translates the input image domain into the rendered image domain. This allows us to directly compare a translated image with the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and outperforms constrained methods for single-view 3D object reconstruction on this task.</dcterms:abstract>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-26T09:28:47Z</dcterms:available>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:creator>Kuehne, Hilde</dc:creator>
  </rdf:Description>
</rdf:RDF>
Bibliography of Konstanz
Yes
Refereed
Yes