Publication:

Style Agnostic 3D Reconstruction via Adversarial Style Transfer

Files

There are no files associated with this document.

Date

2022

Publication type
Contribution to a conference proceedings
Publication status
Published

Published in

2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii. Piscataway: IEEE, 2022, pp. 2273-2282. ISBN 978-1-66540-915-5. Available under: doi: 10.1109/WACV51458.2022.00233

Abstract

Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables differentiable rendering-based learning of 3D objects from images with backgrounds without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that allows us to translate the input image domain into the rendered image domain. This lets us directly compare a translated image with the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and provides better performance than constrained methods for single-view 3D object reconstruction on this task.
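The adversarial objective sketched in the abstract — a discriminator that learns to tell differentiable renderings apart from style-translated input images, while the translator is trained to make its outputs indistinguishable from renderings — can be illustrated with a minimal scalar example. All names here (`adversarial_losses`, the logit arguments) are hypothetical stand-ins for illustration, not the authors' implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, label):
    """Binary cross-entropy for a single predicted probability."""
    eps = 1e-12
    return -(label * math.log(p + eps) + (1.0 - label) * math.log(1.0 - p + eps))

def adversarial_losses(d_logit_render, d_logit_translated):
    """Return (discriminator_loss, translator_loss) given the
    discriminator's logits on a rendered image and on a style-translated
    input image. Renderings are labeled 1 (rendered domain),
    translations 0; the translator minimizes the non-saturating loss
    that rewards translations classified as renderings."""
    p_render = sigmoid(d_logit_render)
    p_trans = sigmoid(d_logit_translated)
    d_loss = bce(p_render, 1.0) + bce(p_trans, 0.0)
    g_loss = bce(p_trans, 1.0)
    return d_loss, g_loss
```

In the full pipeline these logits would come from a convolutional discriminator applied to whole images, and the translated image would additionally be compared pixel-wise to the differentiable rendering to supervise the reconstruction network.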

Subject (DDC)
004 Computer science

Keywords

Geometry, Training, Computer vision, Visualization, Three-dimensional displays, Pipelines, Reconstruction algorithms

Conference

2022 IEEE Winter Conference on Applications of Computer Vision, Jan 4, 2022 - Jan 8, 2022, Waikoloa, Hawaii

Cite

ISO 690
PETERSEN, Felix, Bastian GOLDLÜCKE, Oliver DEUSSEN, Hilde KUEHNE, 2022. Style Agnostic 3D Reconstruction via Adversarial Style Transfer. 2022 IEEE Winter Conference on Applications of Computer Vision. Waikoloa, Hawaii, Jan 4, 2022 - Jan 8, 2022. In: 2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii. Piscataway: IEEE, 2022, pp. 2273-2282. ISBN 978-1-66540-915-5. Available under: doi: 10.1109/WACV51458.2022.00233
BibTex
@inproceedings{Petersen2022Style-58150,
  year={2022},
  doi={10.1109/WACV51458.2022.00233},
  title={Style Agnostic 3D Reconstruction via Adversarial Style Transfer},
  isbn={978-1-66540-915-5},
  publisher={IEEE},
  address={Piscataway},
  booktitle={2022 IEEE Winter Conference on Applications of Computer Vision : WACV 2022 : proceedings : 4 - 8 January 2022, Waikoloa, Hawaii},
  pages={2273--2282},
  author={Petersen, Felix and Goldlücke, Bastian and Deussen, Oliver and Kuehne, Hilde}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/58150">
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/58150"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:contributor>Deussen, Oliver</dc:contributor>
    <dcterms:title>Style Agnostic 3D Reconstruction via Adversarial Style Transfer</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-26T09:28:47Z</dc:date>
    <dcterms:issued>2022</dcterms:issued>
    <dc:creator>Deussen, Oliver</dc:creator>
    <dc:creator>Petersen, Felix</dc:creator>
    <dc:contributor>Petersen, Felix</dc:contributor>
    <dc:contributor>Kuehne, Hilde</dc:contributor>
    <dcterms:abstract xml:lang="eng">Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision. Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image. This can be scene information or constraints such as object silhouettes, uniform backgrounds, material, texture, and lighting. In this paper, we propose an approach that enables a differentiable rendering-based learning of 3D objects from images with backgrounds without the need for silhouette supervision. Instead of trying to render an image close to the input, we propose an adversarial style-transfer and domain adaptation pipeline that allows to translate the input image domain to the rendered image domain. This allows us to directly compare between a translated image and the differentiable rendering of a 3D object reconstruction in order to train the 3D object reconstruction network. We show that the approach learns 3D geometry from images with backgrounds and provides a better performance than constrained methods for single-view 3D object reconstruction on this task.</dcterms:abstract>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2022-07-26T09:28:47Z</dcterms:available>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:creator>Kuehne, Hilde</dc:creator>
  </rdf:Description>
</rdf:RDF>

University bibliography
Yes
Peer-reviewed
Yes