A Superresolution Framework for High-Accuracy Multiview Reconstruction

Files
There are no files associated with this document.
Date
2014
Authors
Goldlücke, Bastian
Aubry, Mathieu
Kolev, Kalin
Cremers, Daniel
Publication type
Journal article
Publication status
Published
Published in
International Journal of Computer Vision. 2014, 106(2), pp. 172-191. ISSN 0920-5691. eISSN 1573-1405. Available under: doi: 10.1007/s11263-013-0654-8
Abstract

We present a variational framework to estimate super-resolved texture maps on a 3D geometry model of a surface from multiple images. Given the calibrated images and the reconstructed geometry, the proposed functional is convex in the super-resolution texture. Using a conformal atlas of the surface, we transform the model from the curved geometry to flat charts and solve it with state-of-the-art, provably convergent primal–dual algorithms. To improve image alignment and texture quality, we extend the functional to also optimize for a normal displacement map on the surface as well as the camera calibration parameters. Since the sub-problems for displacement and camera parameters are non-convex, we resort to relaxation schemes in order to robustly estimate a minimizer via sequential convex programming. Experimental results confirm that the proposed super-resolution framework recovers textured models with a significantly higher level of detail than the individual input images.
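The abstract does not state the energy explicitly. As a minimal sketch in LaTeX notation, assuming a texture map T on the surface Γ, n calibrated input images I_i with image domains Ω_i, a blur kernel b, a downsampling operator D, per-view warps β_i from image pixels to texture coordinates, and a regularization weight λ (all of these symbols are illustrative assumptions, not notation taken from the paper), such a convex functional could take the form

    E(T) = \lambda \int_{\Gamma} \lVert \nabla T \rVert \, \mathrm{d}s
           + \sum_{i=1}^{n} \int_{\Omega_i} \bigl| \, D\bigl(b * (T \circ \beta_i)\bigr) - I_i \, \bigr| \, \mathrm{d}x .

Each data term is a convex function of T composed with linear operators (warping, blurring, downsampling) and the total-variation regularizer is convex, so the overall energy is convex in T and amenable to a provably convergent primal–dual scheme, as stated above. The warps β_i depend on the geometry and camera calibration, which is consistent with the abstract's remark that the sub-problems for the displacement map and the camera parameters are non-convex.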

Subject (DDC)
004 Computer science
Keywords
Multi-view 3D reconstruction, Texture reconstruction, Super-resolution, Camera calibration, Variational methods
Cite
ISO 690
GOLDLÜCKE, Bastian, Mathieu AUBRY, Kalin KOLEV, Daniel CREMERS, 2014. A Superresolution Framework for High-Accuracy Multiview Reconstruction. In: International Journal of Computer Vision. 2014, 106(2), pp. 172-191. ISSN 0920-5691. eISSN 1573-1405. Available under: doi: 10.1007/s11263-013-0654-8
BibTeX
@article{Goldlucke2014Super-29111,
  year={2014},
  doi={10.1007/s11263-013-0654-8},
  title={A Superresolution Framework for High-Accuracy Multiview Reconstruction},
  number={2},
  volume={106},
  issn={0920-5691},
  journal={International Journal of Computer Vision},
  pages={172--191},
  author={Goldlücke, Bastian and Aubry, Mathieu and Kolev, Kalin and Cremers, Daniel},
  note={received DAGM main prize (best paper award)}
}
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/29111">
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-10-14T12:26:35Z</dcterms:available>
    <dc:creator>Kolev, Kalin</dc:creator>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:contributor>Cremers, Daniel</dc:contributor>
    <dc:contributor>Aubry, Mathieu</dc:contributor>
    <dcterms:issued>2014</dcterms:issued>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:language>eng</dc:language>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:abstract xml:lang="eng">We present a variational framework to estimate super-resolved texture maps on a 3D geometry model of a surface from multiple images. Given the calibrated images and the reconstructed geometry, the proposed functional is convex in the super-resolution texture. Using a conformal atlas of the surface, we transform the model from the curved geometry to the flat charts and solve it using state-of-the-art and provably convergent primal–dual algorithms. In order to improve image alignment and quality of the texture, we extend the functional to also optimize for a normal displacement map on the surface as well as the camera calibration parameters. Since the sub-problems for displacement and camera parameters are non-convex, we revert to relaxation schemes in order to robustly estimate a minimizer via sequential convex programming. Experimental results confirm that the proposed super-resolution framework allows to recover textured models with significantly higher level-of-detail than the individual input images.</dcterms:abstract>
    <dc:creator>Aubry, Mathieu</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Cremers, Daniel</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-10-14T12:26:35Z</dc:date>
    <bibo:uri rdf:resource="http://kops.uni-konstanz.de/handle/123456789/29111"/>
    <dc:contributor>Kolev, Kalin</dc:contributor>
    <dcterms:title>A Superresolution Framework for High-Accuracy Multiview Reconstruction</dcterms:title>
  </rdf:Description>
</rdf:RDF>
Comment on the publication
Received the DAGM main prize (best paper award)
University bibliography
Yes