Publication: Sparse Views, Near Light : A Practical Paradigm for Uncalibrated Point-light Photometric Stereo
Abstract
Neural approaches have shown significant progress on camera-based reconstruction, but they require either a fairly dense sampling of the viewing sphere or pre-training on an existing dataset, thereby limiting their generalizability. In contrast, photometric stereo (PS) approaches have shown great potential for achieving high-quality reconstruction under sparse viewpoints. Yet, they are impractical because they typically require tedious laboratory conditions, are restricted to dark rooms, and are often multi-staged, making them subject to accumulated errors. To address these shortcomings, we propose an end-to-end uncalibrated multi-view PS framework for reconstructing high-resolution shapes acquired from sparse viewpoints in a real-world environment. We relax the dark-room assumption and allow a combination of static ambient lighting and dynamic near LED lighting, thereby enabling easy data capture outside the lab. Experimental validation confirms that our framework outperforms existing baseline approaches in the regime of sparse viewpoints by a large margin. This makes it possible to bring high-accuracy 3D reconstruction from the dark room to the real world, while maintaining a reasonable data capture complexity.
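The capture setup described in the abstract — a static ambient term plus a moving near point light (LED) — can be sketched as a simple Lambertian image-formation model with inverse-square falloff. This is an illustrative approximation only, not the paper's exact formulation; the function name, the anisotropy exponent `mu`, and the uniform ambient term are assumptions for the sketch.

```python
import numpy as np

def near_light_irradiance(points, normals, albedo, led_pos, led_dir, mu, ambient):
    """Sketch of per-point image irradiance under one near LED plus ambient light.

    points, normals : (N, 3) surface points and unit normals
    albedo          : (N, 1) Lambertian albedo
    led_pos         : (3,)   LED position (near light: finite distance)
    led_dir         : (3,)   unit principal direction of the LED
    mu              : scalar anisotropy exponent of the LED falloff
    ambient         : scalar static ambient intensity (assumed uniform here)
    """
    to_led = led_pos - points                                   # (N, 3)
    dist = np.linalg.norm(to_led, axis=1, keepdims=True)        # (N, 1)
    L = to_led / dist                                           # unit light dirs
    # Anisotropic LED intensity profile: (led_dir . -L)^mu, clamped at 0.
    aniso = np.clip(-(L @ led_dir), 0.0, None)[:, None] ** mu   # (N, 1)
    # Lambertian shading, clamped to model attached shadows.
    shading = np.clip(np.sum(normals * L, axis=1, keepdims=True), 0.0, None)
    # Inverse-square attenuation of the near light, plus the ambient term.
    return albedo * (aniso / dist**2 * shading + ambient)
```

In an uncalibrated setting, `led_pos`, `led_dir`, and `mu` would themselves be unknowns estimated jointly with shape and albedo rather than measured in the lab.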
Cite
ISO 690
BRAHIMI, Mohammed, Bjoern HAEFNER, Zhenzhang YE, Bastian GOLDLÜCKE, Daniel CREMERS, 2024. Sparse Views, Near Light : A Practical Paradigm for Uncalibrated Point-light Photometric Stereo. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024. Seattle, WA, June 17, 2024 - June 21, 2024. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, NJ: IEEE, 2024, pp. 11862-11872. Available at: doi: 10.1109/CVPR52733.2024.01127
BibTeX
@inproceedings{Brahimi2024Spars-70085,
  year      = {2024},
  doi       = {10.1109/CVPR52733.2024.01127},
  title     = {Sparse Views, Near Light : A Practical Paradigm for Uncalibrated Point-light Photometric Stereo},
  publisher = {IEEE},
  address   = {Piscataway, NJ},
  booktitle = {2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages     = {11862--11872},
  author    = {Brahimi, Mohammed and Haefner, Bjoern and Ye, Zhenzhang and Goldlücke, Bastian and Cremers, Daniel}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/70085">
    <dc:creator>Cremers, Daniel</dc:creator>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:abstract>Neural approaches have shown a significant progress on camera-based reconstruction. But they require either a fairly dense sampling of the viewing sphere, or pre-training on an existing dataset, thereby limiting their generalizability. In contrast, photometric stereo (PS) approaches have shown great potential for achieving high-quality reconstruction under sparse viewpoints. Yet, they are impractical because they typically require tedious laboratory conditions, are restricted to dark rooms, and often multi-staged, making them subject to accumulated errors. To address these shortcomings, we propose an end-to-end uncalibrated multi-view PS framework for reconstructing high-resolution shapes acquired from sparse viewpoints in a real-world environment. We relax the dark room assumption, and allow a combination of static ambient lighting and dynamic near LED lighting, thereby enabling easy data capture outside the lab. Experimental validation confirms that it outperforms existing baseline approaches in the regime of sparse viewpoints by a large margin. This allows to bring high-accuracy 3D reconstruction from the dark room to the real world, while maintaining a reasonable data capture complexity.</dcterms:abstract>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Haefner, Bjoern</dc:creator>
    <dcterms:issued>2024</dcterms:issued>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:contributor>Brahimi, Mohammed</dc:contributor>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-06-07T11:07:05Z</dcterms:available>
    <dc:contributor>Haefner, Bjoern</dc:contributor>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dcterms:title>Sparse Views, Near Light : A Practical Paradigm for Uncalibrated Point-light Photometric Stereo</dcterms:title>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2024-06-07T11:07:05Z</dc:date>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:creator>Ye, Zhenzhang</dc:creator>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:language>eng</dc:language>
    <dc:contributor>Ye, Zhenzhang</dc:contributor>
    <dc:contributor>Cremers, Daniel</dc:contributor>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/70085"/>
    <dc:creator>Brahimi, Mohammed</dc:creator>
  </rdf:Description>
</rdf:RDF>