Epipolar Plane Image Refocusing for Improved Depth Estimation and Occlusion Handling
Abstract
In contrast to traditional imaging, the higher dimensionality of a light field offers directional information about the captured intensity. This information can be leveraged to estimate the disparity of 3D points in the captured scene. A recent approach to estimate disparities analyzes the structure tensor and evaluates the orientation on epipolar plane images (EPIs). While the resulting disparity maps are generally satisfactory, the allowed disparity range is small and occlusion boundaries can become smeared and noisy. In this paper, we first introduce an approach to extend the total allowed disparity range. This allows, for example, the investigation of camera setups with a larger baseline, like in the Middlebury 3D light fields. Second, we introduce a method to handle the difficulties arising at boundaries between fore- and background objects to achieve sharper edge transitions.
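The two ingredients the abstract refers to — estimating disparity from the orientation of lines in an EPI via the structure tensor, and shearing ("refocusing") the EPI so that large disparities are mapped back into the small range the orientation estimator handles well — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function names `epi_disparity` and `refocus_epi`, the smoothing scales, and the sign convention for disparity (`epi[s, u] = f(u - d * s)`) are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def epi_disparity(epi, sigma_grad=0.8, sigma_avg=1.6):
    """Estimate per-pixel disparity on an EPI (axis 0: view s, axis 1: pixel u)
    from the orientation of the local structure tensor."""
    epi = epi.astype(np.float64)
    # Smooth, then take gradients along the view (s) and spatial (u) axes.
    gs, gu = np.gradient(gaussian_filter(epi, sigma_grad))
    # Structure tensor components, averaged over a local neighbourhood.
    Jss = gaussian_filter(gs * gs, sigma_avg)
    Juu = gaussian_filter(gu * gu, sigma_avg)
    Jsu = gaussian_filter(gs * gu, sigma_avg)
    # Orientation angle of the dominant EPI line; its slope du/ds is the
    # disparity (under the assumed convention epi[s, u] = f(u - d * s)).
    phi = 0.5 * np.arctan2(-2.0 * Jsu, Juu - Jss)
    disparity = np.tan(phi)
    # Coherence in [0, 1]: how strongly oriented (hence reliable) the patch is.
    coherence = np.sqrt((Juu - Jss) ** 2 + 4.0 * Jsu ** 2) / (Juu + Jss + 1e-12)
    return disparity, coherence

def refocus_epi(epi, d0):
    """Shear the EPI so that a point with disparity d0 becomes a vertical
    line (slope 0); remaining slopes are then d - d0, which keeps them
    inside the small range the orientation estimate handles well."""
    out = np.empty(epi.shape, dtype=np.float64)
    for s in range(epi.shape[0]):
        # Row s is resampled at u + d0 * s (linear interpolation).
        out[s] = shift(epi[s].astype(np.float64), -d0 * s, order=1, mode='nearest')
    return out
```

Refocusing at several candidate values `d0`, running the orientation estimate on each sheared EPI, and merging the per-pixel results by coherence is one plausible way to cover a large total disparity range with an estimator that is only reliable for small slopes.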
Cite (ISO 690)
DIEBOLD, Maximilian, Bastian GOLDLÜCKE, 2013. Epipolar Plane Image Refocusing for Improved Depth Estimation and Occlusion Handling. Annual Workshop on Vision, Modeling and Visualization : VMV. Lugano, 11 Sep 2013 - 13 Sep 2013. In: FELLNER, Dieter, ed. VMV 2013 : Vision Modeling and Visualization. Goslar: Eurographics Association, 2013, pp. 145-152. ISBN 978-3-905674-51-4. Available under: doi: 10.2312/PE.VMV.VMV13.145-152
BibTeX
@inproceedings{Diebold2013Epipo-29115,
  title     = {Epipolar Plane Image Refocusing for Improved Depth Estimation and Occlusion Handling},
  author    = {Diebold, Maximilian and Goldlücke, Bastian},
  editor    = {Dieter Fellner},
  booktitle = {VMV 2013 : Vision Modeling and Visualization},
  publisher = {Eurographics Association},
  address   = {Goslar},
  year      = {2013},
  pages     = {145--152},
  isbn      = {978-3-905674-51-4},
  doi       = {10.2312/PE.VMV.VMV13.145-152}
}
RDF
<rdf:RDF xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:bibo="http://purl.org/ontology/bibo/"
         xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
         xmlns:foaf="http://xmlns.com/foaf/0.1/"
         xmlns:void="http://rdfs.org/ns/void#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/29115">
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-10-15T06:25:13Z</dc:date>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2014-10-15T06:25:13Z</dcterms:available>
    <dcterms:title>Epipolar Plane Image Refocusing for Improved Depth Estimation and Occlusion Handling</dcterms:title>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <bibo:uri rdf:resource="http://kops.uni-konstanz.de/handle/123456789/29115"/>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:creator>Diebold, Maximilian</dc:creator>
    <dcterms:abstract xml:lang="eng">In contrast to traditional imaging, the higher dimensionality of a light field offers directional information about the captured intensity. This information can be leveraged to estimate the disparity of 3D points in the captured scene. A recent approach to estimate disparities analyzes the structure tensor and evaluates the orientation on epipolar plane images (EPIs). While the resulting disparity maps are generally satisfying, the allowed disparity range is small and occlusion boundaries can become smeared and noisy. In this paper, we first introduce an approach to extend the total allowed disparity range. This allows for example the investigation of camera setups with a larger baseline, like in the Middlebury 3D light fields. Second, we introduce a method to handle the difficulties arising at boundaries between fore- and background objects to achieve sharper edge transitions.</dcterms:abstract>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dc:contributor>Diebold, Maximilian</dc:contributor>
    <dc:language>eng</dc:language>
    <dcterms:issued>2013</dcterms:issued>
  </rdf:Description>
</rdf:RDF>