An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution

Files
No files are associated with this document.
Date
2019
Research funding
European Union (EU): 336978
Project
LIA - Light Field Imaging and Analysis
Publication type
Contribution to conference proceedings
Publication status
Published
Published in
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition workshops : CVPRW 2019 : proceedings : 16-20 June 2019, Long Beach, California. Piscataway, NJ: IEEE, 2019, pp. 1853-1861. ISBN 978-1-72812-506-0. Available under: doi: 10.1109/CVPRW.2019.00236
Abstract

When capturing a light field of a scene, one typically faces a trade-off between spatial and angular resolution. Fortunately, light fields are also a rich source of information for solving the problem of super-resolution. In contrast to single-image approaches, where high-frequency content has to be hallucinated as the most likely source of the downscaled version, the sub-aperture views of a light field allow an actual reconstruction of the details that were removed by downsampling. In this paper, we propose a three-dimensional generative adversarial autoencoder network to recover a high-resolution light field from a low-resolution light field with a sparse set of viewpoints. We require only three views along both the horizontal and the vertical axis to increase angular resolution by a factor of three while simultaneously increasing spatial resolution by a factor of either two or four in each direction.
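
As an illustration of the pipeline described in the abstract, the sketch below shows a hypothetical 3D convolutional encoder-decoder over an epipolar volume, with a 3D patch discriminator supplying the adversarial loss. It is written in PyTorch and is not the authors' published architecture: the layer counts, channel widths and the trilinear upsampling step are assumptions chosen only to reproduce the stated factors (3 input views per axis, angular factor 3, spatial factor 2 or 4).

# Illustrative sketch only: assumed layers, not the published network.
import torch
import torch.nn as nn

class EpipolarVolumeAutoencoder(nn.Module):
    """Maps a sparse, low-resolution stack of views to a dense, high-resolution one.
    One row of sub-aperture views is stacked along the depth axis as an epipolar volume."""
    def __init__(self, in_views=3, out_views=9, spatial_factor=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # Upsample the angular (view) axis and both spatial axes in one step.
        self.upsample = nn.Upsample(
            scale_factor=(out_views / in_views, spatial_factor, spatial_factor),
            mode="trilinear", align_corners=False)
        self.decoder = nn.Sequential(
            nn.Conv3d(64, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):  # x: (batch, 3, in_views, H, W)
        return self.decoder(self.upsample(self.encoder(x)))

class VolumeDiscriminator(nn.Module):
    """3D patch discriminator providing the adversarial loss signal."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Shapes mirroring the abstract: 3 views at 64x64 in, 9 views at 128x128 out.
lf_low = torch.randn(1, 3, 3, 64, 64)
generator = EpipolarVolumeAutoencoder(in_views=3, out_views=9, spatial_factor=2)
lf_high = generator(lf_low)               # -> (1, 3, 9, 128, 128)
score = VolumeDiscriminator()(lf_high)    # per-patch real/fake logits

In this sketch a single row of sub-aperture views is stacked along the depth axis of the volume; a full light field additionally carries a second angular axis, which this illustration omits.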

Subject area (DDC)
004 Computer science
Conference
CVPRW 2019, 16 June 2019 - 20 June 2019, Long Beach, California
Cite
ISO 690
ZHU, Minchen, Anna ALPEROVICH, Ole JOHANNSEN, Antonin SULC, Bastian GOLDLÜCKE, 2019. An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution. CVPRW 2019. Long Beach, California, 16 June 2019 - 20 June 2019. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition workshops : CVPRW 2019 : proceedings : 16-20 June 2019, Long Beach, California. Piscataway, NJ: IEEE, 2019, pp. 1853-1861. ISBN 978-1-72812-506-0. Available under: doi: 10.1109/CVPRW.2019.00236
BibTeX
@inproceedings{Zhu2019-06Epipo-51260,
  year={2019},
  doi={10.1109/CVPRW.2019.00236},
  title={An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution},
  isbn={978-1-72812-506-0},
  publisher={IEEE},
  address={Piscataway, NJ},
  booktitle={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition workshops : CVPRW 2019 : proceedings : 16-20 June 2019, Long Beach, California},
  pages={1853--1861},
  author={Zhu, Minchen and Alperovich, Anna and Johannsen, Ole and Sulc, Antonin and Goldlücke, Bastian}
}
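
For completeness, a minimal sketch of reading the BibTeX record above programmatically, assuming the third-party bibtexparser package (v1 API); the file name references.bib is hypothetical and stands for a file containing the entry above.

# Minimal sketch: parse the BibTeX record with bibtexparser (v1 API, assumed installed).
import bibtexparser

with open("references.bib") as f:   # hypothetical file holding the entry above
    db = bibtexparser.load(f)

entry = db.entries[0]
print(entry["ID"])      # Zhu2019-06Epipo-51260
print(entry["title"])   # An Epipolar Volume Autoencoder With Adversarial Loss ...
print(entry["doi"])     # 10.1109/CVPRW.2019.00236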
RDF
<rdf:RDF
    xmlns:dcterms="http://purl.org/dc/terms/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:bibo="http://purl.org/ontology/bibo/"
    xmlns:dspace="http://digital-repositories.org/ontologies/dspace/0.1.0#"
    xmlns:foaf="http://xmlns.com/foaf/0.1/"
    xmlns:void="http://rdfs.org/ns/void#"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema#" > 
  <rdf:Description rdf:about="https://kops.uni-konstanz.de/server/rdf/resource/123456789/51260">
    <dc:contributor>Alperovich, Anna</dc:contributor>
    <dc:creator>Alperovich, Anna</dc:creator>
    <dcterms:abstract xml:lang="eng">When capturing a light field of a scene, one typically faces a trade-off between more spatial or more angular resolution. Fortunately, light fields are also a rich source of information for solving the problem of super-resolution. Contrary to single image approaches, where high-frequency content has to be hallucinated to be the most likely source of the downscaled version, sub-aperture views from the light field can help with an actual reconstruction of those details that have been removed by downsampling. In this paper, we propose a three-dimensional generative adversarial autoencoder network to recover the high-resolution light field from a low-resolution light field with a sparse set of viewpoints. We require only three views along both horizontal and vertical axis to increase angular resolution by a factor of three while at the same time increasing spatial resolution by a factor of either two or four in each direction, respectively.</dcterms:abstract>
    <void:sparqlEndpoint rdf:resource="http://localhost/fuseki/dspace/sparql"/>
    <foaf:homepage rdf:resource="http://localhost:8080/"/>
    <dcterms:isPartOf rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dcterms:issued>2019-06</dcterms:issued>
    <dspace:isPartOfCollection rdf:resource="https://kops.uni-konstanz.de/server/rdf/resource/123456789/36"/>
    <dc:creator>Johannsen, Ole</dc:creator>
    <dcterms:title>An Epipolar Volume Autoencoder With Adversarial Loss for Deep Light Field Super-Resolution</dcterms:title>
    <dcterms:available rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2020-10-08T08:55:54Z</dcterms:available>
    <bibo:uri rdf:resource="https://kops.uni-konstanz.de/handle/123456789/51260"/>
    <dc:contributor>Johannsen, Ole</dc:contributor>
    <dc:creator>Sulc, Antonin</dc:creator>
    <dc:date rdf:datatype="http://www.w3.org/2001/XMLSchema#dateTime">2020-10-08T08:55:54Z</dc:date>
    <dc:creator>Goldlücke, Bastian</dc:creator>
    <dc:contributor>Sulc, Antonin</dc:contributor>
    <dc:language>eng</dc:language>
    <dc:contributor>Zhu, Minchen</dc:contributor>
    <dc:contributor>Goldlücke, Bastian</dc:contributor>
    <dc:creator>Zhu, Minchen</dc:creator>
  </rdf:Description>
</rdf:RDF>
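
Likewise, a minimal sketch, assuming the rdflib package, of parsing the RDF/XML record above and listing its Dublin Core title, issue date and creators; the file name record.rdf is hypothetical.

# Minimal sketch: read the Dublin Core metadata from the RDF/XML record (rdflib assumed installed).
from rdflib import Graph, Namespace

DC = Namespace("http://purl.org/dc/elements/1.1/")
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.parse("record.rdf", format="xml")   # hypothetical file holding the <rdf:RDF> block above

print("Title: ", next(g.objects(predicate=DCTERMS.title)))
print("Issued:", next(g.objects(predicate=DCTERMS.issued)))
for creator in g.objects(predicate=DC.creator):
    print("Creator:", creator)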
University bibliography
Yes