Marniok, Nico
Publication Search Results
Real-Time Variational Range Image Fusion and Visualization for Large-Scale Scenes Using GPU Hash Tables
2018, Marniok, Nico, Goldlücke, Bastian
We present a real-time pipeline for large-scale 3D scene reconstruction from a single moving RGB-D camera, together with interactive visualization. Our approach combines a time- and space-efficient data structure capable of representing large scenes, a local variational update algorithm, and a visualization system. The environment's structure is reconstructed by integrating the depth image of each camera view into a sparse volume representation as a truncated signed distance function, which is organized via a hash table. Noise from real-world data is efficiently eliminated by immediately performing local variational refinements on newly integrated data. The whole pipeline runs in real time on consumer-grade hardware and allows for simultaneous inspection of the currently reconstructed scene.
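To illustrate the kind of data structure such a pipeline relies on, here is a minimal CPU sketch of TSDF fusion into a sparse, hashed voxel-block grid. All names (VoxelBlock, TSDFVolume, integrate_point) and constants are hypothetical; the paper's version runs on the GPU with a concurrent hash table and adds local variational refinement, neither of which is reproduced here.

```python
import numpy as np

# Hypothetical constants for a toy sparse TSDF volume.
BLOCK = 8        # voxels per block edge
VOXEL = 0.01     # voxel size in meters
TRUNC = 0.04     # truncation band of the signed distance function

class VoxelBlock:
    def __init__(self):
        self.tsdf = np.ones((BLOCK, BLOCK, BLOCK), dtype=np.float32)
        self.weight = np.zeros((BLOCK, BLOCK, BLOCK), dtype=np.float32)

class TSDFVolume:
    def __init__(self):
        self.blocks = {}  # hash table: block coordinate -> VoxelBlock

    def integrate_point(self, p_world, signed_dist):
        """Fuse one truncated signed-distance observation at a world point."""
        if abs(signed_dist) > TRUNC:
            return                                         # outside the band
        v = np.floor(p_world / VOXEL).astype(int)          # global voxel index
        key = tuple(v // BLOCK)                            # spatial hash key
        block = self.blocks.setdefault(key, VoxelBlock())  # allocate lazily
        i, j, k = v % BLOCK                                # index inside block
        w = block.weight[i, j, k]
        d = signed_dist / TRUNC                            # normalize to [-1, 1]
        # Standard weighted running-average TSDF update.
        block.tsdf[i, j, k] = (w * block.tsdf[i, j, k] + d) / (w + 1.0)
        block.weight[i, j, k] = min(w + 1.0, 128.0)
```

Because blocks are allocated only when an observation falls inside the truncation band, memory grows with the observed surface area rather than with the bounding volume of the scene.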
An Efficient Octree Design for Local Variational Range Image Fusion
2017, Marniok, Nico, Johannsen, Ole, Goldlücke, Bastian
We present a reconstruction pipeline for a large-scale 3D environment viewed by a single moving RGB-D camera. Our approach combines the advantages of fast, direct, regularization-free depth fusion with those of accurate but costly variational schemes. The scene's depth geometry is extracted from each camera view and efficiently integrated into a large, dense grid as a truncated signed distance function, which is organized in an octree. To account for noisy real-world input data, variational range image integration is performed in local regions of the volume directly on this octree structure. We focus on algorithms that are easily parallelizable on GPUs, allowing the pipeline to be used in real-time scenarios where the user can interactively view the reconstruction and adapt the camera motion as required.
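As a rough illustration of surface-adaptive storage, the sketch below refines a toy octree only where the truncation band passes through a node, so memory concentrates near the surface. Class and method names are hypothetical, and the paper's GPU-parallel layout and local variational integration are not attempted here.

```python
import numpy as np

class OctreeNode:
    def __init__(self, center, half):
        self.center = np.asarray(center, dtype=np.float32)
        self.half = half          # half edge length of this cube
        self.children = None      # None marks a leaf
        self.tsdf, self.weight = 1.0, 0.0

    def insert(self, point, signed_dist, min_half=0.005):
        if self.children is None:
            # Stop refining at the target resolution, or when the sample's
            # distance band does not require a finer cell.
            if self.half <= min_half or abs(signed_dist) > self.half:
                w = self.weight
                self.tsdf = (w * self.tsdf + signed_dist) / (w + 1.0)
                self.weight = w + 1.0
                return
            # Split into eight octants, ordered to match the index below.
            q = self.half / 2.0
            self.children = [
                OctreeNode(self.center + q * np.array((dx, dy, dz)), q)
                for dx in (-1, 1) for dy in (-1, 1) for dz in (-1, 1)
            ]
        # Route the sample to the octant containing the point.
        idx = (int(point[0] > self.center[0]) << 2 |
               int(point[1] > self.center[1]) << 1 |
               int(point[2] > self.center[2]))
        self.children[idx].insert(point, signed_dist, min_half)
```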
Structure-from-Motion-Aware PatchMatch for Adaptive Optical Flow Estimation
2018, Maurer, Daniel, Marniok, Nico, Goldlücke, Bastian, Bruhn, Andrés
Many recent energy-based methods for optical flow estimation rely on a good initialization, typically provided by some kind of feature matching. So far, however, these initial matching approaches have been rather general: they do not incorporate any additional information that could improve the accuracy or robustness of the estimation. In particular, they do not exploit potential cues on the camera poses and the rigid scene motion these poses induce. In the present paper, we tackle this problem. To this end, we propose a novel structure-from-motion-aware PatchMatch approach that, in contrast to existing matching techniques, combines two hierarchical feature matching methods: a recent two-frame PatchMatch approach for optical flow estimation (general motion) and a specifically tailored three-frame PatchMatch approach for rigid scene reconstruction (SfM). While the motion PatchMatch serves as a baseline with good accuracy, the SfM counterpart takes over at occlusions and other regions with insufficient information. Experiments with our novel SfM-aware PatchMatch approach demonstrate its usefulness. They not only show excellent results for all major benchmarks (KITTI 2012/2015, MPI Sintel), but also improvements of up to 50% compared to a PatchMatch approach without structure information.
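The core fusion idea, selecting per pixel whichever proposal explains the data better, can be sketched as follows. The SSD patch cost and all function names are illustrative stand-ins; the paper's hierarchical PatchMatch propagation, three-frame SfM estimation, and subsequent energy-based refinement are not shown.

```python
import numpy as np

def patch_cost(img0, img1, x, y, flow, r=3):
    """Sum of squared differences between a patch and its warped match."""
    h, w = img0.shape
    xi, yi = int(np.rint(x + flow[0])), int(np.rint(y + flow[1]))
    if not (r <= xi < w - r and r <= yi < h - r and
            r <= x < w - r and r <= y < h - r):
        return np.inf  # treat out-of-bounds warps as unmatchable
    p0 = img0[y - r:y + r + 1, x - r:x + r + 1]
    p1 = img1[yi - r:yi + r + 1, xi - r:xi + r + 1]
    return float(np.sum((p0 - p1) ** 2))

def fuse_proposals(img0, img1, flow_pm, flow_sfm):
    """Keep, per pixel, the cheaper of two dense flow proposals."""
    h, w = img0.shape
    fused = flow_pm.copy()
    for y in range(h):
        for x in range(w):
            c_pm = patch_cost(img0, img1, x, y, flow_pm[y, x])
            c_sfm = patch_cost(img0, img1, x, y, flow_sfm[y, x])
            if c_sfm < c_pm:  # SfM proposal takes over, e.g. at occlusions
                fused[y, x] = flow_sfm[y, x]
    return fused
```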
Reflection Separation in Light Fields based on Sparse Coding and Specular Flow
2016, Sulc, Antonin, Alperovich, Anna, Marniok, Nico, Goldlücke, Bastian
We present a method to separate a dichromatic reflection component from diffuse object colors for the set of rays in a 4D light field such that the separation is consistent across all subaperture views. The separation model is based on explaining the observed light field as a sparse linear combination of a constant-color specular term and a small finite set of albedos. Consistency across the light field is achieved by embedding the ray-wise separation into a global optimization framework. On each individual epipolar plane image (EPI), the diffuse coefficients need to be constant along lines which are the projections of the same scene point, while the specular coefficient needs to be constant along the direction of the specular flow within the epipolar volume. We handle both constraints with depth-dependent anisotropic regularizers, and demonstrate promising performance on a number of real-world light fields captured with a Lytro Illum plenoptic camera.
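Ignoring the cross-view regularization, the ray-wise model can be sketched as a small non-negative fit of each color against one specular atom and a toy albedo dictionary. The dictionary, the white specular color, and the use of SciPy's nnls solver are all assumptions for illustration; the paper's sparse coding formulation and the EPI-based anisotropic regularizers are not reproduced.

```python
import numpy as np
from scipy.optimize import nnls

SPECULAR = np.array([1.0, 1.0, 1.0])    # assumed constant specular color
ALBEDOS = np.array([[0.8, 0.2, 0.1],    # toy 3-atom diffuse albedo dictionary
                    [0.1, 0.7, 0.2],
                    [0.2, 0.3, 0.9]])

def separate_ray(color):
    """Split one RGB sample into diffuse and specular components."""
    # Columns: specular atom first, then the diffuse albedo atoms.
    A = np.column_stack([SPECULAR] + [a for a in ALBEDOS])
    coeffs, _ = nnls(A, color)          # non-negative least-squares fit
    specular = coeffs[0] * SPECULAR
    diffuse = A[:, 1:] @ coeffs[1:]
    return diffuse, specular

diffuse, specular = separate_ray(np.array([0.9, 0.5, 0.4]))
```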
Layered Scene Reconstruction from Multiple Light Field Camera Views
2017, Johannsen, Ole, Sulc, Antonin, Marniok, Nico, Goldlücke, Bastian
We propose a framework to infer complete geometry of a scene with strong reflections or hidden by partially transparent occluders from a set of 4D light fields captured with a hand-held light field camera. For this, we first introduce a variant of bundle adjustment specifically tailored to 4D light fields to obtain improved pose parameters. Geometry is recovered in a global framework based on convex optimization for a weighted minimal surface. To allow for non-Lambertian materials and semi-transparent occluders, the point-wise costs are not based on the principle of photo-consistency. Instead, we perform a layer analysis of the light field obtained by finding superimposed oriented patterns in epipolar plane image space to obtain a set of depth hypotheses and confidence scores, which are integrated into a single functional.
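For intuition on the EPI analysis, the sketch below recovers a single dominant orientation per EPI pixel via the structure tensor, yielding a disparity hypothesis and a coherence-based confidence. This is a deliberately simplified stand-in: the paper detects superimposed oriented patterns (multiple depth layers), which a single structure tensor cannot separate, and the slope-to-disparity mapping below ignores the actual camera geometry.

```python
import numpy as np
from scipy.signal import convolve2d

def epi_depth_hypothesis(epi, k=3):
    """Per-pixel disparity hypotheses and confidences for one 2D EPI."""
    gy, gx = np.gradient(epi.astype(np.float64))  # (view, spatial) derivatives
    box = np.ones((k, k)) / (k * k)               # small smoothing window
    smooth = lambda a: convolve2d(a, box, mode="same", boundary="symm")
    Jxx, Jyy, Jxy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
    # Dominant local orientation of the EPI stripes (structure tensor).
    angle = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)
    disparity = np.tan(angle)                     # simplified slope-to-disparity
    # Coherence in [0, 1]: high where one clean orientation dominates.
    coherence = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2) / (Jxx + Jyy + 1e-9)
    return disparity, coherence
```

In a pipeline of the kind described above, such per-pixel hypotheses and confidence scores would then be integrated as point-wise costs into the global minimal-surface functional.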