Updated on November 15, 2024

ISBN 978-3-8439-2116-9, Informatik series

€72.00 incl. VAT, plus shipping

Michael Zollhöfer
Real-Time Reconstruction of Static and Dynamic Scenes

196 pages, dissertation, Universität Erlangen-Nürnberg (2014), softcover, A5

Abstract

With the release of the Microsoft Kinect for the Xbox 360, an affordable real-time RGB-D sensor is now available on the mass market. This makes techniques and algorithms that were previously accessible only to researchers and enthusiasts available for everyday use by a broad audience. Applications range from the acquisition of detailed, high-quality reconstructions of everyday objects to tracking the complex motion of people. In addition, the captured data can be exploited directly to build virtual reality applications, e.g. virtual mirrors, and can be used for gesture control of devices and for motion analysis. To be easy to use in everyday life, such applications should be intuitive to control and provide feedback at real-time rates.

In this dissertation, we present new techniques and algorithms for building three-dimensional representations of arbitrary objects using only a single commodity RGB-D sensor, for manually editing the acquired reconstructions, and for tracking the non-rigid motion of physically deforming objects at real-time rates. We start by proposing the use of a statistical prior to obtain high-quality reconstructions of the human head from only a single low-quality depth frame of a commodity sensor. We extend this approach to obtain even higher-quality reconstructions at real-time rates by exploiting all information of a continuous RGB-D stream and jointly optimizing for shape, albedo, and illumination parameters. Thereafter, we show that a moving sensor can be used to obtain super-resolution reconstructions of arbitrary objects at sensor rate by fusing all depth observations. We present strategies that handle a virtually unlimited reconstruction volume by combining a new sparse scene representation with an efficient streaming approach.

In addition, we present a handle-based deformation paradigm that allows the user to edit the captured geometry, which may consist of millions of polygons, through an interactive and intuitive modeling metaphor. Finally, we demonstrate that the motion of arbitrary non-rigidly deforming physical objects can be tracked at real-time rates using a custom high-quality RGB-D sensor.
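The joint optimization over shape, albedo, and illumination mentioned above can be pictured as minimizing a weighted sum of a depth-fitting term, a shading-based color term, and a statistical prior on the shape coefficients. The energy below is an illustrative schematic, not a formula quoted from the thesis; the weights w_g, w_c, w_r and the Lambertian spherical-harmonics shading model are assumptions based on common practice in shading-based refinement:

    E(\mathbf{P}) = w_g \sum_i \big\| d_i(\mathbf{P}) - d_i^{\mathrm{obs}} \big\|^2
                  + w_c \sum_i \Big\| a_i \sum_k l_k \, H_k(\mathbf{n}_i(\mathbf{P})) - I(\mathbf{x}_i) \Big\|^2
                  + w_r \, \|\boldsymbol{\alpha}\|_{\Sigma^{-1}}^2

Here P collects the statistical shape coefficients alpha, the per-vertex albedos a_i, and the illumination coefficients l_k of the spherical-harmonics basis H_k. The first term penalizes depth residuals, the second explains the observed color image I through Lambertian shading of the estimated surface normals n_i, and the third keeps the shape coefficients plausible under the prior's covariance Sigma.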
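The sparse scene representation combined with streaming can be read as a spatial hash that maps integer voxel-block coordinates to small dense blocks of a truncated signed distance field (TSDF), so that memory is spent only near observed surfaces and inactive blocks can be swapped out to host memory. The following is a minimal CPU sketch under that reading; the block size, truncation band, and names such as TSDFBlock and integrate_point are hypothetical, not the thesis's GPU implementation:

    import numpy as np

    VOXEL_SIZE = 0.01   # 1 cm voxels (hypothetical choice)
    BLOCK_DIM = 8       # 8 x 8 x 8 voxels per block
    TRUNCATION = 0.04   # truncation band of the signed distance, in meters

    class TSDFBlock:
        """One dense 8^3 block of truncated signed distances and fusion weights."""
        def __init__(self):
            self.tsdf = np.ones((BLOCK_DIM,) * 3, dtype=np.float32)
            self.weight = np.zeros((BLOCK_DIM,) * 3, dtype=np.float32)

    # Sparse scene: blocks are allocated lazily, only near the observed surface.
    blocks = {}  # integer block coordinates (bx, by, bz) -> TSDFBlock

    def integrate_point(p_world, sdf):
        """Fuse one signed-distance observation at a world-space point."""
        v = np.floor(np.asarray(p_world) / VOXEL_SIZE).astype(int)  # global voxel index
        key = tuple(v // BLOCK_DIM)                  # owning block in the hash
        block = blocks.setdefault(key, TSDFBlock())  # allocate on first touch
        i, j, k = v % BLOCK_DIM                      # voxel within the block
        w = block.weight[i, j, k]
        d = np.clip(sdf, -TRUNCATION, TRUNCATION)
        # Running weighted average, as in standard volumetric fusion.
        block.tsdf[i, j, k] = (block.tsdf[i, j, k] * w + d) / (w + 1.0)
        block.weight[i, j, k] = min(w + 1.0, 255.0)

    # Example: fuse an observation 8 mm in front of a surface point 2 m away.
    integrate_point([0.0, 0.0, 2.0], sdf=0.008)

Streaming then amounts to moving entries of the hash whose keys fall outside the sensor's active region to host memory and re-inserting them when the sensor returns, which is what makes a virtually unlimited reconstruction volume tractable.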
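The handle-based deformation paradigm can be illustrated with the simplest linear form of detail-preserving editing: the user pins a few handle vertices, and a least-squares solve repositions the remaining vertices so that the mesh's uniform-Laplacian differential coordinates are preserved. This toy version, with dense matrices, soft constraints, and a hypothetical constraint weight, only stands in for the interactive method in the thesis, which must scale to millions of polygons:

    import numpy as np

    def laplacian_edit(V, edges, handles, w=100.0):
        """Least-squares Laplacian editing of a vertex set V (n x 3).

        edges:   list of (i, j) index pairs defining mesh connectivity
        handles: dict {vertex index: target position} of user constraints
        w:       soft-constraint weight (hypothetical choice)
        """
        n = len(V)
        L = np.zeros((n, n))
        for i, j in edges:                 # build the uniform graph Laplacian
            L[i, i] += 1.0; L[j, j] += 1.0
            L[i, j] -= 1.0; L[j, i] -= 1.0
        delta = L @ V                      # differential coordinates of the rest shape
        rows, rhs = [L], [delta]
        for idx, target in handles.items():   # stack soft positional constraints
            c = np.zeros((1, n)); c[0, idx] = w
            rows.append(c)
            rhs.append(w * np.asarray(target, dtype=float)[None, :])
        X, *_ = np.linalg.lstsq(np.vstack(rows), np.vstack(rhs), rcond=None)
        return X

    # Toy usage: bend a chain of five vertices by lifting its last vertex.
    V = np.array([[i, 0.0, 0.0] for i in range(5)], dtype=float)
    edges = [(i, i + 1) for i in range(4)]
    V_new = laplacian_edit(V, edges, {0: V[0], 4: [4.0, 2.0, 0.0]})

The interior vertices follow the edit smoothly because any deviation from the original differential coordinates is penalized everywhere, which is what gives handle-based metaphors their intuitive feel.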