ISBN 978-3-8439-4681-0, Reihe Informatik
Matthias Innmann: Practical 3D Reconstruction of Non-Rigid Objects and Surface Reflectance
166 pages, dissertation, Universität Erlangen-Nürnberg (2020), softcover, A5
Acquiring accurate 3D models of the world is a long-standing research problem in both computer vision and computer graphics, with applications in many areas such as entertainment and autonomous driving. While algorithms that reconstruct static 3D geometry offer compelling results using practical acquisition methods, many other scene attributes have received less attention. In this thesis, we are specifically interested in non-rigid scene motion and surface reflectance, which are important properties for realistically modeling the physical world.

Almost any day-to-day situation contains non-rigid motion. This causes algorithms that assume static geometry to fail and, moreover, means that the motion information itself is lost. In the first part of this thesis, we present two methods that jointly reconstruct motion and geometry without requiring prior knowledge. The first method operates on a video stream of color and depth information at real-time frame rates. The second method relaxes these input requirements, working offline on a sparse set of color images. Additionally, we present a post-processing algorithm that establishes dense correspondences between 3D models of different human skulls using non-rigid registration. This method is used to interactively transfer landmarks placed on a template mesh to a database.

The reconstruction of surface reflectance is especially important for realistic visualization of objects. While many reconstruction methods assume Lambertian reflectance, this assumption does not hold for almost any real-world object. In the second part of this thesis, we present an algorithm designed for photo-studio-like setups. Our method jointly reconstructs the albedo, the surface reflectance, and the lighting setup. The outputs are spatially varying albedo as well as spatially varying surface reflectance at high quality, allowing photorealistic image synthesis from novel views or under different lighting.
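To illustrate the Lambertian assumption that the second part of the thesis moves beyond, the following minimal sketch (not taken from the thesis; function name and inputs are illustrative) shades a surface point with a purely diffuse, view-independent BRDF. Real-world glossy or specular materials violate exactly this view independence, which is why spatially varying reflectance needs to be reconstructed explicitly.

```python
import numpy as np

def lambertian_shading(albedo, normal, light_dir, light_intensity):
    """Diffuse (Lambertian) shading: outgoing radiance does not depend on the view direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    cos_theta = max(0.0, float(np.dot(n, l)))
    # The Lambertian BRDF is albedo / pi; radiance scales only with incident irradiance.
    return (albedo / np.pi) * light_intensity * cos_theta

# The shaded color is identical from every viewpoint -- the simplification that
# breaks down for most real-world (glossy, specular) materials.
print(lambertian_shading(np.array([0.8, 0.2, 0.2]),
                         np.array([0.0, 0.0, 1.0]),
                         np.array([0.0, 0.5, 1.0]),
                         3.0))
```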