
ISBN 978-3-8439-2195-4

96.00 € incl. VAT, plus shipping


978-3-8439-2195-4, Informatik series

Daniel Haase
Robust Data- and Model-Driven Anatomical Landmark Localization in Biomedical Scenarios

358 pages, dissertation, Friedrich-Schiller-Universität Jena (2015), softcover, A5

Abstract

One of the largest and practically most important application areas of computer vision comprises scenarios from biomedical contexts, i.e., from medicine, biology, and closely related fields such as psychology. The need for computer vision typically arises from the fact that huge amounts of data are available or can easily be acquired, while a manual analysis of the data is often time-consuming, expensive, tedious, and subjective. Many of these image-based biomedical analyses are performed on the basis of anatomical landmarks, i.e., a sparse set of keypoints located at anatomically relevant parts of the object of interest. The aim of this thesis is to develop, analyze, and evaluate robust automated methods for the detection, localization, and tracking of anatomical landmarks in biomedical imaging contexts. Depending on their differing needs for training data, the methods presented in this work are grouped into data-driven and model-driven techniques.

In the data-driven part, we first derive a robust extension of standard template matching which makes it possible to overcome severe local occlusions during landmark localization.
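
To make the idea more concrete, the following minimal sketch shows one way such an occlusion-robust template matcher could be realized: the template is split into sub-blocks, each block is scored by normalized cross-correlation, and the block scores are combined with a trimmed mean so that a few occluded blocks cannot dominate the match. The block grid, the trimming fraction, and all function names are illustrative assumptions, not necessarily the exact formulation used in the thesis.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def robust_match_score(patch, template, grid=4, trim=0.25):
    """Split patch and template into grid x grid blocks, score every block
    by NCC, and combine the block scores with a trimmed mean so that a few
    occluded (badly matching) blocks cannot dominate the overall score."""
    h, w = template.shape
    bh, bw = h // grid, w // grid
    scores = []
    for i in range(grid):
        for j in range(grid):
            sl = np.s_[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            scores.append(ncc(patch[sl], template[sl]))
    scores = np.sort(scores)            # lowest (presumably occluded) blocks first
    k = int(len(scores) * trim)         # discard the k worst blocks
    return float(np.mean(scores[k:]))

def localize_landmark(image, template, stride=1):
    """Slide the template over the image and return the top-left position
    with the highest robust matching score."""
    h, w = template.shape
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(0, image.shape[0] - h + 1, stride):
        for x in range(0, image.shape[1] - w + 1, stride):
            s = robust_match_score(image[y:y + h, x:x + w], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```

For non-trivial image sizes the exhaustive sliding-window loop would of course be replaced by an FFT-based or coarse-to-fine search; it is kept here only for clarity.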

Afterwards, we show how this extension can be combined with the pictorial structures approach to connect multiple occlusion-robust templates into one global framework for non-rigid objects. In a further step, we introduce a two-stage graph-based technique which focuses on the challenging identification and tracking of landmarks that are visually indistinguishable.
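
The coupling of several local detectors by a pictorial structures model can be illustrated with a small dynamic program: each landmark contributes a unary appearance cost (for instance one minus its robust template score from the sketch above), neighboring landmarks are connected by a quadratic deformation cost around an expected offset, and the jointly optimal configuration is found by message passing along the chain. The chain topology, the quadratic deformation prior, and all names below are simplifying assumptions; the thesis treats more general non-rigid objects, and the graph-based tracking of indistinguishable landmarks is a separate technique not sketched here.

```python
import numpy as np

def chain_pictorial_structures(unary_costs, candidates, expected_offsets, w=1.0):
    """Minimize sum_i U_i(x_i) + w * sum_i ||(x_{i+1} - x_i) - mu_i||^2 over a
    chain of parts by dynamic programming (Viterbi).

    unary_costs:      list of arrays, unary_costs[i][k] = appearance cost of
                      placing part i at its k-th candidate location
    candidates:       list of (K_i, 2) arrays of candidate (y, x) locations
    expected_offsets: list of 2-vectors mu_i, the expected offset from part i
                      to part i+1 (the deformation prior)
    """
    n = len(unary_costs)
    cost = [np.asarray(u, dtype=float) for u in unary_costs]
    back = [None] * n
    for i in range(1, n):
        # quadratic deformation cost between every candidate pair of parts i-1 and i
        diff = candidates[i][None, :, :] - candidates[i - 1][:, None, :] - expected_offsets[i - 1]
        pair = w * (diff ** 2).sum(axis=2)          # shape (K_{i-1}, K_i)
        total = cost[i - 1][:, None] + pair         # accumulate cost along the chain
        back[i] = total.argmin(axis=0)              # best predecessor for each candidate
        cost[i] = cost[i] + total.min(axis=0)
    # backtrack the jointly optimal configuration
    sel = [int(cost[-1].argmin())]
    for i in range(n - 1, 0, -1):
        sel.append(int(back[i][sel[-1]]))
    sel.reverse()
    return [tuple(candidates[i][k]) for i, k in enumerate(sel)]
```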

The model-driven part is entirely dedicated to the well-known active appearance model. After discussing key weaknesses of the standard model, we argue that a general solution for the identified problems is the inclusion of additional knowledge. We propose two opposing paradigms to achieve this goal: our augmented active appearance model approach aims to incorporate context knowledge about the fitting task from arbitrary sources during model fitting, while our second approach transfers knowledge to a target model using existing models trained on different but related data. The basis of both approaches is a probabilistic formulation of the respective underlying processes.
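
For orientation, the standard active appearance model and the idea of injecting additional knowledge into its fitting can be summarized roughly as follows; the notation and the particular factorization of the posterior are illustrative and do not claim to reproduce the thesis's exact probabilistic formulation.

```latex
% Standard AAM: linear shape and appearance models obtained by PCA
s(\mathbf{p}) = \bar{s} + \sum_i p_i\, s_i, \qquad
a(\boldsymbol{\lambda};\mathbf{x}) = \bar{a}(\mathbf{x}) + \sum_j \lambda_j\, a_j(\mathbf{x})

% Classical fitting: minimize the appearance reconstruction error over the warped image
E(\mathbf{p},\boldsymbol{\lambda})
  = \sum_{\mathbf{x}} \bigl[\, a(\boldsymbol{\lambda};\mathbf{x})
      - I\bigl(W(\mathbf{x};\mathbf{p})\bigr) \,\bigr]^2

% Probabilistic view with additional knowledge c (illustrative):
p(\mathbf{p},\boldsymbol{\lambda} \mid I, c) \;\propto\;
  p(I \mid \mathbf{p},\boldsymbol{\lambda})\;
  p(c \mid \mathbf{p},\boldsymbol{\lambda})\;
  p(\mathbf{p},\boldsymbol{\lambda})
```

In this reading, an augmented model could be understood as adding the context likelihood term for the knowledge c, whereas knowledge transfer would shape the prior over model parameters using models trained on related data; both interpretations are meant only as a rough guide to the two paradigms named above.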

All presented techniques are evaluated using data from three practically relevant real-world biomedical application scenarios, namely animal locomotion analysis, cardiac cycle analysis, and medical face analysis. The results clearly indicate the benefit of the proposed methods compared to established approaches.