Catalogue data as of November 12, 2024



ISBN 978-3-8439-3009-3

84.00 € incl. VAT, plus shipping


Series: Elektrotechnik

Stephan Zibner
A Neuro-Dynamic Architecture for Autonomous Visual Scene Representation

201 pages, doctoral dissertation, Ruhr-Universität Bochum (2016), softcover, A5

Abstract

Humans have a unique ability to interact with objects in their vicinity. The foundation of these interactions is the visual perception of scenes, from which internal representations are created. Behaviors such as reaching and grasping, as well as the generation and understanding of utterances, build on these representations. Processing visual scenes is a major challenge for robotics research, especially when scenes are novel or dynamic. In this thesis, I present a neuro-dynamic scene representation architecture. It creates working-memory representations of scenes, updates memory content when the scene changes, and can re-instantiate accumulated knowledge about the scene to search efficiently for target objects. At the core of the architecture, three-dimensional dynamic fields associate the spatial positions of objects with visual features such as color or size. The main focus of my work is the organization of the involved behaviors and the resulting autonomy of the architecture's processes. I evaluate the behaviors and processes generated by this architecture on robotic platforms and compare the results with behavioral signatures of human scene representation. I extend the principles of scene representation to two applications: object recognition and movement generation. The integration with object recognition makes it possible to locate target objects by means of abstract labels, whose generation is computationally demanding and thus cannot be applied in parallel across a visual scene. Instead, these labels are memorized in a sequential process provided by the scene representation architecture. Movement generation benefits from the continuous link to visual input and the autonomous organization of behaviors: changes in target position are continuously integrated into the ongoing movement. This property of on-line updating is also found in human arm movements. I conclude with a perspective on advanced integrative work in the context of robotic grasping.
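The dynamic fields at the core of such an architecture follow Amari-style neural field dynamics: activation relaxes toward a resting level, is driven by localized input, and is shaped by lateral interaction (local excitation, broader inhibition) so that a self-stabilized peak forms at the input location. The following is a minimal one-dimensional sketch of this mechanism, not the thesis's actual three-dimensional implementation; all parameter values and function names are illustrative assumptions.

```python
import numpy as np

def sigmoid(u, beta=4.0):
    """Sigmoidal output nonlinearity of the field (assumed gain beta)."""
    return 1.0 / (1.0 + np.exp(-beta * u))

def simulate_field(stimulus, steps=300, dt=1.0, tau=10.0, h=-5.0,
                   c_exc=1.0, sigma_exc=3.0, c_inh=0.5):
    """Euler integration of a 1D Amari-style dynamic neural field:
    tau * du/dt = -u + h + s(x) + conv(w_exc, f(u)) - c_inh * sum(f(u)).
    Parameters are hand-tuned for illustration only."""
    n = len(stimulus)
    x = np.arange(n)
    # local-excitation kernel (Gaussian centered on the field)
    kernel = c_exc * np.exp(-0.5 * ((x - n // 2) / sigma_exc) ** 2)
    u = np.full(n, h, dtype=float)  # start at the resting level
    for _ in range(steps):
        f = sigmoid(u)
        exc = np.convolve(f, kernel, mode="same")  # lateral excitation
        inh = c_inh * f.sum()  # global inhibition enforces selection
        u += (dt / tau) * (-u + h + stimulus + exc - inh)
    return u

# localized input: the field builds a self-stabilized peak at that location
s = np.zeros(101)
s[45:56] = 6.0
u = simulate_field(s)
```

After relaxation, activation is above threshold only near the stimulated region and suppressed elsewhere; in the full architecture such fields are three-dimensional, coupling two spatial dimensions with a feature dimension such as hue.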