ISBN 978-3-8439-2481-8, Series: Robotik und Automation
Krishna Kumar Narayanan: Learning Vision based Mobile Robot Behaviors from Demonstration
193 pages, dissertation, Technische Universität Dortmund (2015), softcover, A5
Autonomous service robots that scale, learn flexibly, accommodate new tasks and thereby assist humans are a long-term vision of robotics and artificial intelligence. Among the many skills we desire from mobile robots, the ability to navigate autonomously is particularly important. Behavior-based control is one approach to realizing this vision: the robot's action is divided into independent primitive motion substrates called behaviors. Manually designing and programming such behaviors requires an astute understanding of the environment and its effect on the behavioral response. If the perceptual sensor is, moreover, vision-based, the task becomes even more challenging because of the complexity of the visual information. Learning from Demonstration (LfD) alleviates this process: a teacher demonstrates examples of perceptual state-action pairs for a behavior, which are then transferred into a behavioral policy without any explicit programming.
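To make the Learning-from-Demonstration idea above concrete, the following minimal Python sketch turns demonstrated perceptual state-action pairs into a behavioral policy by supervised learning. The downsampled-image features, the k-nearest-neighbour classifier and the synthetic demonstration data are illustrative assumptions, not the method of the thesis.

```python
# Minimal Learning-from-Demonstration sketch: demonstrated
# (perceptual state, action) pairs become a behavioral policy via
# supervised learning. All concrete choices below (features, classifier,
# synthetic data) are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_features(omni_image: np.ndarray) -> np.ndarray:
    """Hypothetical perceptual state: a coarse, flattened thumbnail
    of the omnidirectional camera image."""
    return omni_image[::16, ::16].astype(np.float32).ravel()

# Stand-in for the demonstration phase: a teacher would drive the robot
# while (camera image, commanded action) pairs are logged. Here we use
# synthetic images and commands so the sketch runs as-is.
rng = np.random.default_rng(0)
recorded_demonstrations = [
    (rng.integers(0, 256, size=(480, 640), dtype=np.uint8),
     rng.choice(["left", "straight", "right"]))
    for _ in range(30)
]

states = np.stack([extract_features(img) for img, _ in recorded_demonstrations])
actions = [cmd for _, cmd in recorded_demonstrations]

# Learning phase: fit a policy without any explicit behavior programming.
policy = KNeighborsClassifier(n_neighbors=5)
policy.fit(states, actions)

# Execution phase: the learned behavior maps a new image to a motion command.
def behavior_step(omni_image: np.ndarray) -> str:
    return str(policy.predict(extract_features(omni_image)[None, :])[0])
```

Any other supervised learner could stand in for the classifier here; the point is only that the policy is induced from the teacher's examples rather than hand-coded.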
Despite steady progress in this area, many of the design decisions needed for a successful architecture for learning vision-based mobile robot behaviors have not been fully answered. This work addresses these issues and proposes a framework for learning visual indoor robot behaviors from demonstration examples. Demonstrations are performed by a human teacher with a robot equipped with an omnidirectional camera as its main navigational sensor. The design and analysis of different intuitive demonstration modes, aimed at a natural, flexible and user-friendly interface, are presented, and the fidelity of the demonstration data generated by the different teaching modes is evaluated. Situated learning of the robot behaviors is accomplished by learning scenario-specific behaviors; further behavior modularities, such as learning individual behavioral representations, are also addressed. Besides the supervised behavior learning models, an architecture for self-learning vision-based robot behaviors within the LfD framework is proposed, yielding successful solutions to challenging problems such as one-shot learning and learning from scratch. The goal of the thesis is to introduce the reader to the potential of Learning from Demonstration for vision-based mobile robot behaviors and to propose the steps and design decisions necessary to build such a framework.
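The scenario-specific behaviors mentioned above suggest a modular organization in which each learned policy sits behind a common interface. The following sketch illustrates one such organization; the scenario labels, the dispatch rule and the stand-in policies are hypothetical and do not describe the architecture of the thesis.

```python
# Hypothetical organization of scenario-specific learned behaviors behind
# one dispatch interface; the labels and stand-in policies are assumptions.
from typing import Callable, Dict
import numpy as np

Policy = Callable[[np.ndarray], str]  # perceptual state -> motion command

class BehaviorController:
    """Routes each camera frame to the behavior learned for the
    currently active scenario (e.g. corridor following or door passing)."""

    def __init__(self, behaviors: Dict[str, Policy]):
        self.behaviors = behaviors

    def step(self, scenario: str, omni_image: np.ndarray) -> str:
        return self.behaviors[scenario](omni_image)

# Stand-in policies; in an LfD framework each entry would be a policy
# learned from demonstrations, e.g. the behavior_step sketched above.
controller = BehaviorController({
    "corridor": lambda img: "straight",
    "doorway": lambda img: "left",
})
command = controller.step("corridor", np.zeros((480, 640)))
```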