Learning-based perception, control, and navigation for autonomous missions in aerial robotics

  1. Sampedro Pérez, Carlos
Directed by:
  1. Pascual Campoy Cervera (Director)

Defending university: Universidad Politécnica de Madrid

Date of defense: 18 December 2019

Examination committee:
  1. Sergio Dominguez Cabrerizo (Chair)
  2. Luis Marino (Secretary)
  3. Guido de Croon (Member)
  4. Matilde Santos Peñas (Member)
  5. Fernando Caballero Benítez (Member)
  6. María Dolores Rodríguez Moreno (Member)
  7. Jose Luis Sanchez Lopez (Member)

Type: Thesis

Abstract

The design of algorithms for mobile robotic systems represents a significant challenge, since these systems must operate in a wide range of unstructured scenarios in which the robot is required to interact with the environment efficiently. This interaction can become critical for aerial robotic systems, where a small failure (e.g., touching an obstacle during navigation) can compromise the safety of the entire system. In order to achieve complex behaviors capable of handling the wide spectrum of situations that can occur during the execution of a mission, the research community is moving towards biologically inspired and learning-based solutions. Artificial intelligence, and more specifically machine learning models, are gaining importance owing to their versatility across the wide range of conditions in which mobile robotic systems have to operate. These models are able to learn directly from data, avoiding the use of handcrafted heuristics. In recent years, the growth of computational resources has enabled the use of more complex learning-based techniques. In this direction, deep learning techniques are being widely researched, since they provide non-linear models capable of learning complex and robust behaviors.

To this end, this thesis presents different learning-based solutions, primarily based on deep neural networks, integrated into essential layers of an aerial robotic architecture: perception, control, and navigation. In this work, perception is conceived as the task of using vision-based data for object recognition, which is addressed by designing both traditional learning-based algorithms and more sophisticated fully convolutional networks. In addition, a higher level of intelligence is added by designing supervised and semi-supervised models, based on Siamese networks and autoencoders, for detecting anomalies or abnormal states in the objects previously recognized. Regarding the control and navigation tasks, we use deep reinforcement learning algorithms which are efficiently trained for visual servoing and target-driven reactive navigation. Finally, we propose a versatile mission planning system capable of coordinating the different subsystems to perform fully autonomous missions in dynamic and unstructured scenarios.

Most of the algorithms presented in this thesis have been validated in simulation and in real-flight experiments using different aerial robotic platforms. The results obtained in the experiments conducted throughout this thesis demonstrate the robust capabilities provided by learning-based systems: these algorithms have proven able to learn complex behaviors that can help in situations where traditional algorithms show limitations.
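The abstract mentions semi-supervised anomaly detection with autoencoders. The following is a minimal sketch of how such a detector is commonly set up, not the thesis implementation: a convolutional autoencoder is trained only on images of nominal objects, and at inference time a patch is flagged as anomalous when its reconstruction error exceeds a threshold. The architecture, patch size, and threshold below are illustrative assumptions.

```python
# Minimal sketch of semi-supervised anomaly detection with a convolutional
# autoencoder (illustrative assumptions, not the thesis implementation).
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 64x64 grayscale patch (values in [0, 1]) -> compact latent code
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 8x8
        )
        # Decoder: latent code -> reconstructed patch
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, loader, epochs=20, lr=1e-3):
    """Train only on nominal (non-anomalous) patches; loader yields 1-tuples of batches."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for (x,) in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), x)
            loss.backward()
            opt.step()

@torch.no_grad()
def is_anomalous(model, x, threshold=0.01):
    """Flag patches whose reconstruction error is high; threshold tuned on held-out normal data."""
    err = torch.mean((model(x) - x) ** 2, dim=(1, 2, 3))
    return err > threshold
```

Because the model only ever sees normal samples during training, it reconstructs them well but tends to reconstruct unseen abnormal appearances poorly, which is what the error threshold exploits.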
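For the target-driven reactive navigation task trained with deep reinforcement learning, a common way to pose the problem is as an episodic environment whose observation encodes the target relative to the robot and whose reward rewards progress toward it. The sketch below is only an assumed toy formulation (a 2-D point-mass stand-in for the aerial platform, with made-up state, action, and reward definitions), not the thesis formulation; the commented training snippet assumes the third-party stable-baselines3 library.

```python
# Minimal sketch of target-driven navigation posed as a deep-RL problem
# (toy 2-D point-mass environment; all definitions are illustrative assumptions).
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class TargetNav2D(gym.Env):
    """Toy 2-D point-mass robot that must reach a randomly placed target."""
    def __init__(self, dt=0.1, arena=10.0):
        self.dt, self.arena = dt, arena
        # Observation: target position relative to the robot (x, y)
        self.observation_space = spaces.Box(-2 * arena, 2 * arena, shape=(2,), dtype=np.float32)
        # Action: commanded planar velocity (vx, vy), bounded
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.uniform(-self.arena, self.arena, size=2)
        self.target = self.np_random.uniform(-self.arena, self.arena, size=2)
        self.steps = 0
        return (self.target - self.pos).astype(np.float32), {}

    def step(self, action):
        prev_dist = np.linalg.norm(self.target - self.pos)
        self.pos = self.pos + np.clip(action, -1.0, 1.0) * self.dt
        dist = np.linalg.norm(self.target - self.pos)
        self.steps += 1
        reward = prev_dist - dist              # shaped reward: progress toward the target
        terminated = bool(dist < 0.2)          # episode ends when the target is reached
        if terminated:
            reward += 10.0
        truncated = self.steps >= 500
        return (self.target - self.pos).astype(np.float32), reward, terminated, truncated, {}

# Training with an off-the-shelf actor-critic algorithm (requires stable-baselines3):
# from stable_baselines3 import SAC
# model = SAC("MlpPolicy", TargetNav2D(), verbose=1)
# model.learn(total_timesteps=200_000)
```

In a realistic setting the observation would additionally include obstacle or range information so the learned policy can react to the environment, which is what distinguishes reactive navigation from pure waypoint tracking.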