Towards energy efficiency and productivity for decision making in mobile robot navigation

  1. Constantinescu, Denisa-Andreea
Supervised by:
  1. Ángeles González Navarro (Director)
  2. Rafael Asenjo Plaza (Director)

Awarding university: Universidad de Málaga

Date of defense: 20 July 2022

Committee:
  1. Francisco Tirado Fernández (Chair)
  2. Nicolas Guil Matas (Secretary)
  3. David Atienza Alonso (Member)

Type: Thesis

Teseo: 733173 · DIALNET · RIUMA

Abstract

Our goal in this work is to make it easy and feasible to implement solutions for autonomous decision-making and planning under uncertainty on low-power mobile platforms. We focus on practical applications, such as autonomous driving and service robotics, that must run on mobile SoC platforms, often under real-time execution constraints. The main challenge is to keep runtime and energy consumption in check while enabling users (programmers) to code efficient solvers for decision-making problems. Our proposal combines low-power heterogeneous computing strategies, sparse data structures that fit real-world-sized decision-making problems on SoCs with scarce memory and computing resources, and the oneAPI programming model with DPC++.

In the first part of this thesis, we compare three heterogeneous scheduling strategies for running parallel code on CPU+iGPU SoCs. We evaluate their performance on a set of benchmarks that plan sequences of actions for mobile robot navigation. The benchmarks compute an optimal navigation plan through the Value Iteration algorithm, a fundamental method for finding optimal policies in decision making under uncertainty that allows an intelligent agent modeled as a Markov Decision Process (MDP) to act autonomously. Our experimental results show that the implementations based on the oneAPI programming model are up to 5x easier to program than those based on OpenCL, while incurring only 3 to 8% overhead.

In the second part, we take the lessons learned from optimizing Value Iteration for low-power execution and apply them to a more complex autonomous decision-making framework that accounts for all sources of uncertainty in the agent's interaction with the environment: Partially Observable Markov Decision Processes (POMDPs). We propose a new method for online planning under uncertainty for POMDPs, Recall-Planner, which outperforms state-of-the-art online planners on a well-known set of real-time navigation benchmarks.
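For readers unfamiliar with Value Iteration, the Bellman backup that the navigation benchmarks parallelize can be sketched on a toy problem. The deterministic "corridor" MDP below is a hypothetical illustration: the state space, rewards, and discount factor are invented for this sketch and are not taken from the thesis benchmarks.

```python
# Toy deterministic corridor MDP (hypothetical example, not the thesis
# benchmarks): states 0..4, where state 4 is a terminal goal that pays
# reward 1 on arrival; the agent can step left or right.
N_STATES = 5
GAMMA = 0.95            # discount factor
THETA = 1e-9            # convergence threshold
ACTIONS = (-1, +1)      # move left / move right

V = [0.0] * N_STATES    # value function, initialized to zero
while True:
    delta = 0.0
    for s in range(N_STATES - 1):                    # skip the terminal state
        best = float("-inf")
        for a in ACTIONS:
            s2 = min(max(s + a, 0), N_STATES - 1)    # clamp at the walls
            r = 1.0 if s2 == N_STATES - 1 else 0.0   # reward on reaching goal
            best = max(best, r + GAMMA * V[s2])      # Bellman optimality backup
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < THETA:                                # values have converged
        break

# Extract the greedy policy: in each state, pick the action that
# maximizes the one-step backup under the converged values.
policy = []
for s in range(N_STATES - 1):
    q = {}
    for a in ACTIONS:
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        q[a] = r + GAMMA * V[s2]
    policy.append(max(q, key=q.get))
```

The outer sweep over states is what maps naturally onto CPU+iGPU parallelism: each state's backup is independent within a sweep, and for realistic navigation problems the transition structure is sparse, which is why the thesis pairs this algorithm with sparse data structures on memory-constrained SoCs.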
This research demonstrates that it is feasible to solve large-scale (Partially Observable) Markov Decision Processes in real time on low-power heterogeneous CPU+iGPU platforms. We can achieve both performance and productivity if the scheduling strategy and programming model are carefully selected. In particular, we remark that the oneAPI programming model creates new opportunities to improve productivity, performance, and energy efficiency in low-power systems.