Mobile Robot Navigation in Dynamic Environments
Deep Reinforcement Learning for dynamic robot navigation
Author: Ahmed Yesuf Nurye
Advisor: Prof. Elżbieta Jarzębowska

Abstract
We present a framework for mobile robot navigation in dynamic environments using Deep Reinforcement Learning (DRL) and the Robot Operating System (ROS). Traditional navigation methods often lack the real-time adaptability required in highly dynamic settings. To address this, we leverage the TD7 algorithm, an extension of the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm that adds learned state and state-action embeddings, to map raw sensor inputs directly to control actions. These embeddings are trained to minimize the mean squared error (MSE) between the state-action embedding and the embedding of the observed next state, which helps the agent capture environment dynamics and improves navigation performance.
Extensive simulations were conducted in custom Gazebo environments of increasing complexity, ranging from open spaces to scenarios with static obstacles and moving actors. Performance was evaluated based on navigation success rate, time to goal, path efficiency, and collision rate. Results indicate that this approach consistently improves navigation performance, particularly in highly dynamic environments.
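The embedding objective described above can be illustrated with a minimal NumPy sketch. Linear maps stand in for the real encoder networks, and all names and dimensions here are illustrative choices, not the thesis implementation: a state encoder `f` produces z_s, a state-action encoder `g` produces z_sa, and the loss is the MSE between z_sa and the embedding of the observed next state.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, EMB_DIM = 8, 2, 4  # toy sizes, not the thesis values

# Linear "encoders" stand in for the actual neural networks.
W_f = rng.normal(size=(EMB_DIM, STATE_DIM))             # state encoder f
W_g = rng.normal(size=(EMB_DIM, EMB_DIM + ACTION_DIM))  # state-action encoder g

def f(s):
    """State embedding z_s = f(s)."""
    return W_f @ s

def g(z_s, a):
    """State-action embedding z_sa = g(z_s, a)."""
    return W_g @ np.concatenate([z_s, a])

def embedding_loss(s, a, s_next):
    """MSE between z_sa and the embedding of the observed next state."""
    z_sa = g(f(s), a)
    target = f(s_next)  # in practice treated as a fixed target during the update
    return float(np.mean((z_sa - target) ** 2))

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = rng.normal(size=STATE_DIM)
loss = embedding_loss(s, a, s_next)
print(f"embedding loss: {loss:.3f}")
```

Minimizing this loss drives g(f(s), a) to predict where the environment will take the agent, which is what lets the downstream policy exploit the learned dynamics.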
Network Architecture
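The wiring of a TD7-style actor-critic can be sketched in plain NumPy. Layer sizes, the ReLU/tanh choices, and the two-dimensional (linear, angular) velocity output are assumptions for illustration, not the network described in the thesis: the actor consumes the raw state together with its embedding z_s, while the critic additionally receives the action and the state-action embedding z_sa.

```python
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM, ACTION_DIM, EMB_DIM, HIDDEN = 8, 2, 4, 16  # illustrative sizes

def dense(in_dim, out_dim):
    """Weights for one dense layer (biases omitted for brevity)."""
    return rng.normal(scale=0.1, size=(out_dim, in_dim))

# Actor: pi(s, z_s) -> action in [-1, 1]^ACTION_DIM
W_a1 = dense(STATE_DIM + EMB_DIM, HIDDEN)
W_a2 = dense(HIDDEN, ACTION_DIM)

def actor(s, z_s):
    h = np.maximum(0.0, W_a1 @ np.concatenate([s, z_s]))  # ReLU hidden layer
    return np.tanh(W_a2 @ h)  # bounded controls, e.g. (linear, angular) velocity

# Critic: Q(s, a, z_s, z_sa) -> scalar value estimate
W_c1 = dense(STATE_DIM + ACTION_DIM + 2 * EMB_DIM, HIDDEN)
W_c2 = dense(HIDDEN, 1)

def critic(s, a, z_s, z_sa):
    h = np.maximum(0.0, W_c1 @ np.concatenate([s, a, z_s, z_sa]))
    return float(W_c2 @ h)

s = rng.normal(size=STATE_DIM)
z_s = rng.normal(size=EMB_DIM)
z_sa = rng.normal(size=EMB_DIM)
a = actor(s, z_s)
q = critic(s, a, z_s, z_sa)
```

Feeding the embeddings alongside the raw inputs is what distinguishes this architecture from a plain TD3 actor-critic: both networks see the learned dynamics representation in addition to the sensor state.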

Simulation Environment

The framework was tested in Gazebo simulation environments with increasing levels of complexity.
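The evaluation metrics listed in the abstract can be computed from per-episode logs roughly as follows. The record fields and the straight-line definition of path efficiency are assumptions for illustration, not necessarily the exact definitions used in the thesis:

```python
# Hypothetical per-episode records: outcome, time to goal [s],
# traveled path length [m], straight-line start-to-goal distance [m].
episodes = [
    {"outcome": "success",   "time_s": 12.4, "path_m": 6.1,  "straight_m": 5.0},
    {"outcome": "success",   "time_s": 15.0, "path_m": 7.8,  "straight_m": 6.2},
    {"outcome": "collision", "time_s": None, "path_m": None, "straight_m": 5.5},
    {"outcome": "timeout",   "time_s": None, "path_m": None, "straight_m": 4.8},
]

successes = [e for e in episodes if e["outcome"] == "success"]

success_rate = len(successes) / len(episodes)
collision_rate = sum(e["outcome"] == "collision" for e in episodes) / len(episodes)
mean_time_to_goal = sum(e["time_s"] for e in successes) / len(successes)
# Path efficiency: 1.0 means the robot drove the straight-line optimum.
path_efficiency = sum(e["straight_m"] / e["path_m"] for e in successes) / len(successes)

print(f"success rate:    {success_rate:.2f}")
print(f"collision rate:  {collision_rate:.2f}")
print(f"mean time/goal:  {mean_time_to_goal:.1f} s")
print(f"path efficiency: {path_efficiency:.2f}")
```

Time-based and path-based metrics are averaged over successful episodes only, since failed runs have no meaningful time to goal.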

Citation

@mastersthesis{Nurye-2024,
author = {Nurye, Ahmed Y.},
title = {Mobile Robot Navigation in Dynamic Environments},
year = {2024},
month = oct,
school = {Warsaw University of Technology},
address = {Warsaw, Poland},
number = {WUT4f18e5c2cd214a9cb555f730fa440901},
keywords = {Mobile Robot Navigation, Deep Reinforcement Learning, ROS2, Gazebo},
}