Mobile Robot Navigation in Dynamic Environments

Ahmed Yesuf Nurye1, Elżbieta Jarzębowska1
1 Warsaw University of Technology

Read Paper

GitHub Repository
Example simulation.

Abstract

This paper presents a framework for mobile robot navigation in dynamic environments using deep reinforcement learning (DRL) and the Robot Operating System (ROS). The framework enables proactive adaptation to environmental changes. Traditional navigation methods typically assume a static environment and treat moving obstacles as outliers during mapping and localization. This assumption severely limits the robustness of these methods in highly dynamic settings such as homes, hospitals, and other public spaces. To overcome this limitation, we employ a pair of encoder networks that jointly learn state and state-action representations by minimizing the mean squared error (MSE) between predicted and actual next-state embeddings. This approach explicitly captures the environment's transition dynamics, enabling the robot to anticipate and effectively navigate around moving obstacles.

We evaluate the proposed framework through extensive simulations in custom Gazebo worlds of increasing complexity, ranging from open spaces to scenarios with densely populated static obstacles and moving actors. We assess performance in terms of success rate, time to goal, path efficiency, and collision rate. Results demonstrate that our approach consistently improves navigation performance, particularly in highly dynamic environments.


Network Architecture

To capture dynamic actors in the environment and enable the policy to select more informed actions, it is essential to predict the next state of the environment accurately. For this purpose, a pair of encoders (a state encoder and a state-action encoder) is used.
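A minimal sketch of this encoder pair and its training objective, assuming small MLP encoders and illustrative input/embedding sizes (the actual architectures and dimensions are not specified here): the state-action encoder predicts the next-state embedding, and both encoders are trained by minimizing the MSE against the state encoder's embedding of the observed next state.

```python
import torch
import torch.nn as nn

# Dimensions below are illustrative assumptions, not the paper's values.
STATE_DIM, ACTION_DIM, EMBED_DIM = 24, 2, 64

class StateEncoder(nn.Module):
    """Maps a state observation to a latent embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMBED_DIM),
        )

    def forward(self, state):
        return self.net(state)

class StateActionEncoder(nn.Module):
    """Maps a (state, action) pair to a predicted next-state embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMBED_DIM),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

state_enc = StateEncoder()
sa_enc = StateActionEncoder()

# One training step on a dummy batch of transitions (s, a, s').
s = torch.randn(32, STATE_DIM)
a = torch.randn(32, ACTION_DIM)
s_next = torch.randn(32, STATE_DIM)

pred = sa_enc(s, a)                  # predicted embedding of s'
target = state_enc(s_next).detach()  # actual embedding of observed s'
loss = nn.functional.mse_loss(pred, target)
loss.backward()
```

Detaching the target embedding is one common design choice to stabilize training; whether the paper does so here is an assumption.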

Simulation Environment

Upon reset, the positions of the obstacles are randomly altered to enhance generalization, and new start and goal positions are sampled at random.
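A hypothetical reset routine mirroring this randomization; the workspace bounds, clearance threshold, and pose representation are illustrative assumptions, not the actual ROS/Gazebo interfaces used in the paper.

```python
import random

WORKSPACE = (-5.0, 5.0)  # square workspace bounds in metres (assumed)
MIN_CLEARANCE = 0.5      # minimum spacing between sampled poses (assumed)

def sample_pose(existing, rng):
    """Sample a 2D position at least MIN_CLEARANCE away from existing poses."""
    while True:
        x = rng.uniform(*WORKSPACE)
        y = rng.uniform(*WORKSPACE)
        if all((x - ex) ** 2 + (y - ey) ** 2 >= MIN_CLEARANCE ** 2
               for ex, ey in existing):
            return (x, y)

def reset_episode(n_obstacles, rng=None):
    """Randomize obstacle, start, and goal positions for a new episode."""
    rng = rng or random.Random()
    placed = []
    for _ in range(n_obstacles):
        placed.append(sample_pose(placed, rng))
    obstacles = list(placed)
    start = sample_pose(placed, rng)
    placed.append(start)
    goal = sample_pose(placed, rng)
    return obstacles, start, goal
```

In a real Gazebo setup the sampled poses would then be applied through the simulator's model-state interface rather than returned as plain tuples.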

The framework was tested in Gazebo simulation environments of increasing complexity, ranging from open spaces to scenes densely populated with static obstacles and moving actors, to assess its robustness and effectiveness.
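The evaluation metrics named earlier (success rate, time to goal, path efficiency, collision rate) can be sketched as below; the formulas are common conventions and assumptions, not taken verbatim from the paper. In particular, path efficiency is taken here as the straight-line start-goal distance divided by the travelled distance.

```python
import math

def path_length(path):
    """Total length of a 2D path given as a list of (x, y) waypoints."""
    return sum(math.dist(p, q) for p, q in zip(path, path[1:]))

def path_efficiency(path, start, goal):
    """Straight-line distance divided by travelled distance (at most 1)."""
    travelled = path_length(path)
    return math.dist(start, goal) / travelled if travelled > 0 else 0.0

def summarize(episodes):
    """episodes: list of dicts with keys 'success', 'collided', 'time'."""
    n = len(episodes)
    successes = sum(e["success"] for e in episodes)
    return {
        "success_rate": successes / n,
        "collision_rate": sum(e["collided"] for e in episodes) / n,
        "mean_time_to_goal": (
            sum(e["time"] for e in episodes if e["success"]) / max(1, successes)
        ),
    }
```

Time to goal is averaged over successful episodes only, which is one common convention; the paper may aggregate differently.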
One potential application of this framework is exploration: the robot can autonomously navigate an unknown environment while a SLAM framework builds a map of that environment for further applications.
Citation

@INPROCEEDINGS{nurye2025,
author={Nurye, Ahmed Yesuf and Jarzebowska, Elzbieta},
booktitle={2025 29th International Conference on Methods and Models in Automation and Robotics (MMAR)},
title={Deep Reinforcement Learning for Mobile Robot Navigation in Dynamic Environments},
year={2025}
}