Publications

Publications by category in reverse chronological order. Generated by jekyll-scholar.
2025
- Deep Reinforcement Learning for Mobile Robot Navigation in Dynamic Environments
  Ahmed Yesuf Nurye and Elzbieta Jarzebowska
  In 2025 29th International Conference on Methods and Models in Automation and Robotics (MMAR), 2025
We present a framework for mobile robot navigation in dynamic environments using Deep Reinforcement Learning (DRL) and the Robot Operating System (ROS). Traditional navigation methods often lack the real-time adaptability required in highly dynamic settings. To address this, we leverage the TD7 algorithm—an extension of the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm incorporating state and state-action embeddings—to directly map raw sensor inputs to control actions. These embeddings, trained to minimize the mean squared error (MSE) between the encoded state-action representation and the transition-predicted next state, enhance the system’s ability to model environment dynamics and improve navigation performance. Extensive simulations were conducted in custom Gazebo environments of increasing complexity, ranging from open spaces to scenarios with static obstacles and moving actors. Performance was evaluated based on navigation success rate, time to goal, path efficiency, and collision rate. Results indicate that this approach consistently improves navigation performance, particularly in highly dynamic environments.
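The embedding objective summarized in the abstract can be written out as follows. This is a sketch based on the published TD7/SALE formulation (state encoder and state-action encoder trained to predict the next-state embedding); the symbols here are illustrative and not taken from the paper itself:

```latex
% State embedding and state-action embedding:
%   z^{s} = f(s), \qquad z^{sa} = g(z^{s}, a)
% Both encoders are trained jointly by minimizing the MSE between the
% state-action embedding and the embedding of the observed next state s':
\mathcal{L}(f, g) = \bigl\| \, g\bigl(f(s), a\bigr) - f(s') \, \bigr\|_2^2
```

Minimizing this loss forces the state-action embedding to act as a one-step dynamics predictor in latent space, which is what gives the policy its improved model of the environment dynamics.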
@inproceedings{Nurye-2025,
  author    = {Nurye, Ahmed Yesuf and Jarzebowska, Elzbieta},
  booktitle = {2025 29th International Conference on Methods and Models in Automation and Robotics (MMAR)},
  title     = {Deep Reinforcement Learning for Mobile Robot Navigation in Dynamic Environments},
  year      = {2025},
  keywords  = {Mobile Robot Navigation; Deep Reinforcement Learning; TD3; TD7; ROS; Gazebo},
}
2024
- Mobile Robot Navigation in Dynamic Environments
  Ahmed Y. Nurye
  M.Sc. Thesis, Warsaw University of Technology, Warsaw, Oct 2024
We present a framework for mobile robot navigation in dynamic environments using Deep Reinforcement Learning (DRL) and the Robot Operating System (ROS). Traditional navigation methods often lack the real-time adaptability required in highly dynamic settings. To address this, we leverage the TD7 algorithm—an extension of the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm incorporating state and state-action embeddings—to directly map raw sensor inputs to control actions. These embeddings, trained to minimize the mean squared error (MSE) between the encoded state-action representation and the transition-predicted next state, enhance the system’s ability to model environment dynamics and improve navigation performance. Extensive simulations were conducted in custom Gazebo environments of increasing complexity, ranging from open spaces to scenarios with static obstacles and moving actors. Performance was evaluated based on navigation success rate, time to goal, path efficiency, and collision rate. Results indicate that this approach consistently improves navigation performance, particularly in highly dynamic environments.
@mastersthesis{Nurye-2024,
  author   = {Nurye, Ahmed Y.},
  title    = {Mobile Robot Navigation in Dynamic Environments},
  year     = {2024},
  month    = oct,
  school   = {Warsaw University of Technology},
  location = {Warsaw},
  number   = {WUT4f18e5c2cd214a9cb555f730fa440901},
  keywords = {Mobile Robot Navigation, Deep Reinforcement Learning, ROS2, Gazebo},
}