We present a framework for mobile robot navigation in dynamic environments using Deep Reinforcement Learning (DRL) and the Robot Operating System (ROS). Traditional navigation pipelines, which chain separate modules for mapping, localization, planning, and control, often lack the real-time adaptability required in unpredictable environments. To address this, we propose a streamlined approach that maps raw sensor inputs directly to control actions using TD7, an extension of the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm with learned state-action embeddings. These embeddings are trained to predict the representation of the next state, allowing the agent to better model environmental dynamics and markedly improving navigation performance in dynamic settings. Extensive simulations were conducted in three environments of increasing complexity, ranging from obstacle-free spaces to scenarios with static obstacles and dynamic actors. The results show that our DRL-based approach consistently outperforms baseline methods, with the largest gains in the most complex environments.
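The core idea behind the state-action embeddings can be illustrated with a minimal sketch: a state encoder and a state-action encoder are trained so that the state-action embedding predicts the (frozen) embedding of the next state. The sketch below uses plain NumPy with linear encoders and hand-written gradients purely for illustration; the actual TD7 implementation uses neural-network encoders, and all dimensions, learning rates, and the toy linear dynamics here are assumptions, not values from this work.

```python
import numpy as np

rng = np.random.default_rng(0)
S_DIM, A_DIM, Z_DIM, LR = 8, 2, 4, 1e-2

# Linear "encoders" standing in for the neural encoders used in practice
Ws = rng.normal(scale=0.1, size=(Z_DIM, S_DIM))   # state encoder: z_s = Ws @ s
Wz = rng.normal(scale=0.1, size=(Z_DIM, Z_DIM))   # state-action encoder (state part)
Wa = rng.normal(scale=0.1, size=(Z_DIM, A_DIM))   # state-action encoder (action part)

def embedding_loss(s, a, s_next):
    """MSE between the state-action embedding and the next-state embedding target."""
    zs = Ws @ s                  # z_s = f(s)
    zsa = Wz @ zs + Wa @ a       # z_sa = g(z_s, a)
    target = Ws @ s_next         # f(s'), treated as a stop-gradient target
    err = zsa - target
    return err, zs, float(np.mean(err ** 2))

# Toy linear environment dynamics (an assumption for this demo only)
A_dyn = rng.normal(scale=0.3, size=(S_DIM, S_DIM))
B_dyn = rng.normal(scale=0.3, size=(S_DIM, A_DIM))

losses = []
for step in range(500):
    s = rng.normal(size=S_DIM)
    a = rng.normal(size=A_DIM)
    s_next = A_dyn @ s + B_dyn @ a
    err, zs, loss = embedding_loss(s, a, s_next)
    losses.append(loss)
    # Manual gradient step; the target branch (Ws @ s_next) is not differentiated
    g = 2.0 * err / Z_DIM
    Wz -= LR * np.outer(g, zs)
    Wa -= LR * np.outer(g, a)
    Ws -= LR * np.outer(Wz.T @ g, s)  # gradient flows through z_s only

print(f"embedding loss: {np.mean(losses[:50]):.4f} -> {np.mean(losses[-50:]):.4f}")
```

Training drives the state-action embedding toward the next-state embedding, so the learned representation carries predictive information about the environment's dynamics, which the downstream actor and critics can then exploit.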