Applied Sciences (Switzerland), vol. 16, no. 6, 2026 (SCI-Expanded, Scopus)
Traditional navigation methods work well in known, static environments but degrade in real-world settings with dynamic and unpredictable obstacles. This paper presents Double Deep Q-Network with A* guidance (DDQNA), a hybrid navigation algorithm that enables an agent to traverse mazes containing static and dynamic obstacles while maintaining a low probability of collision. DDQNA combines A* guidance with Double Deep Q-Network (DDQN) learning using an ε-greedy policy, and it introduces a redesigned reward function and an improved action-selection mechanism to better exploit A*'s directional cues during training. We evaluate DDQNA in a custom Pygame simulation across 11 environments of increasing difficulty. Experimental results show that DDQNA consistently outperforms the standard DDQN and other state-of-the-art reinforcement learning baselines, achieving higher goal-reaching rates, fewer visited cells, shorter computation times, and higher cumulative rewards. These results indicate that DDQNA provides both effective navigation and computational efficiency in complex environments with static and dynamic obstacles.
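One way the A*-guided ε-greedy selection described in the abstract could be realized is sketched below. This is a minimal illustration, not the paper's exact mechanism: the `guide_prob` parameter, which biases exploration toward the A*-suggested action, is an assumption introduced here for clarity.

```python
import random

def guided_epsilon_greedy(q_values, astar_action, epsilon,
                          guide_prob=0.7, rng=random):
    """Select an action index under an A*-guided epsilon-greedy scheme.

    With probability 1 - epsilon, exploit the DDQN's Q-estimates.
    Otherwise explore, preferring the A*-suggested action with
    probability guide_prob (an assumed parameter, not from the paper).
    """
    n_actions = len(q_values)
    if rng.random() < epsilon:
        # Exploration step: follow A*'s directional cue most of the
        # time, otherwise pick a uniformly random action.
        if rng.random() < guide_prob:
            return astar_action
        return rng.randrange(n_actions)
    # Exploitation step: greedy action w.r.t. the Q-value estimates.
    return max(range(n_actions), key=lambda a: q_values[a])
```

For example, with `epsilon=0` the function always returns the greedy action, while with `epsilon=1` and `guide_prob=1` it always follows A*'s suggestion; intermediate settings let the A* cue steer exploration without overriding learned Q-values.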