A Hybrid Planning–Learning Framework for Autonomous Navigation with Dynamic Obstacles


Creative Commons License

Arslan Öztürk H., YAVUZ S., Koç Ç. K.

Applied Sciences (Switzerland), vol. 16, no. 6, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 16 Issue: 6
  • Publication Date: 2026
  • DOI: 10.3390/app16062961
  • Journal Name: Applied Sciences (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Keywords: autonomous navigation, deep reinforcement learning, double deep q-networks, dynamic obstacle avoidance
  • Affiliated with Yıldız Teknik Üniversitesi: Yes

Abstract

Traditional navigation methods work well in known, static environments but degrade in real-world settings with dynamic and unpredictable obstacles. This paper presents Double Deep Q-Network with A* guidance (DDQNA), a hybrid navigation algorithm that enables an agent to traverse mazes containing static and dynamic obstacles while maintaining a low probability of collision. DDQNA combines A* guidance with Double Deep Q-Network (DDQN) learning under an ε-greedy policy, and it introduces a redesigned reward function and an improved action-selection mechanism to better exploit A*'s directional cues during training. We evaluate DDQNA in a custom Pygame simulation across 11 environments of increasing difficulty. Experimental results show that DDQNA consistently outperforms the standard DDQN and other state-of-the-art reinforcement learning baselines, achieving higher goal-reaching rates, fewer visited cells, shorter computation times, and higher cumulative rewards. These results indicate that DDQNA provides both effective navigation and computational efficiency in complex environments with static and dynamic obstacles.
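To illustrate the kind of A*-guided ε-greedy action selection the abstract describes, the following is a minimal sketch. All names and the `guidance_prob` parameter are illustrative assumptions, not taken from the paper: the idea is simply that, when the agent explores (with probability ε), it can follow the A* directional cue instead of acting uniformly at random.

```python
import random

# Hypothetical sketch of A*-guided epsilon-greedy action selection.
# The action set, q_values layout, and guidance_prob are illustrative
# assumptions; the paper's actual mechanism may differ.
ACTIONS = ["up", "down", "left", "right"]

def select_action(q_values, astar_hint, epsilon, guidance_prob, rng=random):
    """Pick an action for the current state.

    With probability 1 - epsilon: exploit (argmax over Q-values).
    With probability epsilon: explore, following the A* directional
    cue with probability guidance_prob, otherwise acting uniformly
    at random over the action set.
    """
    if rng.random() >= epsilon:
        # Exploitation: greedy action with respect to the Q-estimates.
        return max(ACTIONS, key=lambda a: q_values[a])
    if rng.random() < guidance_prob:
        # Guided exploration: step in the direction the A* path suggests.
        return astar_hint
    # Unguided exploration: uniform random action.
    return rng.choice(ACTIONS)
```

For example, with `epsilon=0.0` the agent always exploits the Q-values, while `epsilon=1.0, guidance_prob=1.0` makes it follow the A* cue on every step; intermediate settings trade off the two, which is the design point the abstract attributes to the improved action-selection mechanism.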