A Compact DQN Model for Mobile Agents with Collision Avoidance

Authors

M. Kamola

Keywords: Q-learning, DQN, reinforcement learning

Abstract

This paper presents a complete simulation and reinforcement learning solution for training mobile agents to follow a route while avoiding collisions with one another. The aim was to achieve this functionality with limited resources, both in terms of the model input and the model size itself. The designed models are shown to keep the agents safely on the track. The collision-avoidance skills the agents develop in the course of training are primitive but rational, and the small size of the model allows fast training with limited computational resources.
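For orientation, the sketch below shows what a compact DQN Q-network for such an agent could look like. It is a minimal illustration assuming PyTorch; the input features, layer widths, and number of discrete actions are assumptions made for the example and are not the architecture reported in the paper.

```python
# Illustrative sketch of a compact DQN Q-network (PyTorch assumed).
# Input size, hidden width, and action count are placeholder assumptions.
import torch
import torch.nn as nn


class CompactQNetwork(nn.Module):
    def __init__(self, n_inputs: int = 8, n_actions: int = 3, hidden: int = 32):
        super().__init__()
        # Two small hidden layers keep the parameter count low,
        # which is what makes training cheap on limited hardware.
        self.net = nn.Sequential(
            nn.Linear(n_inputs, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one Q-value per discrete action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


# Greedy action selection from the learned Q-values.
q_net = CompactQNetwork()
state = torch.zeros(1, 8)            # placeholder observation vector
action = q_net(state).argmax(dim=1)  # index of the highest-valued action
```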


Published
22.01.2024
Issue
17(2)
Section
Articles

How to Cite

Kamola, M. (2024). A Compact DQN Model for Mobile Agents with Collision Avoidance. Journal of Automation, Mobile Robotics and Intelligent Systems, 17(2), 28-35. https://doi.org/10.14313/JAMRIS/2-2023/13