In this paper, we investigate the application of Nash equilibrium strategies to enhance coordination and decision-making in multi-agent systems, focusing specifically on Automated Guided Vehicle (AGV) systems. Traditional reinforcement learning methods often struggle in multi-agent environments due to the non-stationarity introduced by multiple learning agents and the complexity of coordinating their actions. To address these challenges, we propose an approach that integrates the game-theoretic principle of Nash equilibrium into existing multi-agent reinforcement learning frameworks. By incorporating Nash equilibrium considerations into the policy update mechanisms, agents can anticipate and proactively respond to the strategies of other agents. This integration reduces conflicts and improves cooperation without relying solely on reward shaping or penalties for undesirable behaviors such as collisions. Additionally, we introduce a collaboration cost into the reward function to further incentivize cooperative behavior among agents. We validate the effectiveness of our approach in a flexible manufacturing system simulated in PyBullet, using its default URDF models to create a realistic and standardized environment. Multiple AGVs operate as autonomous agents tasked with collaboratively optimizing production tasks. Experimental results demonstrate that our Nash equilibrium-based method significantly outperforms traditional algorithms, including MADDPG, NDQN, CQL, COMA, IQL, PPO, SAC, and DQN, in terms of cumulative reward, policy convergence speed, and overall system throughput.
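To make the collaboration-cost idea concrete, the following is a minimal sketch of how a pairwise proximity penalty might be folded into each AGV's reward before a policy update. The function name, the collaboration weight, and the safe-distance threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shaped_reward(base_reward, agent_pos, other_positions,
                  collab_weight=0.1, safe_dist=1.0):
    """Subtract a collaboration cost from one AGV's base reward.

    The penalty grows linearly as the agent crowds a neighbor below
    `safe_dist`; `collab_weight` and `safe_dist` are illustrative
    hyperparameters, not values reported in the paper.
    """
    penalty = 0.0
    for pos in other_positions:
        gap = np.linalg.norm(agent_pos - pos)
        if gap < safe_dist:
            penalty += (safe_dist - gap) / safe_dist
    return base_reward - collab_weight * penalty

# Example: one neighbor inside the safe distance, one well outside it.
r = shaped_reward(1.0,
                  np.array([0.0, 0.0]),
                  [np.array([0.5, 0.0]), np.array([3.0, 1.0])])
print(r)  # 1.0 - 0.1 * 0.5 = 0.95
```

Only the nearby neighbor contributes a penalty, so crowding is discouraged while distant agents leave the reward untouched.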
Table of Contents
Abstract
I. INTRODUCTION
II. METHOD
  A. Multi-Agent Markov Decision Process (MAMDP)
  B. Nash-MADDPG Algorithm
III. RESULT AND DISCUSSION
  A. Experimental Setup
  B. Performance Evaluation
IV. CONCLUSION
REFERENCES
Authors
Chang-Hoon Ji, Department of Artificial Intelligence, Korea University
Ji-Hye Oh, Department of Artificial Intelligence, Korea University
Jun-Mo Kim, Department of Artificial Intelligence, Korea University
Soyeon Bak, Department of Artificial Intelligence, Korea University
Yu-Kyum Kang, Department of Artificial Intelligence, Korea University
Tae-Eui Kam (Corresponding Author), Department of Artificial Intelligence, Korea University