TY - JOUR
T1 - Comparison between reinforcement learning methods with different goal selections in multi-agent cooperation
AU - Uwano, Fumito
AU - Takadama, Keiki
N1 - Funding Information:
This work was supported by JSPS KAKENHI Grant Number JP17J08724
PY - 2017/9
Y1 - 2017/9
N2 - This study discusses important factors for zero-communication multi-agent cooperation by comparing two modified reinforcement learning methods. The two learning methods used for comparison were assigned different goal selections for multi-agent cooperation tasks. The first method, called Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, and then the agent closest to a goal is directed to that goal. The second method, called Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process; if the agents come into conflict, the agent closest to the goal learns to reach the next-closest goal. To compare the two methods, we designed experiments that adjusted the following maze factors: (1) the locations of the start points and goals; (2) the number of agents; and (3) the size of the maze. The intensive simulations performed on the maze problem for the agent cooperation task revealed that both methods successfully enabled the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents changed. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents in conflict cooperate with each other.
AB - This study discusses important factors for zero-communication multi-agent cooperation by comparing two modified reinforcement learning methods. The two learning methods used for comparison were assigned different goal selections for multi-agent cooperation tasks. The first method, called Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, and then the agent closest to a goal is directed to that goal. The second method, called Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process; if the agents come into conflict, the agent closest to the goal learns to reach the next-closest goal. To compare the two methods, we designed experiments that adjusted the following maze factors: (1) the locations of the start points and goals; (2) the number of agents; and (3) the size of the maze. The intensive simulations performed on the maze problem for the agent cooperation task revealed that both methods successfully enabled the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents changed. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents in conflict cooperate with each other.
KW - Cooperation
KW - Internal reward
KW - Multi-agent system
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85032838369&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85032838369&partnerID=8YFLogxK
U2 - 10.20965/jaciii.2017.p0917
DO - 10.20965/jaciii.2017.p0917
M3 - Article
AN - SCOPUS:85032838369
SN - 1343-0130
VL - 21
SP - 917
EP - 929
JO - Journal of Advanced Computational Intelligence and Intelligent Informatics
JF - Journal of Advanced Computational Intelligence and Intelligent Informatics
IS - 5
ER -