We propose a learning and negotiation method to enhance divisional cooperation and demonstrate its flexibility in adapting to environmental changes in the context of the multi-agent cooperative problem. We now have access to a vast array of information, and everything has become more closely connected; however, this also makes the tasks and problems in such environments more complicated. In particular, fast decision-making and flexible responses to environmental changes are often required. To meet these requirements, multi-agent systems have attracted interest, but how multiple agents should cooperate remains a challenging issue because of the computational cost, the complexity of the environment, and the sophisticated interactions between agents. In this work, we address the continuous cooperative patrol problem, which requires cooperation based on high autonomy, and propose an autonomous learning method with simple negotiation that enhances divisional cooperation for efficient work. We also investigate how this method provides the flexibility to adapt to change. We experimentally show that agents using our method generate several types of role sharing in a bottom-up manner for effective and flexible divisional cooperation. The results also show that agents using our method appropriately change their roles under different environmental change scenarios, enhancing overall efficiency and flexibility.
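The abstract does not spell out the learning or negotiation procedure, so the following is only a minimal, hypothetical sketch of the general setting it describes: a continuous cooperative patrol task in which agents learn region preferences from observed rewards and resolve conflicting targets through a simple claim-based negotiation, so that a division of labour can emerge bottom-up. All names, parameters, and the value-update rule are assumptions for illustration, not the authors' method.

```python
# Toy sketch (hypothetical, not the paper's algorithm): agents patrol regions where
# "work" accumulates over time, learn per-region value estimates from the work they
# clear, and negotiate targets so that no two agents patrol the same region at once.
import random

NUM_REGIONS = 6   # patrol regions (assumed)
NUM_AGENTS = 3    # patrolling agents (assumed)
STEPS = 500
ALPHA = 0.1       # learning rate for value estimates (assumed)
EPSILON = 0.1     # exploration probability (assumed)
GROWTH = 1.0      # rate at which work accumulates per region (assumed)

random.seed(0)

# Pending work in each region; a visit clears it and yields that much reward.
work = [0.0] * NUM_REGIONS
# Each agent's own learned estimate of how rewarding each region is.
values = [[0.0] * NUM_REGIONS for _ in range(NUM_AGENTS)]


def negotiate_targets(values):
    """Each agent proposes its most valued region; when proposals collide, the
    agent with the higher estimate keeps the claim and the other re-proposes
    among the remaining regions (a simple, decentralised negotiation)."""
    unassigned = set(range(NUM_AGENTS))
    taken = {}       # region -> claiming agent
    assignment = {}  # agent -> region
    while unassigned:
        agent = unassigned.pop()
        if random.random() < EPSILON:
            # Occasional exploration: propose regions in random order.
            ranked = random.sample(range(NUM_REGIONS), NUM_REGIONS)
        else:
            ranked = sorted(range(NUM_REGIONS),
                            key=lambda r: values[agent][r], reverse=True)
        for region in ranked:
            holder = taken.get(region)
            if holder is None:
                taken[region] = agent
                assignment[agent] = region
                break
            if values[agent][region] > values[holder][region]:
                # Win the contested region; the previous holder re-proposes.
                taken[region] = agent
                assignment[agent] = region
                del assignment[holder]
                unassigned.add(holder)
                break
    return assignment


for step in range(STEPS):
    # Work accumulates continuously in every region.
    for region in range(NUM_REGIONS):
        work[region] += GROWTH * random.random()
    # Agents negotiate distinct targets, visit them, and learn from the reward.
    for agent, region in negotiate_targets(values).items():
        reward = work[region]
        work[region] = 0.0
        values[agent][region] += ALPHA * (reward - values[agent][region])

# Inspect the learned preferences: each agent tends to favour its own subset of
# regions, i.e. role sharing has emerged without any central assignment.
for agent in range(NUM_AGENTS):
    best = max(range(NUM_REGIONS), key=lambda r: values[agent][r])
    print(f"agent {agent}: preferred region {best}, "
          f"values {[round(v, 2) for v in values[agent]]}")
```

In this toy setting the negotiation only resolves target conflicts, while the learned value estimates determine which regions each agent keeps claiming; changing the work-growth pattern mid-run would shift the estimates and hence the emergent roles, loosely mirroring the adaptation to environmental change discussed in the abstract.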