The number of discussion rounds and the harmony degree of the decision makers are two crucial efficiency measures to consider when designing the consensus-reaching process for group decision-making problems. Adjusting the feedback parameter and the importance weights of the decision makers in the recommendation mechanism strongly affects these efficiency measures. This work proposes novel and efficient reinforcement learning-based adjustment mechanisms to address the tradeoff between the aforementioned measures. To employ these adjustment mechanisms, we extract the state-transition dynamics from consensus models based on distributed trust functions and $Z$-numbers in order to convert the decision environment into a Markov decision process. Two independent reinforcement learning agents are then trained via a deep deterministic policy gradient algorithm to adjust the feedback parameter and the importance weights of the decision makers. The first agent is trained to reduce the number of discussion rounds while ensuring the highest possible level of harmony degree among the decision makers. The second agent merely speeds up the consensus-reaching process by adjusting the importance weights of the decision makers. Various experiments are designed to verify the applicability and scalability of the proposed feedback and weight-adjustment mechanisms in different decision environments.
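To illustrate the idea of casting the consensus-reaching process as a Markov decision process, the following minimal Python sketch models the state as the decision makers' opinions, the action as the feedback parameter, and the reward as a tradeoff between the number of rounds and the final harmony degree. The opinion-update rule, reward shaping, and all names (`ConsensusEnv`, thresholds, the constant action) are illustrative assumptions, not the authors' trust-function or $Z$-number model, and the DDPG training loop itself is omitted.

```python
import numpy as np

# Hypothetical sketch: consensus reaching as a Markov decision process.
# State  = vector of decision makers' opinions
# Action = feedback parameter in (0, 1)
# Reward = small penalty per discussion round, plus the harmony degree at termination
class ConsensusEnv:
    def __init__(self, n_experts=5, consensus_threshold=0.9, max_rounds=20, seed=0):
        self.n_experts = n_experts
        self.consensus_threshold = consensus_threshold
        self.max_rounds = max_rounds
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.round = 0
        self.opinions = self.rng.uniform(0.0, 1.0, self.n_experts)  # initial opinions
        return self.opinions.copy()

    def _harmony(self):
        # 1 minus the mean absolute deviation from the collective opinion.
        collective = self.opinions.mean()
        return 1.0 - np.abs(self.opinions - collective).mean()

    def step(self, feedback_param):
        # Feedback mechanism (assumed): each expert moves a fraction of the way
        # toward the collective opinion, controlled by the feedback parameter.
        collective = self.opinions.mean()
        self.opinions += feedback_param * (collective - self.opinions)
        self.round += 1

        harmony = self._harmony()
        done = harmony >= self.consensus_threshold or self.round >= self.max_rounds
        reward = -0.1 + (harmony if done else 0.0)  # fewer rounds, higher final harmony
        return self.opinions.copy(), reward, done, {"harmony": harmony}

# Rollout with a fixed feedback parameter, standing in for a trained DDPG policy.
env = ConsensusEnv()
state = env.reset()
done = False
while not done:
    action = 0.3  # a constant action; a learned agent would output this instead
    state, reward, done, info = env.step(action)
print(f"rounds used: {env.round}, final harmony: {info['harmony']:.3f}")
```

In this sketch a single agent controls only the feedback parameter; the second agent described above would act analogously on the importance weights, with its reward driven solely by the number of rounds.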