With the growing demand for green, short life-cycle products, advanced energy-aware process planning (AEPP) becomes critical. A major limitation of existing methods is their poor resistance to the perturbations encountered in advanced machining systems. Therefore, a graph convolutional reinforcement learning (GCRL) method is proposed to overcome this limitation. In this method, a graph convolutional policy network is trained to rapidly adapt learned commonalities to specific tasks. Unlike algorithms that fix decision variables before optimization, this method employs graph generation to represent AEPP while accounting for the flexibilities of operations, machines, and cutting tools. The problem is reformulated as a novel Markov decision process (MDP) that describes the dynamic generation procedure of process plans. A graph convolutional network (GCN) is concurrently used to perform graph embedding and compress the topology of input graphs. Additionally, reinforcement learning (RL) is used to achieve robust and intuitive learning for process planning. To improve the adaptation performance of the proposed GCRL, a two-phase multitask training strategy is adopted. Learning efficiency is improved because agents can incorporate both intertask similarities and task-specific rules. A comprehensive case study, including energy characteristics and algorithm performance analyses, is also performed to validate the developed method.
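To make the described pipeline concrete, the sketch below shows one possible way a graph convolutional policy could embed a partial process-plan graph and select the next operation as an MDP action, trained with a policy-gradient update. This is a minimal illustration under assumed names and shapes (GCNPolicy, node_dim, the toy adjacency and reward), not the authors' implementation or their exact architecture.

```python
# Minimal sketch (not the paper's implementation) of a graph convolutional
# policy for sequential process-plan generation, framed as one MDP step.
# Class and parameter names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNPolicy(nn.Module):
    """Embeds the partial process-plan graph with a simple two-layer GCN and
    scores candidate operation/machine/tool assignments (the MDP actions)."""

    def __init__(self, node_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.gcn1 = nn.Linear(node_dim, hidden_dim)    # first graph-conv weight
        self.gcn2 = nn.Linear(hidden_dim, hidden_dim)  # second graph-conv weight
        self.score = nn.Linear(hidden_dim, 1)          # per-candidate action logit

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A + I) D^-1/2
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).clamp(min=1e-6).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        h = F.relu(self.gcn1(a_norm @ x))   # message passing, layer 1
        h = F.relu(self.gcn2(a_norm @ h))   # message passing, layer 2
        return self.score(h).squeeze(-1)    # one logit per candidate node


# One REINFORCE-style update on a toy state: 6 candidate nodes, 8 features each.
policy = GCNPolicy(node_dim=8)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

x = torch.randn(6, 8)                       # node features of the partial plan graph
adj = (torch.rand(6, 6) > 0.5).float()      # placeholder precedence/flexibility edges
adj = ((adj + adj.t()) > 0).float()         # symmetrize the toy adjacency

logits = policy(x, adj)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()                      # pick the next operation to schedule
reward = torch.tensor(-1.0)                 # e.g., negative energy cost of this step
loss = -dist.log_prob(action) * reward      # policy-gradient surrogate loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this framing, each MDP state is the partially generated process-plan graph, each action adds or assigns one operation/machine/tool node, and the reward could reflect the energy characteristics the paper targets; the two-phase multitask training strategy would wrap this update across related planning tasks.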