Evaluating the performance of players in a dynamic competition is vital for effective sports coaching. However, quantitative evaluation of players in racket sports is difficult because performance arises from the integration of complex tactical and technical (i.e., whole-body movement) behaviors. In this study, we propose a new evaluation method for racket sports based on deep reinforcement learning, which analyzes the motion of a player in detail rather than considering only the outcomes (i.e., scores). Our method uses historical data containing information on the tactical and technical performance of players to learn the next-score probability as a Q-function, which is then used to value the actions of the players. We leverage a long short-term memory (LSTM) model to learn the Q-function, taking as input the poses of the players and the position of the shuttlecock, identified by the AlphaPose and TrackNet algorithms, respectively. We verified our approach by comparison against various baselines and demonstrated its effectiveness through use cases analyzing the performance of top badminton players in world-class events.
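The abstract does not specify an implementation, but a minimal sketch may help clarify the architecture it describes: an LSTM that maps a rally's sequence of per-frame features (player poses and shuttlecock position) to a next-score probability serving as the Q-function. The sketch below uses PyTorch; the class name `NextScoreQNetwork`, the feature and hidden dimensions, and the sigmoid output head are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' implementation) of an LSTM
# Q-function for next-score probability. Per-frame features would be
# concatenated player pose keypoints (e.g., from AlphaPose) and the
# 2D shuttlecock position (e.g., from TrackNet); dimensions assumed.
import torch
import torch.nn as nn

class NextScoreQNetwork(nn.Module):
    def __init__(self, feature_dim: int = 54, hidden_dim: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim) sequence of rally states.
        out, _ = self.lstm(x)
        # Q-value per time step: probability that the acting player
        # wins the next point, squashed to (0, 1) with a sigmoid.
        return torch.sigmoid(self.head(out)).squeeze(-1)

# Usage: score a batch of 2 rallies, each 100 frames long.
model = NextScoreQNetwork()
rallies = torch.randn(2, 100, 54)
q_values = model(rallies)  # shape (2, 100)
```

Under this reading, the per-step Q-values can be used to credit or penalize individual actions within a rally, which is what allows the method to evaluate motion in detail rather than only the final score.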