Abstract This paper proposes the design and implementation of a model-free tire slip control for a fast and highly nonlinear Anti-lock Braking System (ABS). A reinforcement Q-learning optimal control approach is embedded in a batch neural-fitted scheme that uses two neural networks to approximate the value function and the controller, respectively. The transition samples required for learning high-performance control can be collected by interacting with the process either online, by exploiting the current-iteration controller (or policy) under an ε-greedy exploration strategy, or by using data collected under any other controller capable of ensuring efficient exploration of the state-action space. Both approaches are highlighted in the paper. Fortunately, the ABS process is well suited to this type of learning by interaction because it does not need an initial stabilizing controller. The validation case studies conducted on a real laboratory setup reveal that high control system performance can be achieved using the proposed approaches. Insightful comments on the observed control behavior are offered, along with performance comparisons against several types of model-based and model-free controllers, including a relay controller, a model-based optimal PI controller, an original model-free neural-network state-feedback VRFT controller, and a model-free neural-network adaptive actor-critic controller. With the ability to improve control performance starting from different supervisory controllers or to learn high-performance controllers from scratch, the proposed Q-learning optimal control approach demonstrates its performance over a wide operating range and is therefore recommended for industrial application to ABS.
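To make the described scheme concrete, the sketch below illustrates batch neural-fitted Q-learning with ε-greedy data collection and two neural approximators (a critic for the Q-function and a separate controller network), in the spirit of the abstract. The slip dynamics, network sizes, action grid, and all hyper-parameters are illustrative assumptions for a toy example, not the paper's laboratory ABS model or reported settings.

```python
# Minimal sketch of batch neural-fitted Q-learning for slip control.
# Everything below (dynamics, reward, hyper-parameters) is assumed for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
GAMMA, EPS, LAMBDA_REF = 0.95, 0.2, 0.1          # discount, exploration rate, target slip (assumed)
ACTIONS = np.linspace(0.0, 1.0, 11)              # discretised braking commands (assumed grid)

def step(slip, u):
    """Toy first-order slip dynamics with quadratic tracking cost (assumed)."""
    next_slip = np.clip(slip + 0.05 * (u - 2.0 * slip) + 0.01 * rng.standard_normal(), 0.0, 1.0)
    reward = -(next_slip - LAMBDA_REF) ** 2
    return next_slip, reward

def collect(q_net, n_steps=2000):
    """Gather transitions under an epsilon-greedy behaviour policy."""
    data, s = [], rng.uniform(0.0, 1.0)
    for _ in range(n_steps):
        if q_net is None or rng.random() < EPS:
            u = rng.choice(ACTIONS)               # explore
        else:
            q = q_net.predict(np.column_stack([np.full_like(ACTIONS, s), ACTIONS]))
            u = ACTIONS[np.argmax(q)]             # exploit current greedy policy
        s2, r = step(s, u)
        data.append((s, u, r, s2))
        s = s2
    return data

# Batch fitted Q-iteration: the critic is refitted on Bellman targets each sweep.
q_net = None
data = collect(q_net)
S, U, R, S2 = map(np.array, zip(*data))
for _ in range(20):
    if q_net is None:
        targets = R                               # first sweep: myopic targets
    else:
        q_next = np.stack([q_net.predict(np.column_stack([S2, np.full_like(S2, a)]))
                           for a in ACTIONS], axis=1)
        targets = R + GAMMA * q_next.max(axis=1)  # Bellman backup over the action grid
    q_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    q_net.fit(np.column_stack([S, U]), targets)

# Fit the controller network to the greedy actions implied by the final critic.
q_all = np.stack([q_net.predict(np.column_stack([S, np.full_like(S, a)])) for a in ACTIONS], axis=1)
actor = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
actor.fit(S.reshape(-1, 1), ACTIONS[q_all.argmax(axis=1)])
print("controller output at slip 0.3:", actor.predict([[0.3]]))
```

The same loop covers both data-collection options mentioned above: replacing `collect` with transitions logged under any sufficiently exploratory external controller leaves the batch fitting step unchanged.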