Optimization of market making strategies is a vital issue for participants in securities markets. Traditional strategies are mostly designed manually, with orders issued mechanically according to rules based on predefined market conditions. On one hand, market conditions cannot be well represented by arbitrarily defined indicators; on the other hand, rule-based strategies cannot fully capture the relations between market conditions and the strategies’ actions. It is therefore worthwhile to investigate how a deep reinforcement learning model can address these issues. In this paper, we propose an end-to-end deep reinforcement learning market making model, Deep Reinforcement Learning Market Making. It exploits a long short-term memory network to extract temporal patterns of the market directly from limit order books, and it learns state-action relations via a reinforcement learning approach. To control inventory risk and information asymmetry, a deep Q-network is introduced to adaptively select different action subsets and to train the market making agent according to the inventory state. Experiments are conducted on a six-month Level-2 data set of 10 stocks from the Shanghai Stock Exchange in China. Our model is compared with a conventional market making baseline and a state-of-the-art market making model. Experimental results show that our approach outperforms the benchmarks over the 10 stocks by at least 10.63%.
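To make the described architecture concrete, the sketch below (in PyTorch) shows one way an LSTM over a window of limit-order-book snapshots can feed a Q-value head, with an inventory-dependent action mask restricting the available action subset. This is not the authors' released code: the layer sizes, the number of quoting actions, and the masking rule are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class MarketMakingDQN(nn.Module):
    """Minimal sketch of an LSTM + deep Q-network market making agent.

    The LSTM summarizes a window of limit-order-book (LOB) snapshots;
    the fully connected head outputs Q-values over quoting actions.
    All dimensions are illustrative, not the paper's configuration.
    """

    def __init__(self, lob_features=40, hidden=64, n_actions=9):
        super().__init__()
        self.lstm = nn.LSTM(lob_features, hidden, batch_first=True)
        self.q_head = nn.Sequential(
            nn.Linear(hidden + 1, hidden),  # +1 for normalized inventory
            nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, lob_window, inventory, action_mask=None):
        # lob_window: (batch, time, lob_features); inventory: (batch, 1)
        _, (h, _) = self.lstm(lob_window)
        q = self.q_head(torch.cat([h[-1], inventory], dim=1))
        if action_mask is not None:
            # Restrict the agent to an inventory-dependent action subset
            # by pushing masked actions' Q-values to -inf before argmax.
            q = q.masked_fill(~action_mask, float("-inf"))
        return q


# Hypothetical usage: choose a quoting action for one state.
model = MarketMakingDQN()
lob = torch.randn(1, 50, 40)           # 50 most recent LOB snapshots
inv = torch.tensor([[0.3]])            # normalized inventory level
mask = torch.ones(1, 9, dtype=torch.bool)
mask[0, -2:] = False                   # e.g. forbid inventory-increasing quotes
action = model(lob, inv, mask).argmax(dim=1)
```

Masking Q-values in this way is one common approach to constraining a DQN's action set by state; how the paper itself defines and switches between inventory-dependent action subsets may differ.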