In this paper, we study optimal singular controls for stochastic recursive systems in which the control has two components: a regular control and a singular control. Under certain assumptions, we establish the dynamic programming principle for this class of optimal singular control problems and prove that the value function is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman inequality within a given class of bounded, continuous functions. Finally, an example is given for illustration.
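For orientation, the Hamilton-Jacobi-Bellman inequality mentioned above typically takes the following generic form for mixed regular/singular control problems. This is only an illustrative sketch under standard assumptions; the coefficients $b$, $\sigma$, the aggregator $f$, and the singular-control data $G_i$, $K_i$ are placeholders, not the paper's actual notation or statement.

```latex
% Generic HJB variational inequality for a mixed regular/singular
% control problem (illustrative sketch; b, sigma, f, G_i, K_i are
% assumed placeholders, not the paper's exact operators).
\[
\min\Big\{
  -\partial_t V(t,x)
  - \sup_{u \in U}\Big[
      \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top}(t,x,u)\,D^{2}V\big)
      + b(t,x,u)^{\top} D V
      + f\big(t,x,V,\sigma^{\top} D V,u\big)
    \Big],\;
  \min_{1\le i\le m}\big(K_i - G_i^{\top} D V\big)
\Big\} = 0.
\]
```

In the recursive setting, the aggregator $f$ may depend on $V$ and $\sigma^{\top}DV$ because the cost functional is defined through a backward stochastic differential equation (recursive utility), which is what distinguishes this inequality from the classical singular-control HJB.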