Published in SIAM Journal on Control and Optimization, Volume 45, Issue 6, January 22, 2007, pages 2169-2206.
NOTE: At the time of publication, the author Kevin Ross was not yet affiliated with Cal Poly.
The definitive version is available at https://doi.org/10.1137/050640515.
We consider a singular stochastic control problem with state constraints that arises in problems of optimal consumption and investment under transaction costs. Numerical approximations to the value function, obtained via the Markov chain approximation method of Kushner and Dupuis, are studied. The main result of the paper shows that the value function of the Markov decision problem (MDP) corresponding to the approximating controlled Markov chain converges to that of the original stochastic control problem as the various parameters in the approximation approach suitable limits. All our convergence arguments are probabilistic; the main assumption we make is that the value function is finite and continuous. In particular, uniqueness of solutions of the associated Hamilton–Jacobi–Bellman (HJB) equations is neither needed nor available (in the generality in which the problem is considered). Specific features of the problem that make the convergence analysis nontrivial include the unboundedness of the state space, the control space, and the cost function; degeneracies in the dynamics; mixed (Dirichlet–Neumann) boundary conditions; and the presence of both singular and absolutely continuous controls in the dynamics. Finally, schemes for computing the value function and optimal control policies for the MDP are presented and illustrated with a numerical study.
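As a rough illustration of the last point, computing the value function and an optimal policy for a finite MDP, the sketch below runs standard value iteration on a hypothetical two-state, two-action discounted MDP. The transition kernels `P`, costs `c`, and discount factor `beta` are invented for illustration; this is not the approximating chain constructed in the paper, only a minimal example of the kind of dynamic-programming computation involved.

```python
import numpy as np

def value_iteration(P, c, beta=0.9, tol=1e-8, max_iter=10_000):
    """Value iteration for a finite-state, finite-action discounted MDP.

    P : (A, S, S) array, P[a, i, j] = transition probability i -> j under action a
    c : (A, S) array, per-stage cost of action a in state i
    Returns the value function V (shape (S,)) and a greedy policy (shape (S,)).
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman operator: minimize cost plus discounted expected continuation value.
        Q = c + beta * (P @ V)          # shape (A, S)
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmin(axis=0)
    return V, policy

# Tiny hypothetical example: two states, two actions.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 2.0],
              [1.5, 0.5]])
V, policy = value_iteration(P, c)
print(V, policy)
```

Since the discount factor is strictly less than one, the Bellman operator is a contraction and the iteration converges geometrically to the unique fixed point; the greedy policy with respect to that fixed point is optimal for the discounted problem.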