Recommended Citation
Published in The Annals of Probability, Volume 9, Issue 2 (1981), pages 293-301. Copyright © 1981 Institute of Mathematical Statistics. The definitive version is available at http://www.jstor.org/stable/2243461.
NOTE: At the time of publication, the author Theodore P. Hill was not yet affiliated with Cal Poly.
Abstract
By a decision process is meant a pair (X,Γ), where X is an arbitrary set (the state space), and Γ associates to each point x in X an arbitrary nonempty collection of discrete probability measures (actions) on X. In a decision process with nonnegative costs depending on the current state, the action taken, and the following state, there is always available a Markov strategy which uniformly (nearly) minimizes the expected total cost. If the costs are strictly positive and depend only on the current state, there is even a stationary strategy with the same property. In a decision process with a fixed goal g in X, there is always a stationary strategy which uniformly (nearly) minimizes the expected time to the goal, and, if X is countable, such a stationary strategy exists which also (nearly) maximizes the probability of reaching the goal.
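For readers of this record, the phrase "uniformly (nearly) minimizes" can be read in the standard epsilon-optimality sense; the following display is a minimal sketch under that assumption, and the symbols V, V_sigma, c, and epsilon are our shorthand, not notation taken from the paper itself.

Under a strategy $\sigma$ started at $x$, with cost $c$ depending on the current state, the action taken, and the following state, write
\[
V_\sigma(x) \;=\; E_x^{\sigma}\!\left[\sum_{n=0}^{\infty} c(x_n,\gamma_n,x_{n+1})\right],
\qquad
V(x) \;=\; \inf_{\sigma} V_\sigma(x).
\]
A strategy $\sigma^{*}$ is then uniformly $\varepsilon$-optimal if
\[
V_{\sigma^{*}}(x) \;\le\; V(x) + \varepsilon \quad \text{for every } x \in X .
\]
The force of the results stated above is that $\sigma^{*}$ may be chosen Markov (or, under the stated hypotheses, stationary), with the same strategy working simultaneously for every initial state $x$.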
URL: https://digitalcommons.calpoly.edu/rgp_rsr/64