
Mathematics, 2012

The Multi-Armed Bandit, with Constraints


Abstract:

The early sections of this paper analyze the Markov decision model known as the multi-armed bandit, under the assumption that the decision maker's utility function is either linear or exponential. The analysis includes efficient procedures for computing the expected utility of a priority policy and for identifying an optimal priority policy. The methodology in these sections is novel, building on the use of elementary row operations. The later sections adapt the analysis to accommodate constraints that link the bandits.
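As a rough illustration of what a priority policy is (the paper's exact row-operation computations are not reproduced here), the sketch below simulates a hypothetical two-armed bandit in Python. Each arm is a small Markov chain; a priority policy ranks every (arm, state) pair once, up front, and always plays the top-ranked arm given the current states. The transition matrices, rewards, and priorities are invented for illustration, and a Monte Carlo estimate stands in for the paper's exact computation of expected utility under linear utility.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: two bandit arms, each a two-state Markov chain.
# P[a] is the transition matrix of arm a; r[a] holds its state rewards.
P = [
    np.array([[0.7, 0.3],
              [0.4, 0.6]]),
    np.array([[0.5, 0.5],
              [0.2, 0.8]]),
]
r = [np.array([1.0, 0.2]), np.array([0.6, 0.9])]

# A priority policy assigns a fixed rank to every (arm, state) pair
# and always plays the arm whose current state has the highest rank.
priority = {(0, 0): 4, (0, 1): 1, (1, 0): 2, (1, 1): 3}

def simulate(priority, P, r, beta=0.95, horizon=500, n_runs=2000):
    """Monte Carlo estimate of the expected discounted reward (linear
    utility) earned by following a given priority policy."""
    total = 0.0
    for _ in range(n_runs):
        states = [0, 0]            # current state of each arm
        discount, value = 1.0, 0.0
        for _ in range(horizon):
            # Play the arm whose (arm, state) pair has top priority.
            a = max(range(len(P)), key=lambda i: priority[(i, states[i])])
            value += discount * r[a][states[a]]
            # Only the played arm's state evolves; the rest are frozen.
            states[a] = rng.choice(len(r[a]), p=P[a][states[a]])
            discount *= beta
        total += value
    return total / n_runs

print(f"Estimated expected discounted reward: {simulate(priority, P, r):.3f}")

In the paper itself this expectation is computed exactly, via elementary row operations, rather than estimated by simulation; the sketch is only meant to make the notion of a priority policy concrete.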
