%0 Journal Article
%T The Multi-Armed Bandit, with Constraints
%A Eric V. Denardo
%A Eugene A. Feinberg
%A Uriel G. Rothblum
%J Mathematics
%D 2012
%I arXiv
%X The early sections of this paper present an analysis of a Markov decision model that is known as the multi-armed bandit under the assumption that the utility function of the decision maker is either linear or exponential. The analysis includes efficient procedures for computing the expected utility associated with the use of a priority policy and for identifying a priority policy that is optimal. The methodology in these sections is novel, building on the use of elementary row operations. In the later sections of this paper, the analysis is adapted to accommodate constraints that link the bandits.
%U http://arxiv.org/abs/1203.4640v1