oalib
All listed articles are free for downloading (OA Articles)
Staged Multi-armed Bandits  [PDF]
Cem Tekin, Mihaela van der Schaar
Computer Science, 2015
Abstract: In conventional multi-armed bandits (MAB) and other reinforcement learning methods, the learner sequentially chooses actions and obtains a reward (which may be missing, delayed, or erroneous) after each action. This reward is then used by the learner to improve its future decisions. However, in numerous applications, ranging from personalized patient treatment to personalized web-based education, the learner does not obtain a reward after each action, but only after a sequence of actions is taken, intermediate feedback is observed, and a final decision is made, at which point a reward is obtained. In this paper, we introduce a new class of reinforcement learning methods that can operate in such settings. We refer to this class as staged multi-armed bandits (S-MAB). S-MAB proceeds in rounds, each composed of several stages; in each stage, the learner chooses an action and observes a feedback signal, while the reward of the selected sequence of actions is only revealed after the learner selects a stop action that ends the current round. The reward of a round depends on both the sequence of actions and the sequence of observed feedback signals. The goal of the learner is to maximize its total expected reward over all rounds by learning to choose the best sequence of actions based on the feedback it receives about these actions. First, we define an oracle benchmark, which sequentially selects the actions that maximize the expected immediate reward. This benchmark is known to be approximately optimal when the reward associated with the selected action sequence is adaptive submodular. Then, we propose our online learning algorithm, for which we prove that the regret with respect to the oracle benchmark is logarithmic in the number of rounds and linear in the number of stages.
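The round/stage interaction protocol described in this abstract can be summarized in a short sketch. The toy environment, the action set, and the fixed stage rule below are illustrative assumptions, not the S-MAB algorithm or oracle benchmark from the paper.

```python
import random

ACTIONS = ["a", "b", "c"]   # hypothetical per-stage actions
STOP = "stop"               # ends the current round
MAX_STAGES = 4

def run_round(choose_action, env_feedback, env_reward):
    """One S-MAB round: choose actions stage by stage until STOP, then receive one reward."""
    actions, feedbacks = [], []
    for stage in range(MAX_STAGES):
        a = choose_action(stage, actions, feedbacks)
        if a == STOP:
            break
        actions.append(a)
        feedbacks.append(env_feedback(a))  # intermediate feedback only, no reward yet
    return env_reward(actions, feedbacks)   # the round's reward is revealed at the end

# Toy environment: feedback is a noisy bit, and the reward counts positive feedbacks.
env_feedback = lambda a: random.random() < {"a": 0.3, "b": 0.6, "c": 0.5}[a]
env_reward = lambda acts, fbs: sum(fbs)

# Fixed stage rule standing in for a learned policy (illustrative only).
policy = lambda stage, actions, feedbacks: STOP if stage == 3 else "b"

print(run_round(policy, env_feedback, env_reward))
```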
Risk-Aversion in Multi-armed Bandits  [PDF]
Amir Sani, Alessandro Lazaric, Rémi Munos
Computer Science, 2013
Abstract: Stochastic multi-armed bandits solve the exploration-exploitation dilemma and ultimately maximize the expected reward. Nonetheless, in many practical problems, maximizing the expected reward is not the most desirable objective. In this paper, we introduce a novel setting based on the principle of risk-aversion, where the objective is to compete against the arm with the best risk-return trade-off. This setting proves to be intrinsically more difficult than the standard multi-armed bandit setting, due in part to an exploration risk which introduces a regret associated with the variability of an algorithm. Using variance as a measure of risk, we introduce two new algorithms, investigate their theoretical guarantees, and report preliminary empirical results.
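As a rough illustration of the variance-based risk criterion mentioned above, the sketch below selects arms by an optimistic empirical mean-variance index. The index form, the confidence width, and the risk-tolerance parameter rho are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import math

def mean_variance_pick(pulls, rewards, rho=1.0, t=2):
    """Pick the arm with the lowest optimistic mean-variance index.

    pulls[i] is the number of pulls of arm i and rewards[i] the list of its
    observed rewards; lower (variance - rho * mean) is better, and the
    confidence width is a conventional UCB-style choice."""
    best_arm, best_index = None, math.inf
    for i, rs in enumerate(rewards):
        if pulls[i] == 0:
            return i  # pull every arm at least once first
        mu = sum(rs) / pulls[i]
        var = sum((r - mu) ** 2 for r in rs) / pulls[i]
        width = math.sqrt(2 * math.log(max(t, 2)) / pulls[i])
        index = (var - rho * mu) - width  # optimism in the face of uncertainty
        if index < best_index:
            best_arm, best_index = i, index
    return best_arm
```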
Thompson Sampling for Budgeted Multi-armed Bandits  [PDF]
Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, Tie-Yan Liu
Computer Science, 2015
Abstract: Thompson sampling is one of the earliest randomized algorithms for multi-armed bandits (MAB). In this paper, we extend Thompson sampling to budgeted MAB, where there is a random cost for pulling an arm and the total cost is constrained by a budget. We start with the case of Bernoulli bandits, in which the random rewards (costs) of an arm are independently sampled from a Bernoulli distribution. To implement the Thompson sampling algorithm in this case, at each round we sample two numbers from the posterior distributions of the reward and cost for each arm, compute their ratio, select the arm with the maximum ratio, and then update the posterior distributions. We prove that the distribution-dependent regret bound of this algorithm is $O(\ln B)$, where $B$ denotes the budget. By introducing a Bernoulli trial, we further extend this algorithm to the setting in which the rewards (costs) are drawn from general distributions, and prove that its regret bound remains almost the same. Our simulation results demonstrate the effectiveness of the proposed algorithm.
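A minimal sketch of the Bernoulli case of this sampling rule, assuming Beta(1, 1) priors for both the reward and the cost of every arm; the arm parameters and the budget are made-up values, and this illustrates the steps described in the abstract rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Bernoulli arms: (reward probability, cost probability).
arms = [(0.5, 0.6), (0.4, 0.3), (0.7, 0.8)]
K = len(arms)
budget = 500.0

# Beta(1, 1) posteriors over each arm's reward and cost probabilities,
# stored as [alpha, beta] = [successes + 1, failures + 1].
reward_post = np.ones((K, 2))
cost_post = np.ones((K, 2))

spent, total_reward = 0.0, 0.0
while spent < budget:
    # Sample a reward and a cost for every arm from its posteriors.
    r = rng.beta(reward_post[:, 0], reward_post[:, 1])
    c = rng.beta(cost_post[:, 0], cost_post[:, 1])
    # Pull the arm with the largest sampled reward-to-cost ratio.
    i = int(np.argmax(r / np.maximum(c, 1e-12)))
    reward = rng.random() < arms[i][0]
    cost = rng.random() < arms[i][1]
    # Update the Beta posteriors with the observed Bernoulli outcomes.
    reward_post[i, 0 if reward else 1] += 1
    cost_post[i, 0 if cost else 1] += 1
    spent += cost
    total_reward += reward

print(f"total reward {total_reward:.0f} under budget {budget:.0f}")
```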
Decoupling Exploration and Exploitation in Multi-Armed Bandits  [PDF]
Orly Avner, Shie Mannor, Ohad Shamir
Computer Science, 2012
Abstract: We consider a multi-armed bandit problem where the decision maker can explore and exploit different arms at every round. The exploited arm adds to the decision maker's cumulative reward (without necessarily observing the reward), while the explored arm reveals its value. We devise algorithms for this setup and show that the dependence on the number of arms $k$ can be much better than the standard $\sqrt{k}$ dependence, depending on the behavior of the arms' reward sequences. For the important case of piecewise stationary stochastic bandits, we show a significant improvement over existing algorithms. Our algorithms are based on a non-uniform sampling policy, which we show is essential to the success of any algorithm in the adversarial setup. Finally, we show simulation results, on a setting inspired by ultra-wideband channel selection, indicating the applicability of our algorithms.
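One round of this decoupled setting might look like the sketch below: one arm is chosen to exploit (its reward is earned but not shown) and a second arm is chosen to explore (only its value is revealed). The width-proportional exploration rule is an illustrative stand-in for the paper's non-uniform sampling policy.

```python
import math
import random

def decoupled_round(means, counts, t, arms):
    """Select an (exploit, explore) pair of arms for round t.

    means[i] is the current reward estimate for arm i and counts[i] the
    number of times arm i has been explored so far."""
    exploit = max(arms, key=lambda i: means[i])   # reward is earned here, possibly unobserved
    widths = [math.sqrt(2 * math.log(t + 1) / max(counts[i], 1)) for i in arms]
    total = sum(widths)
    explore = random.choices(arms, weights=[w / total for w in widths])[0]
    return exploit, explore                       # only the explored arm's value is revealed
```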
Distributed Exploration in Multi-Armed Bandits  [PDF]
Eshcar Hillel, Zohar Karnin, Tomer Koren, Ronny Lempel, Oren Somekh
Computer Science, 2013
Abstract: We study exploration in Multi-Armed Bandits in a setting where $k$ players collaborate in order to identify an $\epsilon$-optimal arm. Our motivation comes from recent employment of bandit algorithms in computationally intensive, large-scale applications. Our results demonstrate a non-trivial tradeoff between the number of arm pulls required by each of the players, and the amount of communication between them. In particular, our main result shows that by allowing the $k$ players to communicate only once, they are able to learn $\sqrt{k}$ times faster than a single player. That is, distributing learning to $k$ players gives rise to a factor $\sqrt{k}$ parallel speed-up. We complement this result with a lower bound showing this is in general the best possible. On the other extreme, we present an algorithm that achieves the ideal factor $k$ speed-up in learning performance, with communication only logarithmic in $1/\epsilon$.
Exponentiated Gradient LINUCB for Contextual Multi-Armed Bandits  [PDF]
Djallel Bouneffouf
Computer Science, 2013
Abstract: We present Exponentiated Gradient LINUCB, an algorithm for contextual multi-armed bandits. This algorithm uses Exponentiated Gradient to find the optimal exploration parameter of LINUCB. Within a deliberately designed offline simulation framework we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms the surveyed algorithms.
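For context, the sketch below shows a standard LinUCB selection step, which makes concrete the role of the exploration parameter alpha (the quantity tuned by the exponentiated-gradient layer, per the abstract); the exponentiated-gradient weighting over candidate alpha values is not shown.

```python
import numpy as np

def linucb_select(contexts, A, b, alpha):
    """Standard LinUCB arm selection for one round.

    contexts: one d-dimensional feature vector per arm;
    A[i], b[i]: per-arm ridge-regression statistics (d x d matrix, d vector);
    alpha: exploration parameter controlling the confidence bonus."""
    scores = []
    for x, A_i, b_i in zip(contexts, A, b):
        A_inv = np.linalg.inv(A_i)
        theta = A_inv @ b_i                                 # ridge estimate of the arm's weights
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    return int(np.argmax(scores))
```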
Multiple Identifications in Multi-Armed Bandits  [PDF]
Sébastien Bubeck, Tengyao Wang, Nitin Viswanathan
Computer Science, 2012
Abstract: We study the problem of identifying the top $m$ arms in a multi-armed bandit game. Our proposed solution relies on a new algorithm based on successive rejects of the seemingly bad arms, and successive accepts of the good ones. This algorithmic contribution allows us to tackle other multiple-identification settings that were previously out of reach. In particular, we show that this idea of successive accepts and rejects applies to the multi-bandit best arm identification problem.
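A simplified sketch of the successive-accepts-and-rejects idea: in each phase, every surviving arm is sampled, then either the clearest winner is accepted or the clearest loser is rejected. The uniform per-phase allocation and the gap rule below are simplifications for illustration, not the budget schedule analyzed in the paper.

```python
import numpy as np

def sar_top_m(pull, K, m, pulls_per_phase=100):
    """Return an estimate of the top-m arms; pull(i) draws one reward of arm i."""
    sums, counts = np.zeros(K), np.zeros(K)
    active, accepted = list(range(K)), []
    while len(active) > 1:
        m_left = m - len(accepted)
        if m_left <= 0 or m_left >= len(active):
            break  # either done, or every surviving arm must be accepted
        for i in active:                                    # sample all surviving arms equally
            for _ in range(pulls_per_phase):
                sums[i] += pull(i)
                counts[i] += 1
        mean = lambda i: sums[i] / counts[i]
        order = sorted(active, key=mean, reverse=True)
        gap_accept = mean(order[0]) - mean(order[m_left])          # best vs. (m_left + 1)-th
        gap_reject = mean(order[m_left - 1]) - mean(order[-1])     # m_left-th vs. worst
        if gap_accept >= gap_reject:
            accepted.append(order[0])                       # accept the clearest winner
            active.remove(order[0])
        else:
            active.remove(order[-1])                        # reject the clearest loser
    return (accepted + active)[:m]
```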
Multi-Armed Bandits in Metric Spaces  [PDF]
Robert Kleinberg, Aleksandrs Slivkins, Eli Upfal
Computer Science, 2008
Abstract: In a multi-armed bandit problem, an online algorithm chooses from a set of strategies in a sequence of trials so as to maximize the total payoff of the chosen strategies. While the performance of bandit algorithms with a small finite strategy set is quite well understood, bandit problems with large strategy sets are still a topic of very active investigation, motivated by practical applications such as online auctions and web advertisement. The goal of such research is to identify broad and natural classes of strategy sets and payoff functions which enable the design of efficient solutions. In this work we study a very general setting for the multi-armed bandit problem in which the strategies form a metric space, and the payoff function satisfies a Lipschitz condition with respect to the metric. We refer to this problem as the "Lipschitz MAB problem". We present a complete solution for the multi-armed bandit problem in this setting. That is, for every metric space (L,X) we define an isometry invariant which bounds from below the performance of Lipschitz MAB algorithms for X, and we present an algorithm which comes arbitrarily close to meeting this bound. Furthermore, our technique gives even better results for benign payoff functions.
A Survey on Contextual Multi-armed Bandits  [PDF]
Li Zhou
Computer Science, 2015
Abstract: The nature of contextual bandits makes them suitable for many machine learning applications, such as user modeling, Internet advertising, search engines, and experiment optimization. In this survey we cover three different types of contextual bandit algorithms, and for each type we introduce several representative algorithms. We also compare the regret bounds and assumptions of these algorithms.
Bounded regret in stochastic multi-armed bandits  [PDF]
Sébastien Bubeck, Vianney Perchet, Philippe Rigollet
Computer Science, 2013
Abstract: We study the stochastic multi-armed bandit problem when one knows the value $\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the smallest positive gap $\Delta$. We propose a new randomized policy that attains a regret {\em uniformly bounded over time} in this setting. We also prove several lower bounds, which show in particular that bounded regret is not possible if one only knows $\Delta$, and that bounded regret of order $1/\Delta$ is not possible if one only knows $\mu^{(\star)}$.
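To illustrate how knowing $\mu^{(\star)}$ can help, the sketch below permanently drops an arm once its upper confidence bound falls below the known optimal mean. This elimination rule is a simple illustrative variant, not the randomized policy proposed in the paper.

```python
import math

def known_mu_star_policy(pull, K, mu_star, horizon):
    """Run a UCB-style policy that exploits a known optimal mean mu_star;
    pull(i) returns one stochastic reward of arm i."""
    sums, counts = [0.0] * K, [0] * K
    active = set(range(K))
    total = 0.0
    for t in range(1, horizon + 1):
        def ucb(i):
            if counts[i] == 0:
                return float("inf")
            return sums[i] / counts[i] + math.sqrt(2 * math.log(t + 1) / counts[i])
        i = max(active, key=ucb)          # play the most optimistic surviving arm
        r = pull(i)
        sums[i] += r
        counts[i] += 1
        total += r
        if len(active) > 1 and ucb(i) < mu_star:
            active.discard(i)             # this arm can no longer be the optimal one
    return total
```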