Exploring regret bounds for Thompson sampling using Information Theory

Thompson sampling dates back to a paper from 1933 and is still an active topic of research in many different settings. This is indicative of its far-reaching utility, which keeps both researchers and practitioners interested in the topic. So, what is Thompson sampling and why do we need it? It is easier to answer the second question first. In a number of settings we are faced with the decision of choosing an action from a set of actions, and playing the chosen action gets us a reward. This reward can be stochastic, and our objective is to find an action that maximizes our reward in the long term. This is also known as the “multi-armed bandit” problem. What makes this problem interesting is that you need to play each action a number of times to actually understand its expected reward. This is the famous exploration-exploitation tradeoff, which even humans perform when searching for the best option, for example when choosing a restaurant. A real example of the multi-armed bandit problem is movie recommendation at Netflix: Netflix needs to choose an action (a movie recommendation) that the user will like (reward). We are usually interested in an algorithm that minimizes the expected “regret” in such a setting:

$$\text{Regret}(T) = \mathbb{E}\Big[\sum_{t=1}^{T}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big]$$

where $A^*$ denotes the optimal action, $A_t$ denotes the action played by the algorithm at time $t$, and $\theta$ denotes the environment. Intuitively, regret measures the best reward I could have gotten versus the reward I actually got. The environment determines what reward we get for each action. This notion of regret can be generalized by considering a distribution over environments $P$ under which we aim to minimize regret (we aim to do well on average over different environments). This is called Bayesian regret.

$$\text{Bayesian-Regret}(T) = \mathbb{E}_{\theta \sim P}\Big[\mathbb{E}\Big[\sum_{t=1}^{T}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big]\Big]$$
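
As a quick sanity check on these definitions, consider a toy example (my own, not from the paper). Suppose $P$ puts probability $1/2$ on each of two environments: under $\theta_1$ the two available actions have expected rewards $(0.9, 0.1)$, and under $\theta_2$ they have $(0.1, 0.9)$. A policy that always plays action 1 incurs zero regret under $\theta_1$ and regret $0.8$ per step under $\theta_2$, so

$$\text{Bayesian-Regret}(T) = \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 0.8\,T = 0.4\,T,$$

which grows linearly in $T$. Any policy that never learns from its rewards suffers this fate in some environment; the point of the analysis below is that Thompson sampling's Bayesian regret grows only like $\sqrt{T}$.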

Thompson sampling is one such algorithm that aims to achieve minimum regret. We are now in a good place to understand what the algorithm is. The only unknown standing in the way of minimizing regret is the environment we are currently in. Knowing the environment amounts to knowing the reward distribution of each action, and would allow us to select the best action deterministically. Thompson sampling starts with an estimate of the environment distribution; this estimate is set to the prior distribution over environments $P$ (the same $P$ as in the definition of Bayesian regret). As the algorithm gathers more information about the environment it is truly in, it refines its estimate. Overall, the algorithm repeats four steps:

Algorithm: Thompson Sampling

  • Sample an environment from the posterior distribution $P_k$, where $P_0 = P$.
  • Play, in the true environment, the action that is optimal for the sampled environment. Since we have chosen the sampled environment ourselves, we know its optimal action $a_k$.
  • Observe the reward $r_k$ from the true environment.
  • Update the posterior distribution over environments based on the history of observed action-reward pairs: $P_{k+1} = P_k(\theta \mid a_k, r_k)$.
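
Below is a minimal sketch of these four steps for the simplest case: a Bernoulli bandit with an independent Beta prior on each arm's mean (a toy instance of my own choosing; the algorithm itself places no such restriction on the reward model).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: K Bernoulli arms with unknown means; independent Beta(1, 1) priors play the role of P.
true_means = np.array([0.3, 0.5, 0.7])
K, T = len(true_means), 5000

alpha = np.ones(K)   # Beta posterior parameters: observed successes + 1
beta = np.ones(K)    # Beta posterior parameters: observed failures + 1
cumulative_reward = 0.0

for t in range(T):
    theta_hat = rng.beta(alpha, beta)          # step 1: sample an environment from the posterior
    a = int(np.argmax(theta_hat))              # step 2: play the action that is optimal for the sample
    r = float(rng.random() < true_means[a])    # step 3: observe the reward from the true environment
    alpha[a] += r                              # step 4: update the posterior for the played arm
    beta[a] += 1.0 - r
    cumulative_reward += r

regret = T * true_means.max() - cumulative_reward
print(f"empirical regret after {T} rounds: {regret:.1f}")
```

With conjugate Beta-Bernoulli updates the posterior update in step 4 is a one-liner; in richer models this update is the expensive part and is often approximated.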

Regret bound for Thompson Sampling

In this blog, our focus is to analyze the behavior of the algorithm and derive a bound on its regret. This blog is based on the paper An Information-Theoretic Analysis of Thompson Sampling by Russo and Van Roy. We will use the following notation: $A_t$ is the action played by the algorithm at time $t$, $A^*$ is the optimal action that could have been played, $\theta$ are the environment parameters, $R(A_t,\theta)$ is the reward obtained from playing action $A_t$ in environment $\theta$, and $H_t := (A_1, R_1, A_2, R_2, \dots, A_t, R_t)$ is the history of actions and rewards observed up to time $t$.

We also need some information-theoretic quantities going forward in the analysis, which are best clarified at this point. $H(X)$ denotes the entropy of a discrete random variable $X$, and $I(X;Y)$ denotes the mutual information between $X$ and $Y$, which can be written as $I(X;Y) = H(X) - H(X \mid Y)$. Intuitively, mutual information measures how much knowing $Y$ reduces my entropy over $X$. If knowing $Y$ fully reveals $X$, then $H(X \mid Y) = 0$ and the mutual information attains its maximum value $H(X)$, since conditional entropy is always non-negative. Mutual information also satisfies the chain rule:

$$I\big(X;(Z_1, Z_2, \dots, Z_T)\big) = I(X;Z_1) + I(X;Z_2 \mid Z_1) + I(X;Z_3 \mid Z_1, Z_2) + \dots + I(X;Z_T \mid Z_1, \dots, Z_{T-1})$$
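
As a small numerical illustration (with a toy joint distribution of my own choosing), the identity $I(X;Y) = H(X) - H(X \mid Y)$ can be checked directly:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A small joint distribution P(X, Y); rows index X, columns index Y.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
p_x = joint.sum(axis=1)
p_y = joint.sum(axis=0)

# H(X | Y) = sum_y P(Y = y) * H(X | Y = y)
h_x_given_y = sum(p_y[j] * entropy(joint[:, j] / p_y[j]) for j in range(len(p_y)))

mutual_info = entropy(p_x) - h_x_given_y   # I(X; Y) = H(X) - H(X | Y)
print(mutual_info)                         # about 0.193 nats for this table
```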

First, let's get comfortable with the notion of Bayesian regret.

$$\text{Bayesian-Regret}(T) = \mathbb{E}_{\theta \sim P}\Big[\mathbb{E}_{H_T}\Big[\sum_{t=1}^{T}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big]\Big]$$

The outer expectation is over the possible environments. Once we have picked an environment, playing different actions yields rewards whose distribution is fully specified by that environment. The algorithm uses the history of observed action-reward pairs up to time $t$, denoted by $H_t$, to produce an action to be played, and its decision can be stochastic. The inner expectation is over the stochastic decisions of the algorithm and the rewards, given the observed history. We can rewrite the Bayesian regret as follows:

$$
\begin{aligned}
\text{Bayesian-Regret}(T) &= \mathbb{E}_{\theta \sim P,\, H_{T}}\Big[\sum_{t=1}^{T}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big] \\
&= \mathbb{E}_{H_{T}}\Big[\sum_{t=1}^{T}\mathbb{E}_{\theta \sim P \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big]
\end{aligned}
$$

The first expression above just rewrites our original definition of Bayesian regret using the joint probability distribution over environments and histories. The second expression takes the outer expectation over histories and the inner expectation over environments conditioned on the history. All of these forms are mathematically equivalent, although the original definition of Bayesian regret is the most intuitive.
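
The step between the two expressions is just the law of total expectation (the tower property): for any function $f$ of the environment and the history,

$$\mathbb{E}_{\theta,\, H}\big[f(\theta, H)\big] = \mathbb{E}_{H}\Big[\mathbb{E}_{\theta \mid H}\big[f(\theta, H)\big]\Big].$$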

Now we define a new quantity called the **information ratio**, which is of central importance in the exploration-exploitation literature. We denote the information ratio by $\Gamma_t$, and it is given by:

$$\Gamma_t = \frac{\mathbb{E}_{\theta \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big]^2}{I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)}$$

Let’s break down what this quantity means.

Numerator: $\mathbb{E}_{\theta \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big]^2$

  • This term is the squared gap between the best expected reward I could get and the expected reward I get when I play based only on the knowledge of the history $h_{t-1}$. Note that since the only given quantity is the history $h_{t-1}$, the true environment is uncertain, and consequently so is the best action to play.

Denominator: $I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)$

  • This term tells us how much the action taken at time $t$, together with its observed reward, reduces my entropy over $A^* \mid h_{t-1}$.

Now suppose this information ratio is bounded by a small constant. What does that mean?

  • Either the algorithm picks an action with a small numerator, i.e. the best action given the information (history) it has (exploit).
  • Or the algorithm picks an action with a large denominator, i.e. one that decreases the uncertainty over the optimal action $A^*$ (explore).

We are typically interested in algorithms whose information ratio is bounded by a small constant $\bar\Gamma$, i.e. $\Gamma_t \le \bar\Gamma$ for all $t$.
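
To make the definition concrete, here is a small numerical sketch (a toy construction of my own, not from the paper) that computes the one-step information ratio for a bandit with finitely many candidate environments and Bernoulli rewards. It assumes, as in the analysis, that the algorithm's action is drawn independently of the true environment given the history.

```python
import numpy as np

def discrete_mutual_information(joint):
    """Mutual information (in nats) of a discrete joint distribution given as a 2-D array."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px * py)[nz])))

def information_ratio(posterior, means, action_probs):
    """One-step information ratio for a toy bandit with finitely many candidate environments.

    posterior    : shape (M,)   current belief over M candidate environments (summarizes the history)
    means        : shape (M, K) Bernoulli reward mean of each of the K actions in each environment
    action_probs : shape (K,)   distribution from which the algorithm draws its action A_t
    """
    M, K = means.shape
    a_star = means.argmax(axis=1)                       # optimal action in each candidate environment

    # Numerator: squared one-step expected regret (A_t is independent of theta given the history).
    expected_opt = float(np.sum(posterior * means[np.arange(M), a_star]))
    expected_play = float(np.sum(action_probs * (posterior @ means)))
    numerator = (expected_opt - expected_play) ** 2

    # Denominator: I(A*; (A_t, R_t)) = sum_a P(A_t = a) * I(A*; R_t | A_t = a).
    denominator = 0.0
    for a in range(K):
        joint = np.zeros((K, 2))                        # joint law of (A*, R) when action a is played
        for a_opt in range(K):
            mask = a_star == a_opt
            joint[a_opt, 1] = np.sum(posterior[mask] * means[mask, a])    # P(A* = a_opt, R = 1)
            joint[a_opt, 0] = np.sum(posterior[mask]) - joint[a_opt, 1]   # P(A* = a_opt, R = 0)
        denominator += action_probs[a] * discrete_mutual_information(joint)

    return numerator / denominator

# Two candidate environments and two actions; the posterior still hesitates between them.
posterior = np.array([0.7, 0.3])
means = np.array([[0.9, 0.1],    # environment 0: action 0 is best
                  [0.2, 0.8]])   # environment 1: action 1 is best
p_a_star = np.array([0.7, 0.3])  # P(A* = a) under this posterior

print(information_ratio(posterior, means, p_a_star))              # Thompson sampling's action distribution
print(information_ratio(posterior, means, np.array([0.5, 0.5])))  # uniform random exploration, for contrast
```

Here the posterior over candidate environments plays the role of the history $h_{t-1}$, and passing $P(A^* = \cdot)$ as the action distribution corresponds to Thompson sampling, as discussed later.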

We can now proceed to derive a general regret bound that applies to every algorithm with a bounded information ratio. Later, we show that Thompson sampling has a bounded information ratio, so the bound derived here applies to it as well. Starting from our definition of Bayesian regret:

$$
\begin{aligned}
\text{Bayesian-Regret}(T) &= \mathbb{E}_{H_T}\Big[\sum_{t=1}^{T}\mathbb{E}_{\theta \sim P \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big]\Big] \\
&= \sum_{t=1}^{T}\mathbb{E}_{H_{t-1}}\Big[\sqrt{\Gamma_t\, I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)}\Big] && \text{(information ratio definition)} \\
&\le \sqrt{\bar\Gamma}\,\sum_{t=1}^{T}\mathbb{E}_{H_{t-1}}\Big[\sqrt{I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)}\Big] && \text{(bounded information ratio)}
\end{aligned}
$$

All we have done above is plug the definition of the information ratio into the definition of Bayesian regret and use the assumption that the information ratio is bounded. We can proceed further with some additional algebraic manipulations:

$$
\begin{aligned}
\text{Bayesian-Regret}(T) &\le \sqrt{\bar\Gamma}\,\sum_{t=1}^{T}\mathbb{E}_{H_{t-1}}\Big[\sqrt{I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)}\Big] && \text{(bounded information ratio)} \\
&\le \sqrt{\bar\Gamma\, T}\;\sqrt{\sum_{t=1}^{T}\mathbb{E}_{H_{t-1}}\Big[I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)\Big]} && \text{(Jensen's inequality, then Cauchy-Schwarz)}
\end{aligned}
$$
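
For completeness, Jensen's inequality moves each expectation inside the square root, $\mathbb{E}[\sqrt{X}] \le \sqrt{\mathbb{E}[X]}$, and the Cauchy-Schwarz step bounds a sum of square roots by the square root of a sum: for non-negative terms $x_t$,

$$\sum_{t=1}^{T}\sqrt{x_t} \;\le\; \sqrt{T\sum_{t=1}^{T} x_t}.$$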

Recall that $h_{t-1} = (a_1, r_1, a_2, r_2, \dots, a_{t-1}, r_{t-1})$ is a particular history that was observed. We use $H_{t-1} = (A_1, R_1, A_2, R_2, \dots, A_{t-1}, R_{t-1})$ to denote the random variable for the history, i.e. the possible histories that can be observed. Averaging over the random history turns each term in the sum above into a conditional mutual information:

$$\mathbb{E}_{H_{t-1}}\Big[I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)\Big] = I\big(A^*;(A_t, R(A_t,\theta)) \mid H_{t-1}\big)$$

Summing these terms till time T,

$$
\begin{aligned}
\sum_{t=1}^{T}\mathbb{E}_{H_{t-1}}\Big[I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big)\Big] &= \sum_{t=1}^{T} I\big(A^*;(A_t, R(A_t,\theta)) \mid H_{t-1}\big) \\
&= \sum_{t=1}^{T} I\big(A^*;(A_t, R(A_t,\theta)) \mid (A_1, R_1, A_2, R_2, \dots, A_{t-1}, R_{t-1})\big) \\
&= I\big(A^*;(A_1, R_1, A_2, R_2, \dots, A_T, R_T)\big) && \text{(chain rule of mutual information)} \\
&= I(A^*; H_T) \;\le\; H(A^*) && \text{(definition of mutual information)}
\end{aligned}
$$

Plugging this bound back into the inequality for Bayesian regret, we obtain the general regret bound for any algorithm whose information ratio is upper bounded:

$$\text{Bayesian-Regret}(T) \;\le\; \sqrt{\bar\Gamma\, H(A^*)\, T}$$
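
Since $A^*$ takes at most $|\mathcal{A}|$ values, $H(A^*) \le \log|\mathcal{A}|$, so the bound can always be weakened to

$$\text{Bayesian-Regret}(T) \;\le\; \sqrt{\bar\Gamma\, T \log|\mathcal{A}|},$$

and it is tighter whenever the prior already concentrates on a few candidate optimal actions.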

Thus the regret bound depends on the entropy of the optimal-action distribution and on the information ratio, while scaling sublinearly with $T$. The uncertainty in the random variable $A^*$ comes from the prior distribution over environments $P$. Bayesian regret can also be thought of as the total cost of the mistakes made up to time $T$. So the average mistake per round, as $T$ goes to infinity, is:

$$\lim_{T \to \infty}\frac{\text{Bayesian-Regret}(T)}{T} \;\le\; \lim_{T \to \infty}\frac{\sqrt{\bar\Gamma\, H(A^*)\, T}}{T} \;=\; 0$$

Any algorithm with a bounded information ratio therefore makes vanishingly few mistakes per round in the long run, by performing intelligent exploration and exploitation. Thompson sampling is precisely one such algorithm.

Thompson Sampling has a Bounded Information Ratio

We will discuss a proof that produces a loose bound on the information ratio in the Thompson sampling setting and that applies in a variety of settings. When there is a richer information structure, i.e. playing one action reveals information about other actions, even tighter bounds can be derived; we leave that exploration to interested readers via the original paper.

We can write the Thompson sampling algorithm concisely in one line: it selects each action with exactly the probability that this action is optimal given the history. Mathematically,

Thompson sampling rule: $P(A_t = a \mid h_{t-1}) = P(A^* = a \mid h_{t-1}) \quad \forall\, a, t$
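
To see why the sampling procedure from the algorithm above satisfies this rule: drawing $\hat\theta \sim P(\theta \mid h_{t-1})$ and playing the action that is optimal for $\hat\theta$ gives

$$P(A_t = a \mid h_{t-1}) = P_{\hat\theta \sim P(\cdot \mid h_{t-1})}\big(a \text{ is optimal for } \hat\theta\big) = P(A^* = a \mid h_{t-1}),$$

because $\hat\theta$ and the true $\theta$ have the same distribution given the history.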

As usual we will need some mathematical tools to proceed.

Useful fact 1: KL divergence form of Mutual information

$$I(X;Y) = \mathbb{E}_{X}\Big[D\big(P(Y \mid X) \,\big\|\, P(Y)\big)\Big]$$

Useful fact 2: Simplified expressions for the information ratio

$$
\begin{aligned}
\text{Denominator} &= I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big) \\
&= I(A^*; A_t \mid h_{t-1}) + I\big(A^*; R(A_t,\theta) \mid A_t, h_{t-1}\big) && \text{(chain rule)} \\
&= I\big(A^*; R(A_t,\theta) \mid A_t, h_{t-1}\big) && \text{($A_t$ and $A^*$ are independent given $h_{t-1}$)} \\
&= \sum_{a \in \mathcal{A}} P(A_t = a)\, I\big(A^*; R(a,\theta) \mid A_t = a, h_{t-1}\big) && \text{(definition of conditional mutual information)} \\
&= \sum_{a \in \mathcal{A}} P(A_t = a)\, I\big(A^*; R(a,\theta) \mid h_{t-1}\big) && \text{($A_t$ is independent of $(A^*, \theta)$ given $h_{t-1}$)} \\
&= \sum_{a \in \mathcal{A}} P(A_t = a) \sum_{a^* \in \mathcal{A}} P(A^* = a^*)\, D\Big(P\big(R(a,\theta) \mid A^* = a^*, h_{t-1}\big) \,\Big\|\, P\big(R(a,\theta) \mid h_{t-1}\big)\Big) && \text{(useful fact 1)} \\
&= \sum_{a \in \mathcal{A}} P(A^* = a) \sum_{a^* \in \mathcal{A}} P(A^* = a^*)\, D\Big(P\big(R(a,\theta) \mid A^* = a^*, h_{t-1}\big) \,\Big\|\, P\big(R(a,\theta) \mid h_{t-1}\big)\Big) && \text{(Thompson sampling rule)} \\
&= \sum_{a,\, a^* \in \mathcal{A}} P(A^* = a)\, P(A^* = a^*)\, D\Big(P\big(R(a,\theta) \mid A^* = a^*, h_{t-1}\big) \,\Big\|\, P\big(R(a,\theta) \mid h_{t-1}\big)\Big)
\end{aligned}
$$

The (square root of the) numerator simplifies similarly:

$$
\begin{aligned}
\mathbb{E}_{\theta \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big] &= \sum_{a \in \mathcal{A}} P(A^* = a)\, \mathbb{E}\big[R(a,\theta) \mid A^* = a, h_{t-1}\big] - \sum_{a \in \mathcal{A}} P(A_t = a)\, \mathbb{E}\big[R(a,\theta) \mid h_{t-1}\big] \\
&= \sum_{a \in \mathcal{A}} P(A^* = a)\, \mathbb{E}\big[R(a,\theta) \mid A^* = a, h_{t-1}\big] - \sum_{a \in \mathcal{A}} P(A^* = a)\, \mathbb{E}\big[R(a,\theta) \mid h_{t-1}\big] && \text{(Thompson sampling rule)} \\
&= \sum_{a \in \mathcal{A}} P(A^* = a)\, \Big(\mathbb{E}\big[R(a,\theta) \mid A^* = a, h_{t-1}\big] - \mathbb{E}\big[R(a,\theta) \mid h_{t-1}\big]\Big)
\end{aligned}
$$
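
One more tool is needed to connect these two expressions; the final step below uses a standard consequence of Pinsker's inequality (this is essentially the fact the original paper invokes, and it requires the rewards to be bounded, say in $[0,1]$): for any two distributions $P, Q$ and any random variable $X$ taking values in $[0,1]$,

$$\big(\mathbb{E}_{P}[X] - \mathbb{E}_{Q}[X]\big)^2 \;\le\; \tfrac{1}{2}\, D(P \,\|\, Q).$$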

Now, with these simplified expressions, we can bound the numerator of the information ratio by a multiple of its denominator:

$$
\begin{aligned}
\mathbb{E}_{\theta \mid h_{t-1}}\big[R(A^*,\theta) - R(A_t,\theta)\big]^2 &= \Big(\sum_{a \in \mathcal{A}} P(A^* = a)\,\big(\mathbb{E}[R(a,\theta) \mid A^* = a, h_{t-1}] - \mathbb{E}[R(a,\theta) \mid h_{t-1}]\big)\Big)^2 && \text{(simplified numerator expression)} \\
&\le |\mathcal{A}| \sum_{a \in \mathcal{A}} P(A^* = a)^2\,\big(\mathbb{E}[R(a,\theta) \mid A^* = a, h_{t-1}] - \mathbb{E}[R(a,\theta) \mid h_{t-1}]\big)^2 && \text{(Cauchy-Schwarz inequality)} \\
&\le |\mathcal{A}| \sum_{a,\, a^* \in \mathcal{A}} P(A^* = a)\, P(A^* = a^*)\,\big(\mathbb{E}[R(a,\theta) \mid A^* = a^*, h_{t-1}] - \mathbb{E}[R(a,\theta) \mid h_{t-1}]\big)^2 && \text{(adding the non-negative off-diagonal terms)} \\
&\le \frac{|\mathcal{A}|}{2} \sum_{a,\, a^* \in \mathcal{A}} P(A^* = a)\, P(A^* = a^*)\, D\Big(P\big(R(a,\theta) \mid A^* = a^*, h_{t-1}\big) \,\Big\|\, P\big(R(a,\theta) \mid h_{t-1}\big)\Big) && \text{(Pinsker's inequality, rewards bounded in $[0,1]$)} \\
&= \frac{|\mathcal{A}|}{2}\, I\big(A^*;(A_t, R(A_t,\theta)) \mid h_{t-1}\big) && \text{(simplified denominator expression, useful fact 2)}
\end{aligned}
$$

Thus we have shown that the information ratio is bounded by $|\mathcal{A}|/2$ when using Thompson sampling with a finite action set and rewards bounded in $[0,1]$. Plugging this information ratio into our general bound for Bayesian regret, the Bayesian regret of Thompson sampling is bounded by:

$$\text{Bayesian-Regret}(T) \;\le\; \sqrt{\frac{|\mathcal{A}|\, H(A^*)\, T}{2}}$$
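
To get a feel for the numbers (my own back-of-the-envelope arithmetic, not from the paper), here is the bound evaluated for a 10-armed problem with a uniform prior over the optimal arm and a horizon of 10,000 rounds:

```python
import numpy as np

num_actions = 10                     # |A|
T = 10_000                           # horizon
entropy_opt = np.log(num_actions)    # H(A*) <= log|A|, with equality for a uniform prior over the optimal arm

bound = np.sqrt(num_actions * entropy_opt * T / 2)
print(f"regret bound: {bound:.0f} over {T} rounds")   # about 339, a small fraction of the horizon
```

Even this loose, problem-independent bound guarantees that the average per-round regret shrinks as the horizon grows.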