Thursday, January 28, 2016

Lecture 5

I started today by describing a famous problem about searching for a randomly moving hidden object. This is an example of a partially observed Markov decision process. This problem has not yet been solved in full generality. However, the problem has been solved in continuous time. See R. R. Weber. Optimal search for a randomly moving object. J. Appl. Prob. 23:708-717, 1986. The reason the problem is easier in continuous time is that the state $x_1$ (the probability that the object is in location 1) changes continuously, rather than in jumps.
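To give a flavour of the belief-state dynamics, here is one way the update might look in discrete time (the notation $\alpha_1$ and $p_{ij}$ is mine, introduced just for this remark, and I assume a search is made and then the object moves). Suppose there are two locations, a search of location 1 finds the object with probability $\alpha_1$ if it is there, and the object then moves according to transition probabilities $p_{ij}$. After an unsuccessful search of location 1, Bayes' rule followed by the motion step gives
$$
x_1 \;\longmapsto\; \frac{x_1(1-\alpha_1)}{x_1(1-\alpha_1)+(1-x_1)}\,p_{11} \;+\; \frac{1-x_1}{x_1(1-\alpha_1)+(1-x_1)}\,p_{21}.
$$
In discrete time the state jumps in this way, whereas in continuous time $x_1$ moves along a smooth path, which is what makes the continuous-time problem more tractable.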

Let me emphasize that in Lecture 4 we used $F_s(x)$ to mean the value function of an MDP over $s$ steps, with $F_0(x)=0$. In the N case, where $c(x,u)\geq 0$, $F_s(x)$ is clearly nondecreasing in $s$. By contrast, in this lecture $F_s(x)$ was used to mean the minimum cost in a stopping problem in which we must stop within $s$ steps, and $F_0(x)=k(x)$. Now $F_s(x)$ is clearly nonincreasing in $s$, because as $s$ increases we are given more flexibility as to when we must stop. In both cases $F_s(x)$ satisfies an optimality equation and converges to a limit, say $F_\infty(x)$, but from opposite directions.
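To make the two directions of convergence concrete, here is a small numerical sketch (the chain, costs, stopping costs and discount factor are all invented for illustration; to keep it short there is only one action, so the minimisation over $u$ is trivial):

```python
import numpy as np

# An invented 3-state example, just to illustrate the two directions of convergence.
P = np.array([[0.5, 0.5, 0.0],    # transition matrix (one action only)
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
c = np.array([1.0, 2.0, 0.5])      # per-step cost, c(x) >= 0
k = np.array([4.0, 1.0, 3.0])      # stopping cost for the stopping problem
beta = 0.9                         # discount factor

# Lecture 4 style: F_0 = 0, F_s = c + beta * P F_{s-1}.  Nondecreasing in s.
F = np.zeros(3)
for s in range(1, 6):
    F = c + beta * P @ F
    print("MDP      s =", s, F.round(3))

# Lecture 5 style: F_0 = k, F_s = min{ k, c + beta * P F_{s-1} }.  Nonincreasing in s.
F = k.copy()
for s in range(1, 6):
    F = np.minimum(k, c + beta * P @ F)
    print("stopping s =", s, F.round(3))
```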

Having completed these lectures, you know all you need in order to do every question on Examples Sheet 1.

Questions 7 and 8 use the idea of a one-step look-ahead (OSLA) rule that we met in today's lecture. Question 8 is subtle, because although it is easy to guess the answer by thinking about an OSLA rule, how do you prove this answer is correct? Hint: use Theorem 4.2. When writing down the dynamic programming equation for Question 8, be careful to put the expected value (or the integral against the density of the exponential random variable) in the right place relative to your $\max\{\ ,\ \}$.
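The generic distinction here (said without reference to the particulars of Question 8, and with $r(x)$ denoting a stopping reward, notation of my own) is between deciding before and after the next value is observed. If you must commit to stopping before seeing the next value, the equation looks like
$$
F_s(x)=\max\Bigl\{r(x),\;E\bigl[F_{s-1}(x_1)\mid x_0=x\bigr]\Bigr\},
$$
whereas if you may look at the new value before deciding, the maximization moves inside the expectation,
$$
F_s(x)=E\Bigl[\max\bigl\{r(x_1),\,F_{s-1}(x_1)\bigr\}\;\Big|\;x_0=x\Bigr],
$$
as in the asset-selling problem of Lecture 3. Which form is appropriate depends on the information available at the moment the decision is taken.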

Tuesday, January 26, 2016

Lecture 4

In Section 4.3 (optimal gambling) we saw that timid play is optimal in the gambling problem when the game is favourable to the gambler ($p \geq 0.5$).

Similarly, a bold strategy is optimal in the case $p<0.5$. But this is harder to prove because it is not so easy to find an expression for the value function of the bold strategy. (This might remind you of the question on the IB Markov Chains examples sheet 1 that begins "A gambler has £2 and needs to increase it to £10 in a hurry. The gambler decides to use a bold strategy in which he stakes all his money if he has £5 or less, and otherwise stakes just enough to increase his capital, if he wins, to £10.")
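It is easy, though, to compute the bold strategy's value function numerically and compare it with timid play. Here is a sketch with invented values $p=0.4$, $N=10$:

```python
import numpy as np

# Probability of reaching N before 0, starting from i, when p < 1/2.
p, q, N = 0.4, 0.6, 10

# Bold play: stake min(i, N - i).  Find its value by iterating the expectation recursion.
V = np.zeros(N + 1)
V[N] = 1.0
for _ in range(2000):                       # simple fixed-point (Gauss-Seidel) iteration
    for i in range(1, N):
        stake = min(i, N - i)
        V[i] = p * V[i + stake] + q * V[i - stake]

# Timid play: stake 1 each time; classical gambler's-ruin formula.
r = q / p
timid = np.array([(r**i - 1) / (r**N - 1) for i in range(N + 1)])

for i in range(1, N):
    print(i, round(V[i], 4), round(timid[i], 4))   # bold is at least as good at every i
```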

If $p=0.5$, all strategies are optimal. How could we prove that? Easy: simply show that, given any policy $\pi$, the value function is $F(\pi,i)=i/N$ and that this satisfies the dynamic programming equation, and then apply Theorem 4.2. You can read more about these so-called red-and-black games at the Virtual Laboratories in Probability and Statistics.
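Here is the one-line verification of the second step. With $p=q=1/2$ and any allowed stake $u$,
$$
p\,\frac{i+u}{N}+q\,\frac{i-u}{N}=\frac{1}{2}\cdot\frac{i+u}{N}+\frac{1}{2}\cdot\frac{i-u}{N}=\frac{i}{N},
$$
so $F(i)=i/N$ (with $F(0)=0$, $F(N)=1$) satisfies the dynamic programming equation whatever stake is chosen, and Theorem 4.2 then gives that every policy is optimal.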

In Section 4.5 (pharmaceutical trials) I introduced an important class of very practical problems. One obvious generalization is to a problem with $k$ drugs, each of which has a different unknown probability of success, and about which we learn as we make trials of the drugs. This is called a multi-armed bandit problem. The name comes from thinking of a gaming machine (or fruit machine) with $k$ arms that you might pull, one at a time. The arms have different unknown probabilities of generating payouts. In today's lecture we considered a special case of the two-armed bandit problem in which one arm has a known success probability, $p$, and the other has an unknown success probability, $\theta$. I will say more about these problems in Lecture 7. The table on page 16 was computed by value iteration; it is from the book Multi-armed Bandit Allocation Indices.
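To give the flavour of such a computation, here is a value-iteration sketch for the two-armed problem with one arm known. The uniform prior, horizon and value of $p$ are invented for illustration, and this is not the computation behind the table on page 16; it is only meant to show how the state $(s,f)$ of observed successes and failures drives the recursion.

```python
from functools import lru_cache

# One arm pays a success with known probability p; the other is Bernoulli(theta)
# with a uniform prior on theta.  The state of the unknown arm is (s, f) =
# (successes, failures) observed so far, so its posterior mean is (s+1)/(s+f+2).
p = 0.6
horizon = 50

@lru_cache(maxsize=None)
def F(n, s, f):
    """Maximal expected number of successes with n pulls remaining."""
    if n == 0:
        return 0.0
    mu = (s + 1) / (s + f + 2)                    # posterior mean of theta
    known   = p + F(n - 1, s, f)                  # pull the known arm: no learning
    unknown = mu * (1 + F(n - 1, s + 1, f)) + (1 - mu) * F(n - 1, s, f + 1)
    return max(known, unknown)

# Is it worth experimenting with the unknown arm at the start?
mu0 = 0.5
print("pull known arm first  :", p + F(horizon - 1, 0, 0))
print("pull unknown arm first:", mu0 * (1 + F(horizon - 1, 1, 0)) + (1 - mu0) * F(horizon - 1, 0, 1))
```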

Thursday, January 21, 2016

Lecture 3

Theorem 3.1 is our first serious theorem. It had an easy but non-trivial proof. It is important because it allows us to know that $F(x)$ satisfies a DP equation (3.7). It holds in the D (discounted), N (negative) and P (positive) programming cases.

In the proof of Theorem 3.1 we used the fact that $\lim_{s\to\infty}E F_s(x_1)=E[\lim_{s\to\infty} F_s(x_1)]$ when $F_s$ is monotone (increasing or decreasing) in $s$, as it indeed is in the N and P cases. This is the Lebesgue monotone convergence theorem. In the D case the interchange of $E$ and $\lim_{s\to\infty}$ is valid because $F_s(x)$ is close to its limit for large $s$, uniformly in $x$.
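In a little more detail (assuming, as in the D case, that the cost per stage is bounded, say $|c(x,u)|\le B$): truncating at horizon $s$ changes the total discounted cost of any policy by at most the discounted tail, so
$$
|F_s(x)-F_\infty(x)|\;\le\;\sum_{t=s}^{\infty}\beta^t B\;=\;\frac{\beta^s B}{1-\beta}\quad\text{for all }x,
$$
a bound which is uniform in $x$ and tends to $0$ as $s\to\infty$.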

The problem of selling a tulip bulb collection in Section 3.5 is very much like the secretary problem in Section 2.3. The differences are that now (i) we observe values (not just relative ranks), (ii) we wish to maximize the expected value of the selected candidate (rather than the probability of choosing the best), and (iii) the number of offers is infinite, but with a discount factor $\beta$. We see that one way in which discounting by a factor $\beta$ can naturally arise is via a catastrophe that, with probability $1-\beta$ at each stage, brings the problem to an end.

How might the asset selling problem differ if past offers for the tulip bulb collection remain open (so long as the market has not collapsed)? The state $x$ is now the best offer so far received. The DP equation would be
$$
F(x) = \int_0^\infty\max\Bigl[x,y,\beta F(\max\{x,y\})\Bigr] g(y) dy.
$$
The validity of this equation follows from Theorem 3.1 and the fact that this is a positive (P) case of dynamic programming. In fact the solution is exactly the same as when offers did not remain open. Can you see why intuitively? Can you prove it from the above dynamic programming equation?
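One can at least check the claim numerically. Below is a sketch with an invented uniform offer distribution on a grid, writing both dynamic programming equations with the current offer already in hand so that the two value functions are directly comparable:

```python
import numpy as np

beta = 0.9
y = np.linspace(0.0, 1.0, 201)          # grid of possible offers
g = np.full(y.size, 1.0 / y.size)       # an invented uniform offer distribution

# Case 1: an offer not accepted at once is lost.
V = np.zeros(y.size)
for _ in range(1000):
    V = np.maximum(y, beta * (g @ V))            # V(x) = max{ x, beta E V(Y) }

# Case 2: past offers remain open; the state is the best offer so far.
idx = np.maximum.outer(np.arange(y.size), np.arange(y.size))  # index of max(x, Y)
W = np.zeros(y.size)
for _ in range(1000):
    W = np.maximum(y, beta * (W[idx] @ g))       # W(x) = max{ x, beta E W(max(x, Y)) }

print(np.max(np.abs(V - W)))            # essentially zero: the two problems agree
```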

Tuesday, January 19, 2016

Lecture 2

The highlight of today's lecture was the Secretary Problem. This is the most famous of all problems in the field of optimal stopping. It is credited to Merrill M. Flood in 1949, who called it the fiancée problem. It gained wider publicity when it appeared in Martin Gardner's column in Scientific American in 1960. There is an interesting Wikipedia article about it. One of the points made in that article is that in behavioural studies people tend to stop too soon (i.e. marry too soon, or make a purchase too soon). See The devil is in the details: Incorrect intuitions in optimal search.

The story about Kepler's search for a wife is taken from the paper, Who Solved the Secretary Problem, by Thomas S. Ferguson. He also discusses the related game of Googol.

A variation of the problem that has never been completely solved is the so-called Robbins' Problem. In this problem we do observe values of candidates, say $X_1,\dotsc, X_h$, and these are assumed to be independent, identically distributed uniform$[0,1]$ random variables. The objective is to minimize the expected rank of the candidate that is selected (best = rank 1, second-best = rank 2, etc). It is known only that, as $h$ goes to infinity, the expected rank that can be achieved under an optimal policy lies between 1.908 and 2.329. This problem is much more difficult than the usual secretary problem because the decision as to whether or not to hire candidate $t$ must depend upon all the values of $X_1,\dotsc, X_t$, not just upon how $X_t$ ranks amongst them.

Following this lecture you can do Questions 1–4 and 10 on Examples Sheet 1. Question 2 is quite like the secretary problem (and also has a surprising answer). The tricks explained in today's lecture are useful in solving these questions: working in terms of time to go, backwards induction, the fact that a bang-bang control arises when the objective is linear in $u_t$, and looking at the cross-over between increasing and decreasing terms within a $\max\{\ ,\ \}$, as we did in the secretary problem with $\max\{t/h, F(t)\}$.
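For those who would like to see the $\max\{t/h, F(t)\}$ recursion in action, here is a short backwards-induction sketch (with $h=100$ chosen only for illustration):

```python
h = 100                      # number of candidates
F = [0.0] * (h + 1)          # F[t] = optimal win probability, given first t seen and rejected
F[h] = 0.0
for t in range(h, 0, -1):
    # Candidate t is best-so-far with probability 1/t; then stop (win prob t/h) or continue.
    F[t - 1] = (1 / t) * max(t / h, F[t]) + ((t - 1) / t) * F[t]

# The optimal rule: stop at the first best-so-far candidate t with t/h >= F[t].
r = next(t for t in range(1, h + 1) if t / h >= F[t])
print("pass the first", r - 1, "candidates; success probability", round(F[0], 4))
# For large h this gives roughly h/e rejections and success probability about 1/e.
```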

Thursday, January 14, 2016

Lecture 1

Today we had definitions and notation for state, control, history, value function, etc, and developed dynamic programming equations for a very general case and for a state-structured case. Please be patient with the notation. It is not as complex as it may first appear. Things like $a(x,u,t)$, $F(x_t,t)$, $u_t$, and $U_t$ will begin to seem like old friends once you have used them a few times.

The terminology "plant equation" for $x_{t+1}=a(x_t,u_t,t)$ derives from the fact that early optimal control theory was developed with applications to industrial processes in mind, especially chemical plants. We also call it the dynamics equation.

From this first lecture you should be taking away the key idea of dynamic programming, and the fact that problems in stages (with separable cost and a finite time horizon) can often be solved by working backwards from the final stage. The minimum length path (stage coach) problem is trivial, but should have made these ideas very intuitive. You might like to read the Wikipedia entry for Richard Bellman, who is credited with the invention of dynamic programming in 1953.
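For concreteness, here is the backwards-induction calculation on a small made-up stagecoach network (the nodes, arcs and costs are all invented):

```python
# Nodes are grouped into stages; we find the minimum-cost path from 'A' to 'J'
# by working backwards from the final stage.
arcs = {
    'A': {'B': 2, 'C': 4, 'D': 3},
    'B': {'E': 7, 'F': 4}, 'C': {'E': 3, 'F': 2}, 'D': {'E': 4, 'F': 5},
    'E': {'G': 1, 'H': 4}, 'F': {'G': 6, 'H': 3},
    'G': {'J': 3}, 'H': {'J': 4},
}
stages = [['J'], ['G', 'H'], ['E', 'F'], ['B', 'C', 'D'], ['A']]

F = {'J': 0}                               # cost-to-go at the terminal node
for stage in stages[1:]:                   # work backwards, one stage at a time
    for x in stage:
        F[x] = min(c + F[y] for y, c in arcs[x].items())

print(F['A'])                              # minimum total cost from A to J
```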

The course page gives some hyperlinks to the recommended booklist. In particular, Dimitri Bertsekas has a web page for his book and slides from his lectures. You might find these interesting to browse through at some later stage.

I mentioned that I had once appeared on ITV's Who Wants to be a Millionaire (October 2003) and that playing it has aspects of dynamic programming. There is a nice Part II exam question on this, including the model solution. You might like to look at this now – simply to see the sort of mathematical problem that this course will enable you to solve. You can also view the overhead slides for a little presentation called A Mathematician Plays "Who Wants to Be a Millionaire?" which I once gave to a Royal Institution workshop for school children. You might like to see how I made best use of my "Ask the audience" lifeline by employing an idea from statistics. Basically, I asked members of the audience not to vote if they felt at all unsure of the right answer.

Examples Sheet 1. Today's lecture has provided all you need to know to do Question 1 on Examples Sheet 1. In doing this rather strange question you should grasp the idea that dynamic programming applies to problems in which cost is incurred in stages. In many problems the stages are time points ($t=0,1,\dotsc$), but in others the stages can be something different.

The remaining questions are on Markov decision problems, which we will be addressing in Lectures 2-6.

Monday, January 4, 2016

Course starts January 14, 2016

The 2016 course will start at 11am on Thursday January 14 in MR5. Blog postings for previous years of the course can be found below. However, new entries will be written this year, appropriate to the lectures as they proceed. Examples sheets are available from the link at the right.

Preliminary course notes are in place. My aim in these notes is to tread a Goldilocks path, being neither too brief nor too verbose. I try to make each lecture a sort of self-contained seminar, with about 4 pages of notes. I will slightly amend and change these notes as the course proceeds; in particular, I may do some things differently in the later lectures. Some students like to print the notes in advance of the lecture and then write things in the margins when they hear me talk about things that are not in the notes.

 I will use this space to talk about some extra things. Sometimes leaving a lecture I think, "I wish I had said ...". This blog gives me a place to say it. Or I can use this space to talk about a question that a student has asked. Or I might comment on an examples sheet question.