Thursday, January 14, 2016

Lecture 1

Today we set up definitions and notation for the state, control, history, value function, etc., and developed dynamic programming equations for a very general case and for a state-structured case. Please be patient with the notation. It is not as complex as it may first appear. Things like $a(x,u,t)$, $F(x_t,t)$, $u_t$, and $U_t$ will begin to seem like old friends once you have used them a few times.

The terminology "plant equation" for $x_{t+1}=a(x_t,u_t,t)$ derives from the fact that early optimal control theory was developed with applications to industrial processes in mind, especially chemical plants. We also call it the dynamics equation.
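In the state-structured case the dynamic programming equation takes a simple backward-recursive form. Writing the instantaneous cost as $c(x,u,t)$ and the terminal cost as $C_h(x)$ (symbols chosen here just for illustration; your lecture notes may use different letters), the recursion is

$$
F(x,t) = \min_{u \in U_t} \bigl\{ c(x,u,t) + F\bigl(a(x,u,t),\, t+1\bigr) \bigr\},
\qquad F(x,h) = C_h(x),
$$

where $h$ is the horizon. Solving this backwards from $t=h$ to $t=0$ yields the optimal value $F(x_0,0)$ and, along the way, an optimal control $u_t$ for every state and time.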

From this first lecture you should be taking away the key idea of dynamic programming, and the fact that problems in stages (with separable cost and a finite time horizon) can often be solved by working backwards from the final stage. The minimum length path (stage coach) problem is trivial, but should have made these ideas very intuitive. You might like to read the Wikipedia entry for Richard Bellman, who is credited with the invention of dynamic programming in 1953.
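The backward-working idea can be sketched in a few lines of code. Below is a minimal Python illustration of backward induction on a small stage coach network; the node names and arc costs are made up purely for illustration, not taken from the lecture.

```python
# Backward induction on a toy "stage coach" shortest-path problem.
# Nodes are grouped in stages: A -> {B, C} -> {D, E} -> T (terminal).
# arcs[x] maps each node to its successors and the arc costs.
arcs = {
    'A': {'B': 2, 'C': 4},
    'B': {'D': 7, 'E': 4},
    'C': {'D': 3, 'E': 2},
    'D': {'T': 1},
    'E': {'T': 5},
}

F = {'T': 0}   # value function: minimum cost-to-go from each node
policy = {}    # optimal successor from each node

# Work backwards through the stages, from the last decision to the first.
for x in ['D', 'E', 'B', 'C', 'A']:
    u = min(arcs[x], key=lambda y: arcs[x][y] + F[y])
    F[x] = arcs[x][u] + F[u]
    policy[x] = u

print(F['A'], policy['A'])  # minimum path cost from A, and first move
```

Each node is visited once, so the work grows linearly with the number of arcs, whereas enumerating all paths grows exponentially with the number of stages; that economy is the point of dynamic programming.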

The course page gives some hyperlinks to the recommended booklist. In particular, Dimitri Bertsekas has a web page for his book and slides from his lectures. You might find these interesting to browse through at some later stage.

I mentioned that I had once appeared on ITV's Who Wants to be a Millionaire (October 2003) and that playing it has aspects of dynamic programming. There is a nice Part II exam question on this, including the model solution. You might like to look at this now – simply to see the sort of mathematical problem that this course will enable you to solve. You can also view the overhead slides for a little presentation called A Mathematician Plays "Who Wants to Be a Millionaire?" which I once gave to a Royal Institution workshop for school children. You might like to see how I made best use of my "Ask the audience" lifeline by employing an idea from statistics. Basically, I asked members of the audience not to vote if they felt at all unsure of the right answer.

Examples Sheet 1. Today's lecture has provided all you need to know to do question #1 on Examples Sheet 1. In doing this rather strange question you should grasp the idea that dynamic programming applies to problems in which cost is incurred in stages. In many problems the stages are time points ($t=0,1,\dotsc$), but in others the stages can be different.

The remaining questions are on Markov decision problems, which we will be addressing in Lectures 2-6.