Thursday, February 25, 2016

Lecture 13

The slides for the short talk on stochastic scheduling today are here

The method of uniformization that I described in this lecture seems to have been introduced by Jensen (1953), but it was first used extensively in solving queueing problems in the late 1970s (by Grassmann (1977) and Keilson (1979)). The Wikipedia article has more about it and summarises the method in these words:
The method involves the construction of an analogous discrete time Markov chain, where transitions occur according to an exponential distribution with the same parameter in every state.
I have used this idea many dozens of times in my own research. It is usually the best first step to make when tackling a continuous-time Markov decision problem.
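For anyone who wants to see the idea in concrete form, here is a minimal sketch in Python (my own illustration, not part of the lecture). A continuous-time chain with generator matrix Q is uniformized by choosing a rate B at least as large as every exit rate and forming the transition matrix P = I + Q/B of the analogous discrete-time chain; the transient distribution at time t is then a Poisson-weighted sum of powers of P.

import numpy as np

# Generator matrix Q of a small continuous-time Markov chain
# (rows sum to zero; off-diagonal entries are transition rates).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  2.0, -4.0]])

# Uniformization rate: any B >= max_i |q_ii| will do.
B = np.max(np.abs(np.diag(Q)))

# Transition matrix of the analogous discrete-time chain, in which
# jumps occur at the points of a Poisson process of rate B.
P = np.eye(Q.shape[0]) + Q / B

# Transient distribution at time t, truncating the Poisson-weighted sum.
def transient_distribution(p0, t, n_terms=100):
    dist = np.zeros_like(p0, dtype=float)
    pk = p0.astype(float)               # p0 P^k, starting with k = 0
    poisson_weight = np.exp(-B * t)     # e^{-Bt} (Bt)^k / k!
    for k in range(n_terms):
        dist += poisson_weight * pk
        pk = pk @ P
        poisson_weight *= B * t / (k + 1)
    return dist

print(transient_distribution(np.array([1.0, 0.0, 0.0]), t=0.5))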

Kalman filter in economics

The Kalman filter was an important tool in space exploration, and is often mentioned in connection with the Apollo XI guidance system. Let me make some remarks about where Kalman filtering ideas are used, in areas adjacent to operations research, such as economics. 

I once asked Lord Eatwell, President of Queens', and a famous economist, "Is the Kalman filter much used in economics?" He immediately replied, "Yes, all the time".

Eatwell is one of the compilers of the Palgrave Dictionary of Economics. It is a good place to go if ever you need to find a short article on any economics topic. I searched in Palgrave for the Kalman filter, and read:

The Kalman filter

The Kalman filter deals with state-space representations where the transition and measurement equations are linear and where the shocks to the system are Gaussian. The procedure was developed by Kalman (1960) to transform (‘filter’) some original observables $y_t$ into Wold innovations $a_t$ and estimates of the state $x_t$. With the innovations, we can build the likelihood function of the dynamic model. With the estimates of the states, we can forecast and smooth the stochastic process.

The use of unobserved components opens up a new range of possibilities for economic modelling. Furthermore, it provides insights and a unified approach to many other problems. The examples below give a flavour.

The local linear trend model generalizes (1) by the introduction of a stochastic slope, $\beta_t$, which itself follows a random walk. Thus ...
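To spell out the model being referred to: in its standard textbook form (stated here from the general literature, not quoted from Palgrave), the local linear trend model is

$y_t = \mu_t + \epsilon_t, \qquad \mu_{t+1} = \mu_t + \beta_t + \xi_t, \qquad \beta_{t+1} = \beta_t + \zeta_t,$

where $\mu_t$ is an unobserved level, $\beta_t$ is the stochastic slope, and $\epsilon_t$, $\xi_t$, $\zeta_t$ are independent Gaussian disturbances (the symbols here are those of the time-series literature, not our LQG notation). The Kalman filter estimates $(\mu_t,\beta_t)$ from the observations $y_1,\dots,y_t$.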

Econometricians are often interested in building linear models in which some variables are explained by other variables (in some sort of regression model). 

As the values of some variables become known over time, one wants to update estimates of others. The machinery for doing this is provided by the Kalman filter. Notice that the Kalman filter does not have anything to do with the Q (quadratic cost) assumption of our LQG model. It is only the Linear and Gaussian parts that are relevant.
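As a toy illustration of this updating (my own sketch, not taken from Palgrave or the course notes), suppose a regression coefficient drifts as a random walk and data arrive one observation at a time. A scalar Kalman filter then updates the coefficient estimate recursively:

import numpy as np

rng = np.random.default_rng(0)

# True regression coefficient drifts as a random walk.
T = 200
sigma_w, sigma_v = 0.05, 0.5           # state and observation noise std devs
beta_true = np.cumsum(sigma_w * rng.standard_normal(T)) + 1.0
x = rng.standard_normal(T)             # regressors
y = x * beta_true + sigma_v * rng.standard_normal(T)

# Scalar Kalman filter for the state-space model
#   beta_t = beta_{t-1} + w_t,   y_t = x_t * beta_t + v_t.
beta_hat, V = 0.0, 1.0                 # prior mean and variance
estimates = []
for t in range(T):
    # Predict: the random walk leaves the mean unchanged, inflates the variance.
    V_pred = V + sigma_w**2
    # Update: incorporate the new observation y_t via the Kalman gain.
    K = V_pred * x[t] / (x[t]**2 * V_pred + sigma_v**2)
    beta_hat = beta_hat + K * (y[t] - x[t] * beta_hat)
    V = (1 - K * x[t]) * V_pred
    estimates.append(beta_hat)

print("final estimate:", estimates[-1], "true value:", beta_true[-1])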

You might like to try a Google search for "Kalman Filter and finance". Many things turn up. For example, here is a talk, Kalman Filtering in Mathematical Finance.

Tuesday, February 23, 2016

Lecture 12

Make sure you have the most recent version of the notes. I have redesigned Chapter 13. Previously, the notes covered only the Gaussian noise case. However, I think it is easier to understand how certainty equivalence arises if I do it without the Gaussian assumptions, as I have now done in Theorem 13.1.

When Gaussian assumptions are made, $\hat x_t=E[x_t \mid W_t]$ is particularly simple. It is a linear function of $\hat x_0$ and $W_t$ and can be computed recursively using the Kalman filter.
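Schematically, and paraphrasing rather than quoting (13.12), the recursion has the shape

$\hat x_t = A\hat x_{t-1} + Bu_{t-1} + H_t(y_t - C\hat x_{t-1}),$

that is, the new estimate is the prediction made from $\hat x_{t-1}$ and the known control $u_{t-1}$, corrected by a gain matrix $H_t$ multiplying the innovation, the part of the observation $y_t$ that the prediction does not explain. The scalar exercise mentioned below (Example Sheet 3 Question 2) is exactly this with $A=B=C=1$ and $H_t=h_t$.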

The name "Kalman filter" refers to the estimation equation (13.12) and takes its name from Rudolf Kalman (1930 –), who developed it in the years 1958-64. He also coined the terms controllable and observable, and gave the criteria that we have seen in previous lectures. The fact that a system is controllable iff the matrix $[B\ AB\ \cdots\ A^{n-1}B]$ is of full rank is sometimes called Kalman's criteria. In the IEEE biography of Kalman it is stated
The Kalman filter, and its later extensions to nonlinear problems, represents perhaps the most widely applied by-product of modern control theory. It has been used in space vehicle navigation and control (e.g. the Apollo vehicle), radar tracking algorithms for ABM applications, process control, and socioeconomic systems.
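As an aside on Kalman's criterion mentioned above, controllability is easy to check numerically. Here is a minimal sketch (my own, using a made-up example system) that builds $[B\ AB\ \cdots\ A^{n-1}B]$ and tests its rank.

import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, ..., A^(n-1)B] column-wise."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Example: a discrete-time double integrator, which is controllable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])

M = controllability_matrix(A, B)
print(M)
print("controllable:", np.linalg.matrix_rank(M) == A.shape[0])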
The theory in this lecture is admittedly quite tricky, partly because of the notation. As a test of memory, can you say what roles in the theory are taken by each of these?

 $x_t$,  $u_t$, $A$, $B$, $\epsilon_t$, $y_t$, $C$, $\eta_t$, $\hat x_t$, $\Delta_t$, $\xi_t$, $\zeta_t$, $R$, $S$, $Q$, $K_t$, $\Pi_t$, $N$, $L$, $M$, $H_t$,  $V_t$. 

You will understand the ideas better once you have worked through the details of a scalar example (in which $n=m=p=1$). You do this in Example Sheet 3 Question 2. When you do this question, start by supposing that $\hat x_t=\hat x_{t-1}+u_{t-1}+h_t(y_t-\hat x_{t-1})$, and then find the value of $h_t$ that minimizes the variance of $\hat x_t$. Subtracting $x_t=x_{t-1}+u_{t-1}+3\epsilon_t$ and using $y_t=x_{t-1}+2\eta_t$, you get

$\hat{x}_{t}-{x}_{t}=\Delta_{t}=\Delta_{t-1}-3\epsilon_t-h_t\Delta_{t-1}+2h_t\eta_t.$

Then square, take the expected value, and minimize the variance of $\Delta_t$ with respect to $h_t$, to find a formula for $V_t$ in terms of $V_{t-1}$.
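If you want to check you are on track: assuming (as seems intended) that $\epsilon_t$ and $\eta_t$ have unit variance and are independent of $\Delta_{t-1}$, squaring and taking expectations gives

$V_t = (1-h_t)^2V_{t-1} + 9 + 4h_t^2,$

which you then minimize over $h_t$.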

You will not be asked to reproduce the proof of Theorem 13.1 or 13.3 in examinations. You should simply know that $\hat{x}_t$ is computed from $\hat{x}_{t-1}$ and $y_t$ in the linear manner specified by (13.12), and that the covariance matrix $V_t$ satisfies a Riccati equation. You are not expected to memorize Riccati equations.

Notice that the Riccati equation for $V_t$, i.e. $V_t = g(V_{t-1})$, runs in the opposite time direction to the one we had for $\Pi_t$ in Lecture 10, where $\Pi_{t-1} = f(\Pi_t)$. We are given $V_0$ and $\Pi_h$.