Survival Theory Course

Martingales – definitions, properties, and applications

Michael Sachs

Outline

  1. Discrete time
  2. Continuous time intuition
  3. Limit theorems
  4. Applications

Martingales in discrete time

A process \(M = \{M_0, M_1, \ldots, M_n\}\) is a martingale if \(E|M_n| < \infty\) for all \(n\) and \[ E(M_n | \mathcal{F}_{n-1}) = E(M_n | M_0, M_1, \ldots, M_{n-1}) = M_{n-1}, \mbox{ for } n = 1, 2, \ldots, \] where \(\mathcal{F}_{n-1}\) is the history up to \(n-1\).

Example: Let \(X_1, X_2, \ldots\) be independent with \(P(X_i = 1) = P(X_i = -1) = 0.5\). Then \(M_n = X_1 + X_2 + \cdots + X_n\), the winnings after \(n\) games, is a mean 0 martingale.
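A quick numerical illustration (a Python sketch, not part of the notes; the seed, number of paths, and number of games are arbitrary choices): simulate many plays of the game and check empirically that \(E(M_n) = 0\) and that \(E(M_n \mid M_{n-1} = m) \approx m\).

    import numpy as np

    rng = np.random.default_rng(1)
    n_paths, n_steps = 100_000, 20

    # each X_i is +1 or -1 with probability 1/2; M_n is the running sum of winnings
    x = rng.choice([-1, 1], size=(n_paths, n_steps))
    m = np.cumsum(x, axis=1)

    print(m[:, -1].mean())    # close to 0, consistent with E(M_n) = 0

    # empirical check of E(M_n | M_{n-1}) = M_{n-1}: among paths with M_{n-1} = 3,
    # the average value of M_n should also be close to 3
    sel = m[:, -2] == 3
    print(m[sel, -1].mean())  # close to 3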

More examples

  1. Let \(X_1, X_2, \ldots\) be independent with mean 0. Is \(M_n = X_1 + \cdots + X_n\) a martingale?
  2. Let \(X_1, X_2, \ldots\) be independent with mean \(\mu\). Is \(S_n = X_1 + \cdots + X_n\) a martingale?
  3. Let \(X_1, X_2, \ldots\) be independent with mean 1. Is \(M_n = X_1 \cdot X_2 \cdots X_n\) a martingale?

Properties

  1. \(E(M_n | \mathcal{F}_m) = M_m\) for \(m < n\). How would you prove this?

Define the increments or martingale differences as \(\Delta M_n = M_n - M_{n-1}\).

  1. \(M_n\) is a martingale if and only if \(E(\Delta M_n | \mathcal{F}_{n-1}) = 0\)
  2. If \(M_n\) is a martingale then \(Cov(\Delta M_n, \Delta M_{n-1}) = 0\) (a sketch follows below)
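A sketch of why property 2 holds (assuming the increments are square integrable so the covariance exists): since \(E(\Delta M_n) = E\{E(\Delta M_n | \mathcal{F}_{n-1})\} = 0\), \[ Cov(\Delta M_n, \Delta M_{n-1}) = E(\Delta M_n \Delta M_{n-1}) = E\{\Delta M_{n-1}\, E(\Delta M_n | \mathcal{F}_{n-1})\} = 0, \] where the middle step uses iterated expectations and the fact that \(\Delta M_{n-1} \in \mathcal{F}_{n-1}\).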

Transformations

Let \(X_0, X_1, X_2, \ldots\) be independent with history \(\mathcal{F}_n\).

Let \(H_n\) be a process such that \(H_n \in \mathcal{F}_{n-1}\). \(H_n\) is called a predictable process. Then the process \[ Z_n = H_0 X_0 + H_1 (X_1 - X_0) + \cdots + H_n (X_n - X_{n-1}) \] is called the transformation of \(X\) by \(H\) and we will write \(Z = H \bullet X\). We can also write \[ Z_n = (H \bullet X)_n = H_0 X_0 + \sum_{s = 1}^n H_s \Delta X_s. \]
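A concrete illustration (a Python sketch; the particular betting strategy is an arbitrary choice, not from the notes): in the coin-flipping game, let \(H_n\) be a bet size that depends only on the previous flip. Such a strategy is predictable, and simulation suggests the transformed winnings stay mean zero.

    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps = 100_000, 20

    x = rng.choice([-1, 1], size=(n_paths, n_steps))  # fair game increments, X_0 = 0

    # predictable bet sizes: H_1 = 1, then H_n = 2 after a loss and 1 after a win
    h = np.ones_like(x, dtype=float)
    h[:, 1:] = np.where(x[:, :-1] == -1, 2.0, 1.0)

    # with X_0 = 0, Z_n = (H . X)_n = sum_{s=1}^n H_s * Delta X_s
    z = np.cumsum(h * x, axis=1)
    print(z[:, -1].mean())   # close to 0: the transform is still mean zero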

Important result

If \(X\) is a mean zero martingale and \(H\) is a bounded predictable process, then \(Z = H \bullet X\) is also a mean zero martingale. How would you prove this? Interpret this result in the context of our coin-flipping example.

Derivation of transformation result

Hint: prove it directly by considering \(E(Z_n - Z_{n-1} | \mathcal{F}_{n-1})\)
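A sketch along these lines, assuming \(X\) is a martingale and each \(H_n\) is bounded and \(\mathcal{F}_{n-1}\)-measurable: \[ E(Z_n - Z_{n-1} | \mathcal{F}_{n-1}) = E(H_n \Delta X_n | \mathcal{F}_{n-1}) = H_n\, E(\Delta X_n | \mathcal{F}_{n-1}) = 0, \] because \(H_n\) can be pulled outside the conditional expectation given \(\mathcal{F}_{n-1}\). In the coin-flipping example this says that no betting strategy based only on past outcomes can turn the fair game into one with a systematic profit.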

The Doob decomposition

Let \(X = \{X_0, X_1, \ldots, X_n\}\) be a process with history \(\mathcal{F}\) and with \(X_0 = 0\). Define \(M = \{M_0, M_1, \ldots, M_n\}\) by \(M_0 = 0\) and \[ \Delta M_n = X_n - E(X_n | \mathcal{F}_{n-1}). \] The Doob decomposition is \[ X_n = E(X_n | \mathcal{F}_{n-1}) + \Delta M_n \] where the first term \(\{E(X_n | \mathcal{F}_{n-1})\}\) is a predictable process and the second term \(\Delta M_n\) is the increment of a martingale.

Interpretation: \(X_n\) is decomposed into a predictable part and an innovation (mean zero, uncorrelated increments).
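A small worked case (not in the notes): for \(S_n = X_1 + \cdots + X_n\) with the \(X_i\) independent with mean \(\mu\), \[ E(S_n | \mathcal{F}_{n-1}) = S_{n-1} + \mu \quad \mbox{and} \quad \Delta M_n = S_n - E(S_n | \mathcal{F}_{n-1}) = X_n - \mu, \] so each step is split into the predictable one-step forecast \(S_{n-1} + \mu\) plus a mean zero innovation \(X_n - \mu\).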

Continuous time

If \(M\) is a martingale then \(E(M(t) | \mathcal{F}_{s}) = M(s)\) for \(s < t\), and increments over non-overlapping intervals are uncorrelated: \(Cov(M(t) - M(s), M(v) - M(u)) = 0\) for \(0 \leq s < t \leq u < v \leq \tau\). Equivalently, \(E(dM(t)|\mathcal{F}_{t-}) = 0\).

\(M\) is a sub-martingale if \(E(M(t) | \mathcal{F}_{s}) \geq M(s)\) for \(s < t\). All counting processes are sub-martingales (why?).

Transformations in continuous time

Recall that in discrete time \((H \bullet M)_n = \sum_{i = 1}^n H_i \Delta M_i\) for a predictable process \(H\).

What do we need to define something like this in continuous time?

Stochastic integral

As with \(H\bullet M\) before, if \(M\) is a continuous time mean zero martingale and \(H\) is predictable, then \(I = \int H \, dM\) is a mean zero martingale.

The Doob-Meyer decomposition

If \(X\) is a right-continuous, non-negative sub-martingale with respect to a history \(\{\mathcal{F}_t\}\), then there is a unique decomposition \[ X(t) = X^*(t) + M(t) \] where \(X^*\) is an increasing predictable process (called the compensator) and \(M\) is a mean zero martingale.

Poisson process

Let \(N(t)\) be the number of events in \([0,t]\) where \(N(t) - N(s) \sim \mbox{Poisson}((t - s)\lambda)\) for \(s < t\) and \(N\) has independent increments.

Does the Doob-Meyer decomposition apply to \(N(t)\)? If so, identify the compensator of \(N(t)\).

Solution

Problem statement: Find the unique predictable process \(X(t)\) such that \(N(t) - X(t)\) is a martingale.

Steps (WAG method): guess that the compensator is the mean function, \(X(t) = E\{N(t)\} = \lambda t\), and verify the martingale property directly: for \(s < t\), \[ E(N(t) - \lambda t | \mathcal{F}_s) = N(s) + E\{N(t) - N(s) | \mathcal{F}_s\} - \lambda t = N(s) + \lambda(t - s) - \lambda t = N(s) - \lambda s, \] using the independent, Poisson-distributed increments. Since \(\lambda t\) is deterministic (hence predictable) and nondecreasing, it is the compensator.

Steps (start at the end method): write down what is required, \(E\{N(t) - X(t) | \mathcal{F}_s\} = N(s) - X(s)\), and solve for \(X\). For a deterministic \(X\) this reduces to \(X(t) - X(s) = E\{N(t) - N(s) | \mathcal{F}_s\} = \lambda(t - s)\), so \(X(t) = \lambda t\).
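A simulation sketch in Python (rate, horizon, grid, and seed are arbitrary choices) consistent with this solution: along simulated Poisson paths, \(N(t) - \lambda t\) averages to approximately zero at every time point.

    import numpy as np

    rng = np.random.default_rng(3)
    lam, t_max, n_paths = 2.0, 5.0, 20_000

    grid = np.linspace(0.0, t_max, 11)
    m_vals = np.empty((n_paths, grid.size))
    for i in range(n_paths):
        # one Poisson path via exponential inter-arrival times (generous buffer)
        arrivals = np.cumsum(rng.exponential(1.0 / lam, size=int(4 * lam * t_max)))
        n_t = np.searchsorted(arrivals, grid, side="right")  # N(t) at each grid point
        m_vals[i] = n_t - lam * grid                          # M(t) = N(t) - lam * t

    print(np.round(m_vals.mean(axis=0), 2))  # all entries close to 0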

More on predictability

Variation processes

If \(M\) is a martingale and \(E(M^2(t)) < \infty\) for all \(t \geq 0\), then by Jensen’s inequality: \[ E(M^2(t) | \mathcal{F}_{s}) \geq (E(M(t) |\mathcal{F}_{s}))^2 = M^2(s), \] which means …?

Hence, we can apply the Doob-Meyer decomposition and there exists a unique predictable process \(\langle M\rangle\) such that \[ M^2(t) - \langle M\rangle(t) \] is a martingale. \(\langle M\rangle\) is called the predictable (quadratic) variation process.

Properties

\[ \langle M\rangle(t) = \lim_{n\rightarrow \infty} \sum_{i = 1}^n Var(\Delta M_i | \mathcal{F}_{(i-1)t/n}) \] where \([0, t]\) is partitioned into \(n\) parts of length \(t/n\) and \[ \Delta M_i = M(it / n) - M((i-1)t / n). \] Loosely speaking \(d\langle M\rangle(t) = Var(dM(t) | \mathcal{F}_{t-})\).

Similarly, the predictable covariation process of two martingales \(M_1, M_2\) is \[ d\langle M_1, M_2 \rangle(t) = Cov(dM_1(t), dM_2(t) | \mathcal{F}_{t-}). \] Like the regular covariance, we have \[ \langle M_1 + M_2 \rangle = \langle M_1\rangle + \langle M_2 \rangle + 2\langle M_1, M_2\rangle. \]

Optional variation

The optional variation process is defined \[ [M](t) = \lim_{n\rightarrow \infty} \sum_{i = 1}^n (\Delta M_i)^2 \] where \([0, t]\) is partitioned into \(n\) parts of length \(t/n\) and \[ \Delta M_i = M(it / n) - M((i-1)t / n). \]

We also have that \(M^2(t) - [M](t)\) is a mean zero martingale.

Why is \([M]\) not the compensator of \(M^2\)?

Finally, since \(M\) is mean 0, \[ Var(M(t)) = E(M^2(t)) = E([M](t)) = E(\langle M\rangle(t)). \]

Poisson process example

In the Poisson process example, we showed earlier that \(M(t) = N(t) - \lambda t\) is a martingale. Show that \(\langle M \rangle(t) = \lambda t\).

What is \([M]\)? Hint: \([M](t) = \sum_{s \leq t} (M(s) - M(s-))^2\) for processes with finite variation.
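A simulation sketch in Python (\(\lambda = 2\) and \(t = 5\) are arbitrary choices) of these identities: the sample variance of \(M(t)\) across paths should be close to \(\lambda t\), which is also the average of \([M](t) = N(t)\).

    import numpy as np

    rng = np.random.default_rng(4)
    lam, t, n_paths = 2.0, 5.0, 100_000

    n_t = rng.poisson(lam * t, size=n_paths)  # N(t) ~ Poisson(lam * t)
    m_t = n_t - lam * t                       # M(t) = N(t) - lam * t

    print(m_t.var())    # close to lam * t = 10, consistent with Var(M(t)) = E(<M>(t))
    print(n_t.mean())   # close to lam * t = 10, consistent with E([M](t)) = E(N(t))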

Transformations

For a predictable process \(H\) and a mean 0 martingale \(M\), we know that \(I(t) = \int_0^t H(s) \, dM(s)\) is a mean 0 martingale. It is also true that \[ \langle \int H\, dM \rangle = \int H^2 \, d\langle M \rangle \] and \[ [\int H\, dM] = \int H^2 \, d[M]. \]

Putting it all together

Suppose \(N(t)\) is a general counting process, i.e., it is right-continuous, nondecreasing, with jumps of size 1, and is adapted to \(\mathcal{F}_t\).

The Doob-Meyer decomposition says that \[ N(t) = \Lambda(t) + M(t) \] where \(\Lambda(t)\) is the compensator (unique, predictable nondecreasing process) and \(M(t)\) is a mean 0 martingale.

Assuming that it exists, we can write \(\Lambda(t) = \int_0^t\lambda(s)\, ds\). \(\lambda(t)\) is called the intensity process and \(\Lambda(t)\) the cumulative intensity process. The classical counting process martingale is written \[ M(t) = N(t) - \int_0^t\lambda(s)\, ds. \] Since \(E(dM(t) | \mathcal{F}_{t-}) = 0\), we have \(\lambda(t) dt = E(dN(t) | \mathcal{F}_{t-}) = P(dN(t) = 1| \mathcal{F}_{t-})\), since the jumps are of size 1. Hence, \(Var(dM(t)|\mathcal{F}_{t-}) = Var(dN(t)|\mathcal{F}_{t-}) = \lambda(t)dt\,\{1-\lambda(t)dt\} \approx \lambda(t) dt\), and we have \[ \langle M\rangle(t) = \int_0^t\lambda(s) \, ds = \Lambda(t). \] Finally, \([M](t) = N(t)\), the process itself (since again the jumps are of size 1).

From these, we get \[ \langle \int H \, dM \rangle(t) = \int_0^t H^2(s)\lambda(s) \, ds \] and \[ [\int H \, dM ](t) = \int_0^t H^2(s) \, dN(s) = \sum_{T_j \leq t} H^2(T_j), \] where \(T_1 < T_2 < \cdots\) are the unique jump times of \(N(t)\).
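A simulation sketch in Python for the Poisson case (the deterministic, hence predictable, weight \(H(s) = s\) and the constants are arbitrary choices): \(\int_0^t H\, dM = \sum_{T_j \leq t} H(T_j) - \lambda\int_0^t H(s)\, ds\) should have mean approximately zero and variance approximately \(\int_0^t H^2(s)\lambda\, ds\).

    import numpy as np

    rng = np.random.default_rng(5)
    lam, t_max, n_paths = 2.0, 3.0, 50_000

    def h(s):
        return s  # H(s) = s, a deterministic weight

    integrals = np.empty(n_paths)
    for i in range(n_paths):
        arrivals = np.cumsum(rng.exponential(1.0 / lam, size=int(6 * lam * t_max)))
        jumps = arrivals[arrivals <= t_max]  # jump times T_j <= t_max
        # int_0^t H dM = sum_{T_j <= t} H(T_j) - lam * int_0^t s ds
        integrals[i] = h(jumps).sum() - lam * t_max**2 / 2

    print(integrals.mean())  # close to 0
    print(integrals.var())   # close to int_0^t s^2 lam ds = lam * t^3 / 3 = 18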

Martingale central limit theorem

Suppose \(\tilde{M}^{(n)}(t)\) is a sequence of mean-0 martingales for \(t \in [0, \tau]\). Define \[ \tilde{M}_\epsilon^{(n)}(t) = \sum_{s\leq t: |\tilde{M}^{(n)}(s) - \tilde{M}^{(n)}(s-)|\geq \epsilon}|\tilde{M}^{(n)}(s) - \tilde{M}^{(n)}(s-)|. \] If

  1. \(\langle \tilde{M}^{(n)} \rangle(t) \rightarrow_p V(t)\) for all \(t\) as \(n \rightarrow \infty\) where \(V(t)\) is strictly increasing and continuous with \(V(0) = 0\)
  2. \(\langle \tilde{M}_\epsilon^{(n)} \rangle(t) \rightarrow_p 0\) for all \(t\) and \(\epsilon > 0\) as \(n\rightarrow \infty\)

Then \(\tilde{M}^{(n)}(t)\) converges in distribution to a mean 0 Gaussian martingale \(U(t) = W(V(t))\), where \(W\) is a Brownian motion.

Note that \(U(t) = W(V(t))\) is a “time transformation” of a Brownian motion, and \(U(t)\) is a mean 0 martingale with \(\langle U\rangle(t) = V(t)\).
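A simulation sketch in Python for the Poisson case (the constants are arbitrary choices): take \(\tilde{M}^{(n)}(t) = \{N^{(n)}(t) - n\lambda t\}/\sqrt{n}\), where \(N^{(n)}\) is a Poisson process with rate \(n\lambda\). Its predictable variation is \(\lambda t\) and its jumps have size \(1/\sqrt{n}\), so for large \(n\) the value at a fixed \(t\) should look like \(W(\lambda t) \sim N(0, \lambda t)\).

    import numpy as np

    rng = np.random.default_rng(6)
    lam, t, n = 2.0, 1.0, 500
    n_paths = 100_000

    # N^(n)(t) ~ Poisson(n * lam * t); rescaled martingale at the fixed time t
    n_t = rng.poisson(n * lam * t, size=n_paths)
    m_tilde = (n_t - n * lam * t) / np.sqrt(n)

    print(m_tilde.mean())  # close to 0
    print(m_tilde.var())   # close to V(t) = lam * t = 2
    # a histogram or normal Q-Q plot of m_tilde would show the N(0, lam * t) shape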

Summary

Next time