The long shadow of the life table
For centuries, the life table has been at the centre of actuarial work. It sets out the gradual extinction of a hypothetical population, often a birth cohort. At age \(x\), \(l_x\) individuals are still alive, and \(d_x\) of them die during the next year, so a year later \(l_{x+1} = l_x - d_x\) are still alive, and so on. The following table is an extract from a life table, describing 100,000 persons in a birth cohort, of whom the last dies at age 114.
Age \(x\) | \(l_x\) | \(d_x\) |
---|---|---|
0 | 100,000 | 362 |
1 | 99,638 | 236 |
2 | 99,402 | 225 |
3 | 99,177 | 229 |
4 | 98,948 | 234 |
\(\ldots\) | \(\ldots\) | \(\ldots\) |
\(\ldots\) | \(\ldots\) | \(\ldots\) |
111 | 18 | 10 |
112 | 8 | 5 |
113 | 3 | 2 |
114 | 1 | 1 |
115 | 0 | 0 |
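The recurrence \(l_{x+1} = l_x - d_x\) is simple enough to sketch in a few lines of code. This is an illustrative sketch only: the rates \(q_x\) below are assumed values chosen to roughly reproduce the first rows of the extract above, not rates taken from any published table.

```python
# A minimal sketch of the life-table recurrence l_{x+1} = l_x - d_x.
# The q_x values are illustrative assumptions, not published rates.
def build_life_table(l0, q):
    """Return lists (l, d): d[x] is deaths in year of age x,
    l[x+1] = l[x] - d[x] is the number still alive at age x+1."""
    l, d = [l0], []
    for qx in q:
        dx = round(l[-1] * qx)  # expected deaths, rounded to whole lives
        d.append(dx)
        l.append(l[-1] - dx)
    return l, d

# Illustrative one-year death probabilities for ages 0..3.
q = [0.00362, 0.00237, 0.00226, 0.00231]
l, d = build_life_table(100_000, q)
```

Running this gives \(l_1 = 99{,}638\), \(l_2 = 99{,}402\) and so on, matching the opening rows of the extract.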
The life table is simple, intuitive, and requires hardly any mathematics to grasp its meaning. This was vitally important when, in 1756, James Dodson used it to invent modern life insurance. But it casts a long shadow, even to this day, and has tended to guide actuaries' thinking along tramlines of which many may not even be aware.
The modern context for mortality analysis is really quite recent; it barely existed in 1956, never mind 1756. It exists now, in the form of building statistical models that in some sense are 'capable of generating the observed data', and using the properties of these models to examine goodness-of-fit, test hypotheses, select the 'best' of several candidate models, and much else. With that in mind, where does the life table lead the analyst?
- It suggests that the target of estimation, from observed data, is the life table itself, since this is the actuary's principal tool.
- In probabilistic terms, this translates quite easily into a binomial model of deaths between integer ages. Of \(E_x\) people alive at age \(x\), a random number \(D_x\) die. Then \(D_x\) has a binomial distribution with parameters \(E_x\) and \(q_x\), where \(q_x\) is the probability that any one person age \(x\) will die before age \(x+1\). The binomial distribution is simple and well-understood.
- However, this focuses our attention on a collective, the \(E_x\) persons alive at age \(x\), and away from individuals. Modelling the death or survival of a single individual is much more fundamental. Individuals are the 'atoms' of the observed data that we want a model to be capable of generating.
- It also focuses our attention on a time interval — deaths observed between ages \(x\) and \(x+1\). The \(D_x\) people who die all die sometime between ages \(x\) and \(x+1\), but each one of them has their own exact age at death. Ignoring this is a loss of information.
- We must assume a high degree of homogeneity among the \(E_x\) observed individuals. We can allow for differences in sex, smoking habits, income and so on only by subdividing into smaller homogeneous groups (a process called stratification, which can quickly exhaust even large data sets' ability to support many risk factors; see Richards et al., 2013).
- The binomial model does not allow for the realities of observation. Some persons among the \(E_x\) alive at age \(x\) will leave before age \(x+1\) for reasons other than dying — lapsing a life insurance policy for example. This is right-censoring, and is perhaps the canonical feature that separates survival modelling from ordinary statistical modelling. Others not among the \(E_x\) alive at age \(x\) will join before age \(x+1\), for example by buying an annuity at age 63.5. This is left-truncation and is almost always encountered in actuarial studies. Approximate methods of allowing for these features in a binomial model comprised a form of mental torture inflicted on many generations of actuarial students (including my own).
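The estimation step under the binomial model is itself trivial, which is part of its appeal. The sketch below shows the maximum-likelihood estimate \(\hat{q}_x = d_x / e_x\) with its approximate standard error; note that it simply assumes every life is observed for the full year of age, so it makes none of the allowances for right-censoring or left-truncation discussed above. The figures are illustrative, not real data.

```python
import math

# A sketch of estimation under the binomial model D_x ~ Binomial(E_x, q_x).
# MLE: q_hat = d/e, with approximate standard error sqrt(q_hat*(1-q_hat)/e).
# This assumes complete one-year observation of every life, i.e. it ignores
# the right-censoring and left-truncation that real portfolios exhibit.
def estimate_qx(deaths, exposed):
    qhat = deaths / exposed
    se = math.sqrt(qhat * (1.0 - qhat) / exposed)
    return qhat, se

# Illustrative figures: 362 deaths among 100,000 lives in their first year.
qhat, se = estimate_qx(362, 100_000)
```

The narrow standard error here is deceptive: it is only valid if the \(E_x\) lives really were homogeneous and fully observed, which is exactly what the bullet points above call into question.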
A more fundamental approach is to break away from the mental model provided by the life table, and ask the simplest possible question about a single person now aged \(x\): what is the probability that they will die in a short interval of time \(dt\)? This leads to the hazard rate (or, to actuaries, the force of mortality). This is not a modern concept; it is almost as old as the life table itself. Indeed, Gompertz's law of 1825 was expressed as a hazard rate. What is modern is the formulation of statistical models based on the mortality of an individual person, in which the hazard rate is the fundamental quantity and the target of estimation. This is the approach adopted in our recent book, Modelling Mortality with Actuarial Applications (Macdonald, Richards and Currie, 2018). There, the reader will still find many examples of estimation based on collective data — observations of groups of lives between ages \(x\) and \(x+1\) — but always now based on the sounder foundations of individual life histories.
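The hazard-rate view is easy to make concrete. Under Gompertz's law the hazard is \(\mu_x = B c^x\), and the probability of surviving \(t\) years from age \(x\) is \(\exp\!\left(-B c^x (c^t - 1)/\ln c\right)\), from which one-year probabilities like \(q_x\) follow as derived quantities rather than the starting point. The parameter values below are illustrative assumptions, not fitted estimates.

```python
import math

# A sketch of the individual-level view under Gompertz's law.
# Hazard: mu(x) = B * c**x.
# Survival from age x for t years: exp(-B * c**x * (c**t - 1) / ln(c)),
# the exponential of minus the integrated hazard.
# B and c are illustrative assumptions, not fitted values.
B, c = 2.7e-5, 1.098

def hazard(x):
    return B * c**x

def survival(x, t):
    return math.exp(-B * c**x * (c**t - 1.0) / math.log(c))

def qx(x):
    # The life-table q_x is now a derived quantity, not the target of estimation.
    return 1.0 - survival(x, 1.0)
```

The point of the sketch is the direction of travel: the hazard rate is primitive, and the life-table quantities \(q_x\) and \(l_x\) can always be recovered from it, but not vice versa without losing the individual-level information.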
References:
Dodson, J. (1756) First Lectures in Insurance.
Gompertz, B. (1825) The Nature of the Function Expressive of the Law of Human Mortality, Philosophical Transactions of the Royal Society, 115, 513–585.
Macdonald, A. S., Richards, S. J. and Currie, I. D. (2018) Modelling Mortality with Actuarial Applications, Cambridge University Press.
Richards, S. J., Kaufhold, K. and Rosenbusch, S. (2013) Creating portfolio-specific mortality tables: a case study, European Actuarial Journal, 3(2), 295–319.