Information Matrix
M is for Estimation
In earlier blogs I discussed two techniques for handling outliers in mortality forecasting models:
Measuring liability uncertainty
Pricing block transactions is a high-stakes business. An insurer writing a bulk annuity has one chance to assess the price to charge for taking on pension liabilities. There is a lot to consider, but at least there is data to work with: for the economic assumptions like interest rates and inflation, the insurer has market prices. For the mortality basis, the insurer usually gets several years of mortality-experience data from the pensi…
Normal behaviour
One interesting aspect of maximum-likelihood estimation is the common behaviour of estimators, regardless of the nature of the data and model. Recall that the maximum-likelihood estimate, \(\hat\theta\), is the value of a parameter \(\theta\) that maximises the likelihood function, \(L(\theta)\), or the log-likelihood function, \(\ell(\theta)=\log L(\theta)\). By way of example, consider the following three single-parameter distributions:
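The excerpt stops before the three distributions are shown, but the maximisation it describes can be sketched numerically. The following is a minimal illustration, assuming an exponential distribution with rate parameter \(\theta\) and simulated data (neither taken from the post): we minimise the negative log-likelihood and compare the result with the known closed-form MLE, \(\hat\theta = 1/\bar x\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Simulated sample (an assumption for illustration): exponential with rate 0.5.
rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=1000)

def neg_log_likelihood(theta):
    # Exponential log-likelihood: l(theta) = n*log(theta) - theta*sum(x).
    # We negate it because the optimiser minimises.
    return -(len(data) * np.log(theta) - theta * data.sum())

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")

# For the exponential distribution the MLE is 1 / sample mean,
# so the numerical optimum should agree with it closely.
print(result.x, 1.0 / data.mean())
```

The same pattern (write down \(\ell(\theta)\), hand its negative to an optimiser) carries over to the single-parameter distributions discussed in the post.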
Lost in translation (reprise)
Laying down the law
In actuarial terminology, a mortality "law" is simply a parametric formula used to describe the risk. A major benefit of this is automatic smoothing and in-filling for areas where data is sparse. A common example in modern annuity portfolios is that there is often plenty of data up to age 75 (say), but relatively little data above age 90.
For example, if we use a parametric formula like the Gompertz law:
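The excerpt cuts off before the formula, but the Gompertz law in its usual actuarial form gives the force of mortality at age \(x\) as

\[\mu_x = e^{\alpha + \beta x},\]

where \(\alpha\) sets the overall level of mortality and \(\beta\) the exponential rate of increase with age. Fitting \(\alpha\) and \(\beta\) to the data-rich ages then smoothly extrapolates the hazard into the sparse ages above 90.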
One small step
A likely story
The foundation for most modern statistical inference is the log-likelihood function. By maximising the value of this function, we find the maximum-likelihood estimate (MLE) for a given parameter, i.e. the most likely value given the model and data. For models with more than one parameter, we find the set of values which jointly maximise the log-likelihood.
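A minimal sketch of joint maximisation for a two-parameter model. Both the data and the choice of model here are assumptions for illustration, not taken from the post: we fit a normal distribution's mean and standard deviation together by minimising the joint negative log-likelihood, and check against the known closed-form MLEs (the sample mean and the biased sample standard deviation).

```python
import numpy as np
from scipy.optimize import minimize

# Simulated sample (an assumption for illustration): normal(5, 2).
rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=2000)

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:
        return np.inf  # keep the optimiser inside the valid region
    # Normal log-likelihood summed over the sample, negated for minimisation.
    return -np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                   - (data - mu) ** 2 / (2.0 * sigma**2))

result = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x

# The joint MLEs should match the sample mean and (biased) sample sd.
print(mu_hat, sigma_hat)
```

The two parameters are found jointly: changing \(\sigma\) changes which \(\mu\) is optimal and vice versa, which is exactly the "set of values which jointly maximise the log-likelihood" described above.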