Modelling natural catastrophes by means of stochastic simulation started in the late 1980s and has become increasingly common since. The growing level of detail, whether in geographic resolution, exposure data or event modelling, has produced an enormous amount of data and suggests that modelling should have improved over time. Nevertheless, considerable uncertainty remains in the case of major events, as nearly every major event of the past two decades has demonstrated.
As early as 1994, immediately after the Northridge earthquake, Eberhard Müller published some general remarks on the usability and limitations of natural catastrophe simulation models and ventured forecasts of future developments. Now, more than 20 years later, he compares those predictions with the most recent developments.
He brings with him the latest news from the annual Natural Catastrophe Modelling Conference of the RAA (Reinsurance Association of America) in Orlando, including a comparison of vendor model results for selected "as-if" US events.
His conclusion, as 20 years ago, is that nothing better is available than "state of the art" natural catastrophe simulation models for determining exposures by return period and net risk premiums (the expected value of losses), as well as volatility measures derived from the full probability distribution, often presented as "non-exceedance curves". But that does not mean that you simply "feed a model and get the truth".
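The quantities mentioned above can be illustrated with a minimal sketch, assuming a set of simulated annual losses (the distribution and figures below are purely hypothetical, not taken from any vendor model or from the talk): the net risk premium is the mean of the simulated losses, volatility is a dispersion measure of the same distribution, and the loss at a return period of T years is the quantile of the non-exceedance curve at probability 1 - 1/T.

```python
import numpy as np

# Hypothetical simulated annual losses (in EUR m); real vendor models
# produce catalogues of tens of thousands of simulated years.
rng = np.random.default_rng(42)
annual_losses = rng.pareto(a=2.5, size=50_000) * 10.0

# Net risk premium: the expected value of the annual loss.
net_risk_premium = annual_losses.mean()

# A simple volatility measure from the full probability distribution.
volatility = annual_losses.std()

def loss_at_return_period(losses, t_years):
    """Empirical loss level not exceeded in a fraction 1 - 1/T of years,
    i.e. a point on the non-exceedance curve."""
    return np.quantile(losses, 1.0 - 1.0 / t_years)

for t in (10, 50, 100, 250):
    print(f"{t:>4}-year loss: {loss_at_return_period(annual_losses, t):8.1f}")

print(f"net risk premium: {net_risk_premium:.1f}, volatility: {volatility:.1f}")
```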
Everybody active in this field has substantial room for decisions: whom to trust, and how one's own assessments, where they deviate from the vendor models, can be factored in.
This matters even more when model results are used for Solvency II purposes, e.g. within internal models. Finally, it must be concluded that there is no "absolute truth" but only a "transparent process" in which results, including one's own assessments, are regularly checked as to whether they are still trustworthy or whether adjustments to the process are advisable.
Organised by the EAA - European Actuarial Academy GmbH.