Catastrophe models must keep pace with climatology
Re/insurers can avoid disruptive model changes by refreshing weather peril models with the latest climate data every year, Karen Clark & Company’s chief executive says
Pioneer of catastrophe modelling urges re/insurers to switch focus from static event sets to dynamic science
Catastrophe modelling will remain the “global standard technology” for pricing and managing extreme event risk, but it must incorporate the latest climate science, the president and chief executive of Karen Clark & Company (KCC) says.
Karen Clark pioneered the probabilistic catastrophe modelling techniques that revolutionised the way re/insurers and financial institutions manage their catastrophe risk.
She founded the first catastrophe modelling company, Applied Insurance Research (AIR), in 1987. In 2002, she sold AIR to Insurance Services Office, which later became Verisk Analytics. She co-founded KCC in 2007 with Vivek Basrur, who created the first data standards and web-based applications for the catastrophe modelling industry in the mid-1990s.
The awards Clark has received include a Nobel Peace Prize certificate in 2007 for her contributions to the work of the Intergovernmental Panel on Climate Change (IPCC). Her close collaboration with climate scientists over decades makes her unusually well qualified among re/insurance veterans to speak about the value of mutual respect between the two disciplines.
In an interview with Insurance Day, Clark says climate scientists sometimes make “derogatory” statements about catastrophe modelling because they fail to appreciate the “main crux”, which is that it is an event-based technology.
“Catastrophe modelling makes hundreds of thousands of projections of potential future events to ensure insurance companies and reinsurance companies are not overly exposed. Scoring algorithms that make use of aerial photography and machine learning may be able to give highly valuable property-specific information, and they can complement catastrophe models, but not replace them,” Clark says.
Catastrophe models thus provide information far beyond that which is specific to a given property. “The point is that insurance companies use catastrophe models to ensure they have a good spread of risk and aren’t overly concentrated and over-exposed to one event. If they are, then they need to buy enough reinsurance, or other types of risk transfer, to cover that exposure,” she adds.
The threat to catastrophe modelling, however, is a failure to encompass climate science. “All the historical data KCC uses for weather-related perils are completely climate conditioned, so the science is an enhancement to our catastrophe models,” she says.
Converting the industry
On a panel with Bermuda market veterans John Swan, Locke Burt, Fiona Luck and Brian O’Hara at the Bermuda Risk Summit in March, Clark said re/insurers themselves can have misconceptions about catastrophe and climate modelling.
“I’ve heard it said, wrongly, that catastrophe models were developed after Hurricane Andrew, which is not the case,” Clark told delegates at the conference.
Before the formation of AIR, she added, all property catastrophe re/insurance was written out of London or New York. The challenge was to convince those markets that they needed a “new-fangled technology” called a computer model.
“AIR’s first product, Catmap, was for many underwriters in New York and Lloyd’s the first contact they had ever had with a computer application,” Clark said. It was hard to convince re/insurance underwriters of the value of computer models because the US had been “in a lull” with respect to hurricane activity.
“Up to 1987, the most costly hurricane had been Alicia in 1983, which was a loss of about $1bn. In 1989, Hugo hit, which was about $4bn. And the industry thought, at the time of Andrew in 1992, that the worst-case scenario was about $7bn,” Clark said.
The issue was that insurers had stopped tracking their exposures.
Clark explained: “Way back when, insurers used to have maps on the wall and put pins in where their insureds’ properties were. When they got too many properties, they didn’t have enough pins and so they weren’t tracking them at all. What were Lloyd’s underwriters using to model their exposure? Premiums. The only data reinsurance underwriters were getting were state, line of business premiums. That was it.”
To enable re/insurers to use catastrophe models, AIR created the first Industry Exposure Database, in 1987. This had exposure by five-digit zip codes, so it was a “reasonable resolution”, Clark said. Underwriters could run losses based on exposures and then use the premiums by state and line of business to calculate the market shares of individual ceding insurers.
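To make that market-share arithmetic concrete, here is a minimal sketch; the premiums, losses and field names are entirely hypothetical and are not figures from AIR’s actual Industry Exposure Database.

```python
# Hypothetical illustration of the market-share approach: industry modelled
# losses by state are scaled by a ceding insurer's share of the industry
# premium for that state and line of business. All figures are invented.

industry_modelled_loss = {"FL": 8_000_000_000, "SC": 1_500_000_000}  # industry loss per state
industry_premium = {("FL", "homeowners"): 2_000_000_000,
                    ("SC", "homeowners"): 400_000_000}
cedant_premium = {("FL", "homeowners"): 100_000_000,
                  ("SC", "homeowners"): 10_000_000}

def estimated_cedant_loss(state: str, line: str) -> float:
    """Approximate a ceding insurer's loss as its premium market share
    applied to the industry modelled loss for the state."""
    share = cedant_premium[(state, line)] / industry_premium[(state, line)]
    return share * industry_modelled_loss[state]

for state in ("FL", "SC"):
    print(state, f"{estimated_cedant_loss(state, 'homeowners'):,.0f}")
```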
"What investors and reinsurers don’t like is trapped capital because indemnity cover means it could take years for them to know what their final loss will be after a major event. And there can be loss creep, like the cost to rebuild a home in that period can increase by 10% to 20%, which wasn’t accounted for when their contract was priced"
Karen Clark
Karen Clark & Company
“It sounds straightforward, but it was difficult to get reinsurers to embrace this new technology. All the same, at the time of Andrew, we had about 30 Lloyd’s syndicates and New York reinsurers using the model, so we weren’t doing that badly, but it wasn’t a wave of people saying, ‘We want this technology’.”
AIR’s model indicated that the worst-case scenario was “more like” $60bn and not $7bn, “so even our own clients didn’t really believe the numbers”. For Hurricane Andrew, AIR estimated the insured losses were likely to be more than $13bn.
“That got faxed out, and our landlines started ringing off the hook. Lloyd’s underwriters argued with me, saying, ‘I bet you five quid it won’t be more than $6bn’.”
The reason for their bet was that Hurricane Andrew made landfall about 50 miles south of Miami. “One underwriter told me it had to be less than Hugo, because Hugo hit Charleston and Andrew missed Miami. Well, what that underwriter didn’t realise is the property value in Dade County alone at the time was more than the property value in the whole state of South Carolina,” Clark said.
The full impact of Hurricane Andrew sank in “about six months later”, when it became clear the industry was going to have to pay out around $15bn. The Bermuda Class of 1993 emerged, including RenaissanceRe, founded by one of AIR’s original clients, Jim Stanard, formerly of F&G Re.
Exposure information improved from zip codes to geo-coded locations, Clark said, improving the accuracy of catastrophe models, which in turn were adopted not just to assess individual risks and portfolios, but to build diversified portfolios.
Trapped capital
Noting the increasing appeal of parametric insurance to manage climate risk, Clark highlights the disquiet indemnity-based cover creates for providers of insurance-linked securities (ILS). The final loss from Hurricane Ian in 2022, for example, is still unknown.
“What investors and reinsurers don’t like is trapped capital because indemnity cover means it could take years for them to know what their final loss will be after a major event. And there can be loss creep, like the cost to rebuild a home in that period can increase by 10% to 20%, which wasn’t accounted for when their contract was priced,” Clark tells Insurance Day.
The “beauty” of a parametric trigger, therefore, is knowing the contract can settle soon after an event. Catastrophe models inform parametric triggers, such as the probability of a certain wind speed in each location. Based on that probability, counterparties will price a parametric deal.
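As an illustration of that mechanism, the sketch below prices a simple wind-speed trigger from a hypothetical stochastic event set; the event rates, payout schedule and loading are invented for the example and do not represent any vendor’s model.

```python
# Minimal sketch of pricing a parametric wind-speed trigger from a
# stochastic event set. Events, rates and the payout schedule are
# hypothetical, not output from any particular catastrophe model.

# Each event: (annual rate of occurrence, peak gust at the trigger location in mph)
stochastic_events = [
    (0.020, 120),
    (0.010, 140),
    (0.004, 160),
    (0.001, 175),
]

def payout(gust_mph: float, limit: float = 50_000_000) -> float:
    """Step payout schedule keyed to the modelled wind speed at the location."""
    if gust_mph >= 150:
        return limit
    if gust_mph >= 130:
        return 0.5 * limit
    return 0.0

# Expected annual payout = sum over events of (event rate x payout for that event)
expected_loss = sum(rate * payout(gust) for rate, gust in stochastic_events)

# A counterparty would then add loadings for risk, uncertainty and expenses
loading = 1.5  # hypothetical multiple
indicative_premium = expected_loss * loading
print(f"expected loss: {expected_loss:,.0f}, indicative premium: {indicative_premium:,.0f}")
```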
Climate risk does not spell the decline of indemnity insurance, however, because parametric products create basis risk for re/insurers.
Clark explains: “For example, there could be a 150 mph hurricane, but it just takes a certain track that is not really where the insurance company has a lot of exposure. That would mean they get a payout when they don’t need it. In contrast, there could be a hurricane with a lower maximum wind speed that goes directly through their exposures and causes them a huge loss.
“The rating agencies, like AM Best and S&P, want to know what kind of reinsurance protection an insurance company has, and they would not look as favourably on a parametric deal as they would on a pure indemnity product. And so, parametric is good for state and federal governments, for a quick payout for recovery, but I don’t think we’ll ever see a world where there’s no indemnity cover for insurers.”
KCC has therefore developed a new product, called a modelled loss transaction, to provide additional reinsurance protection to insurers who have found that traditional reinsurance capacity is shrinking relative to increasing risk.
“A few weeks after a hurricane like Ian, we can run a company’s exposures through our model and give an estimate of the actual loss and that is the number on which the contract will settle, eliminating the issue of trapped capital. There’s still a little bit of basis risk in that, but a lot less than there would be from a parametric trigger,” Clark says.
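In principle, such a settlement might be computed along the following lines; the exposures, damage ratios and contract terms below are placeholders for illustration, not KCC’s actual methodology.

```python
# Illustrative-only sketch of a modelled loss settlement: the company's
# exposures are run against the reconstructed event footprint and the
# resulting modelled loss is applied to the contract terms.

# Hypothetical exposures: (location id, total insured value, modelled
# damage ratio at that location given the event footprint)
exposures = [
    ("loc-001", 400_000, 0.30),
    ("loc-002", 250_000, 0.05),
    ("loc-003", 600_000, 0.00),  # outside the footprint
]

modelled_loss = sum(tiv * damage_ratio for _, tiv, damage_ratio in exposures)

# The contract settles on the modelled loss rather than indemnity claims,
# so capital is released without waiting years for claims to develop.
retention, limit = 100_000, 500_000  # placeholder contract terms
settlement = min(max(modelled_loss - retention, 0.0), limit)
print(f"modelled loss: {modelled_loss:,.0f}, settlement: {settlement:,.0f}")
```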
KCC is providing the necessary information for a lot of these covers now, she adds, especially for severe convective storm, for which reinsurers are requiring insurers to raise their retentions. “We’re innovating with different types of reinsurance transactions, and there will be innovation going forward, but I don’t think parametric-based is the ideal solution for insurers.”
Physical techniques
The main output of a catastrophe model is the exceedance probability (EP) curve, which gives the probability that losses will exceed different thresholds, derived from the stochastic event set. The EP curve is based on hypothetical events, but modelled loss transaction triggers are based on the actual event, Clark stresses.
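For readers less familiar with the mechanics, the sketch below shows how an EP curve can be built from simulated annual losses and read off at standard return periods; the loss-generating process is invented purely to produce illustrative numbers.

```python
import random

# Minimal sketch: build an exceedance probability curve from simulated
# annual losses. A real catastrophe model would supply these losses;
# here they are drawn from an arbitrary heavy-tailed distribution.
random.seed(1)
n_years = 10_000
annual_losses = [random.paretovariate(1.5) * 1_000_000 for _ in range(n_years)]

# Sort losses from largest to smallest; the k-th largest loss is exceeded
# in roughly k out of n simulated years.
losses_desc = sorted(annual_losses, reverse=True)
ep_curve = [(loss, (rank + 1) / n_years) for rank, loss in enumerate(losses_desc)]

def loss_at_return_period(years: float) -> float:
    """Read off the loss whose exceedance probability is 1 / return period."""
    target = 1.0 / years
    for loss, prob in ep_curve:
        if prob >= target:
            return loss
    return ep_curve[-1][0]

for rp in (10, 100, 250):
    print(f"{rp}-year loss: {loss_at_return_period(rp):,.0f}")
```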
“At KCC, we’ve implemented very advanced physical modelling techniques – that ingest high-resolution radar, atmospheric and satellite data – and that can accurately reproduce the complexities of frequency perils such as severe convective storm. We model the physics of the atmosphere in four dimensions, including time,” she says.
KCC has adapted the numerical weather prediction techniques used by weather forecasters for catastrophe modelling.
“Our models automatically produce every morning, by 7am Eastern time, a hail and a tornado wind footprint that insurers use to estimate their claims that day. And of course, those are the same footprints that will be used to settle modelled loss transactions. It’s a very advanced technology, based on the physics of the atmosphere and not on a parameterised model,” Clark adds.
A catastrophe model can quantify risk reduction, enabling insurers to give credits to insureds who have mitigated their risks, incentivising property owners to “do the right thing”. Beyond that, however, climate risk reduction needs societal and political will. “If we really want risk reduction as a society, then we’re going to have to do more than give insurance mitigation credits. We’re going to need to stop building properties in areas highly exposed to sea-level rise, for example,” Clark says.
Sourcing the best and latest data is a constant pursuit, Clark says, noting that the higher the resolution, the more chance there is for mistakes. That means data must be continually tested and verified, she says. However, there is an issue with data on commercial and industrial properties, which are not always in the best format or of the highest quality, she adds.
California problem
On incentives for insurers to continue to provide cover, Clark describes the situation in California where potential losses from wildfire have made it unprofitable for some of them to stay.
As insurers have reduced writings in the state, homeowners can obtain only limited coverage from the FAIR Plan Association, established in 1968 to meet the needs of California homeowners unable to find insurance in the traditional marketplace. This is a syndicated fire insurance pool comprising all insurers licensed to conduct property/casualty business in California.
Clark highlights the problem created by the state not allowing insurers to use catastrophe models to assess the risk. “If insurers can’t use the catastrophe models, then they’re not going to write as much there,” she says. “In the US, a lot of the regulators are elected politicians and, of course, they want to keep their constituents happy, but insurers need to charge homeowners the true cost of the risk.”
There are reasons to be hopeful, however. KCC is among the companies working with the California insurance commissioner on how to allow the use of catastrophe models in the state. The California Department of Insurance held a hearing on catastrophe modelling and insurance on April 23 to discuss a proposed regulation that would allow for the use of wildfire and flood catastrophe models for residential property insurance ratemaking. Not only would this allow insurers in the state to use the tools the rest of the industry uses, but it would also establish an independent review process that aims to provide transparency about the scientific appropriateness of these models.
“They’re moving in the right direction,” Clark says, “because homeowners have to understand that climate change is happening, property values are going up, and this is increasing the risk, which has to be paid for.”
Confidence levels
The science continues to evolve. Clark notes that, in its last two reports, the IPCC has expanded its established focus on changes to atmospheric variables to also include what catastrophe modellers already analyse – extreme weather events.
“For example, current scientific consensus is that hurricane severity is increasing, not frequency, but severity. With respect to wildfires, it’s both frequency and intensity that are increasing. The same with floods.
“In their most recent assessment report, AR6, the scientific community assigned confidence levels to their projections. There’s high confidence in the impacts of climate change on hurricanes, but confidence is lower for severe convective storms. It is very important for insurers and reinsurers to know, not just what the projections are, but how much confidence we have in those projections. And of course, as time goes on, we’ll get more and more confidence across more perils.”
Clark encourages re/insurers to drop the term secondary peril, which the sector created to explain events, such as severe convective storms, that were not expected to produce solvency-impairing losses. Instead, KCC has always used the term frequency peril and started modelling severe convective storm as long ago as 2015.
"KCC has always put a big emphasis on the frequency perils – not secondary perils, frequency perils – with severe convective storm, winter storm and wildfire being the challenges of today. What the industry is focusing on a little bit more, is not necessarily the tail of the distribution, but the lower return period losses, and how climate change and other factors are influencing those losses.
“A lot of the disruption we’re seeing in the market is coming from the frequency of medium-sized losses, and it’s generally accepted that we shouldn’t be using the term secondary because these events are just as important as hurricanes and earthquakes,” Clark says.
Frequent perils need frequent updates and Clark warns re/insurers against catastrophe models that do not keep pace with climate science. So that updates are “evolutionary rather than disruptive”, KCC refreshes its weather-related peril models “at least every two years, if not every year”.
Clark concludes: “The industry has been in a world where most of the catastrophe models they’re using are very static, with an update once every five to six years or longer. That simply isn’t going to cut it in the world of climate change.”