
This article was reproduced strictly for illustrative purposes during the development of the FBSRe Website. Full attribution is given to:

PREDICTIVE MODELING APPLICATIONS IN ACTUARIAL SCIENCE Volume I: Predictive Modeling Techniques

Edited by

EDWARD W. FREES

University of Wisconsin, Madison

RICHARD A. DERRIG

Opal Consulting LLC, Providence, Rhode Island

GLENN MEYERS

ISO Innovative Analytics, Jersey City, New Jersey

© Cambridge University Press 2014

Information on this title: www.cambridge.org/9781107029873

A classic definition of an actuary is “one who determines the current financial impact of future contingent events.” Actuaries are typically employed by insurance companies, whose job is to spread the cost of these future contingent events. The day-to-day work of an actuary has evolved over time.

Initially, the work involved tabulating outcomes for “like” events and calculating the average outcome. For example, an actuary might be called on to estimate the cost of providing a death benefit to each member of a group of 45-year-old men. As a second example, an actuary might be called on to estimate the cost of damages that arise from an automobile accident for a 45-year-old driver living in Chicago. This works well as long as the “groups” are large enough to yield reliable estimates of the average outcomes.

Insurance is a business in which companies bid for contracts that provide future benefits in return for money (i.e., premiums) now. The viability of an insurance company depends on its ability to accurately estimate the cost of the future benefits it promises to provide. At first glance, one might think it necessary to obtain data from sufficiently large groups of “like” individuals. But when one begins the effort of obtaining a sufficient volume of “like” data, one is almost immediately tempted to consider using “similar” data. For example, in life insurance one might want to use data from a select group of men between the ages of 40 and 50 to estimate the cost of death benefits promised to a 45-year-old man. Or better yet, one may want to use the data from all the men in that age group to estimate the cost of death benefits for all the men in that group. In the automobile insurance example, one may want to use the combined experience of all adult men living in Chicago and Evanston (a suburb north of Chicago) to estimate the cost of damages for each man living in either city arising from an automobile accident.

Making use of “similar” data as opposed to “like” data raises a number of issues. For example, one expects the future lifetime of a 25-year-old male to be longer than that of a 45-year-old male, and an estimate of future lifetime should take this difference into account. In the case of automobile insurance, there is no a priori reason to expect the damage from accidents to a person living in Chicago to be larger (or smaller) than the damage to a person living in Evanston. However, the driving environment and the need to drive are quite different in the two cities, and it would not be prudent to make an estimate assuming that the expected damage is the same in each city. The process of estimating insurance costs is now called “predictive modeling.” In a very real sense, actuaries had been doing “predictive modeling” long before the term became popular.

In 1869, the U.S. Supreme Court ruled in Paul v. Virginia that “issuing a policy of insurance is not a transaction of commerce.” This case had the effect of granting antitrust immunity to the business of insurance. As a result, insurance rates were controlled by cartels whose rates were subject to regulation by the individual states. To support this regulation, insurers were required to report detailed policy and claim data to the regulators according to standards set by an approved statistical plan. The Supreme Court changed the regulatory environment in 1944. In United States v. South-Eastern Underwriters Association it ruled that federal antitrust law did apply to insurance, under the authority of the Commerce Clause of the U.S. Constitution. But by this time, the states had a long-established tradition of regulating insurance companies. So in response, the U.S. Congress passed the McCarran-Ferguson Act in 1945, which grants insurance companies exemption from federal antitrust laws so long as they are regulated by the states.
However, the federal antitrust laws did still apply in cases of boycott, coercion, and intimidation. The effect of the McCarran-Ferguson Act was to eliminate the cartels and free the insurers to file competitive rates. Even so, state regulators still required insurers to compile and report detailed policy and claim data according to approved statistical plans. Industry compilations of these data were available, and insurance companies were able to use the same systems to organize their own data. Under the cartels, there had been no economic incentive for a refined risk classification plan, so there were very few risk classifications. Over the next few decades, insurance companies competed by using these data to identify the more profitable classes of insurance. As time passed, “predictive modeling” led to more refined class plans.

As insurers began refining their class plans, computers were entering the insurance company workplace. In the 1960s and 1970s, mainframe computers would generate thick reports from which actuaries would copy numbers onto a worksheet; using at first mechanical and then electronic calculators, they would calculate insurance rates. By the late 1970s some actuaries were given access to mainframe computers and to statistical software packages such as SAS. By the late 1980s many actuaries had personal computers with spreadsheet software on their desks.

As computers were introduced into the actuarial work environment, a variety of data sources also became available. These included credit reports, econometric time series, geographic information systems, and census data. Combining these data with the detailed statistical plan data enabled many insurers to continue refining their class plans. The refining process continues to this day.
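To make the “like” versus “similar” distinction above concrete, here is a minimal sketch in Python. The figures, and the log-linear age trend used to pool nearby ages, are illustrative assumptions rather than anything prescribed by the text:

```python
# A minimal sketch (hypothetical data) contrasting the "like" and
# "similar" approaches: average the outcomes for an exact-age group,
# or borrow strength from nearby ages with a simple trend model.
import numpy as np

# Hypothetical experience: age and observed annual death-benefit
# cost per policy (in arbitrary units).
ages = np.array([40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50])
costs = np.array([2.1, 2.3, 2.6, 2.8, 3.2, 3.5, 3.9, 4.4, 4.9, 5.5, 6.1])

# "Like" data: use only the observations for 45-year-olds.
like_estimate = costs[ages == 45].mean()

# "Similar" data: fit a log-linear trend across ages 40-50 and read
# off the fitted value at age 45.  (Mortality costs tend to grow
# roughly exponentially with age, hence the log transform.)
slope, intercept = np.polyfit(ages, np.log(costs), deg=1)
similar_estimate = np.exp(intercept + slope * 45)

print(f"'like' estimate at age 45:    {like_estimate:.2f}")
print(f"'similar' estimate at age 45: {similar_estimate:.2f}")
```

The comparison captures the trade-off described above: the exact-age (“like”) estimate uses little data, while the trend (“similar”) estimate borrows strength from nearby ages at the cost of a modeling assumption.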

Although actuarial predictive modeling originated in rate making, its use has now spread to loss reserving and the more general area of product management. Specifically, actuarial predictive modeling is used in the following areas:

  • Initial Underwriting. As described in the previous section, predictive modeling has its actuarial roots in rate making, where analysts seek to determine the right price for the right risk and avoid adverse selection.
  • Renewal Underwriting. Predictive modeling is also used at the policy renewal stage where the goal is to retain profitable customers.
  • Claims Management. Predictive modeling has long been used by actuaries for
  1. managing claim costs, including identifying the appropriate support for claims-handling expenses and detecting and preventing claims fraud, and for
  2. understanding excess layers for reinsurance and retention.
  • Reserving. More recently, predictive modeling tools have been used to provide management with an appropriate estimate of future obligations and to quantify the uncertainty of those estimates (a minimal sketch follows this list).
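The Reserving item above mentions quantifying the uncertainty of an estimate. As a minimal sketch, assuming purely hypothetical claim figures, the following bootstrap resamples the data to approximate the sampling distribution of a point estimate; production reserving models (chain ladder and its relatives) are considerably richer:

```python
# A minimal sketch (hypothetical figures) of quantifying the
# uncertainty of an estimate by bootstrap resampling.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical ultimate losses (in $000s) for 12 settled claims.
losses = np.array([12, 45, 7, 88, 23, 51, 9, 140, 33, 61, 18, 72])

# Point estimate of the expected cost of a future obligation.
point_estimate = losses.mean()

# Bootstrap: resample the claims with replacement many times and
# record each resample's mean to approximate its sampling distribution.
boot_means = np.array([
    rng.choice(losses, size=losses.size, replace=True).mean()
    for _ in range(10_000)
])

low, high = np.percentile(boot_means, [5, 95])
print(f"point estimate: {point_estimate:.1f}")
print(f"90% bootstrap interval: ({low:.1f}, {high:.1f})")
```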

As the environment became favorable for predictive modeling, some insurers seized the opportunity it presented and began to increase market share by refining their risk classification systems and adopting “skimming the cream” underwriting strategies.

Actuaries learn and develop modeling tools to solve “actuarial” problems. With these tools, actuaries are well equipped to make contributions to broader company areas and initiatives. This broader scope, sometimes known as “business analytics,” can include the following areas:

  • Sales and Marketing – these departments have long used analytics to predict customer behavior and needs, anticipate customer reactions to promotions, and reduce acquisition costs (direct mail, discount programs); a sketch of one such customer-behavior model appears at the end of this section.
  • Compensation Analysis – predictive modeling tools can be used to incentivize and reward appropriate employee/agent behavior.
  • Productivity Analysis – more general than the analysis of compensation, analytic tools can be used to understand production by employees and other units of business, as well as to seek to optimize that production.
  • Financial Forecasting – analytic tools have traditionally been used to predict the financial results of firms.

Predictive modeling in the insurance industry is not an exercise that a small group of actuaries can do by themselves. It requires an insurer to make significant investments in its information technology, marketing, underwriting, and actuarial functions.
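As one concrete instance of the customer-behavior modeling mentioned under Sales and Marketing above, here is a minimal sketch that scores the probability a policyholder responds to a promotion. The features, the simulated data, and the use of scikit-learn are all illustrative assumptions, not tools prescribed by the text:

```python
# A minimal sketch (hypothetical data) of customer-behavior modeling:
# a logistic regression scoring the probability that a policyholder
# responds to a promotion.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Hypothetical features: [age, years as customer, prior promotions accepted]
X = rng.normal(loc=[45, 5, 1], scale=[10, 3, 1], size=(500, 3))

# Hypothetical response: loyal, frequently promoted customers respond more.
logits = -4.0 + 0.3 * X[:, 1] + 0.8 * X[:, 2]
y = rng.random(500) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)

# Score a new customer: 45 years old, 8-year tenure, 2 promotions accepted.
prob = model.predict_proba([[45, 8, 2]])[0, 1]
print(f"predicted response probability: {prob:.2f}")
```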