Removing the Guesswork for Improved Premium Rating
By Stuart Rose
The business of insurance consists of evaluating and pricing risks, resulting in a business model that is unlike any other industry. Insurance is one of the few products the manufacturer sells without knowing, at the time of sale, exactly how much it will cost to produce. Insurers must make educated guesses about their future costs, and the penalties for inaccurate guesses range from a loss of customers to insolvency.
Ratemaking is the process of establishing rates charged by an insurer for accepting the risk. In a broader sense, the goal of ratemaking is to determine rates that will, when applied to the exposures underlying the risks being written, provide sufficient funds to cover expected losses and expenses; maintain an adequate margin for adverse deviation; and produce reasonable returns on funds provided by investors.
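The broad goal above can be illustrated with a toy rate calculation. The loading formula and all figures below are hypothetical simplifications, not an actuarial standard, but they show how losses, expenses and a margin combine into a rate per exposure unit.

```python
# Illustrative rate calculation (hypothetical figures and formula,
# not a prescribed actuarial method).
def indicated_rate(expected_losses, expenses, profit_margin, exposures):
    """Rate per exposure unit covering losses and expenses, grossed
    up so the target profit/contingency margin is preserved."""
    return (expected_losses + expenses) / (exposures * (1 - profit_margin))

# Example: $6M expected losses, $1.5M expenses, 5% margin, 10,000 exposures.
rate = indicated_rate(6_000_000, 1_500_000, 0.05, 10_000)
print(round(rate, 2))  # → 789.47 per exposure unit
```

If any component is misestimated, the shortfall flows straight through to results, which is why the article stresses data quality and modeling rigor in the sections that follow.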
The determination of premium rates has always been an area requiring vast amounts of data, deep insurance expertise in the form of actuaries, and advanced analytical techniques. Today, rapid advances in technology are providing an opportunity for insurers to take this discipline and knowledge to the next level with price optimization.
Insurers looking to implement a price optimization strategy must consider these essential components:
- Data management.
- Data exploration.
- Predictive modeling.
Data is the foundation of the ratemaking process. Without data of sufficient quantity and acceptable quality, actuarial pricing models cannot be built. The insurance industry is well known for having a broad set of quantitative data, but it sometimes fails to use that data efficiently.
Inconsistent, incomplete and inaccurate data spread across multiple operational systems often causes actuaries to make pricing decisions based on just a fraction of the available data. In many insurance companies, data remains in product, channel, geographic and business-unit silos. Companies tend to have multiple legacy systems built on different technologies using different hardware, operating systems and database engines. And, of course, they all have vastly different data structures.
To combat this silo approach and alleviate data quality problems, a growing number of insurers are undertaking enterprisewide data management projects. By combining data quality processes with data integration, insurers can transform and merge disparate data, remove inaccuracies, standardize on common values, and cleanse data to create a single strategic, trustworthy and valuable asset that enhances the ratemaking process.
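A minimal sketch of the merge-standardize-cleanse steps just described, using plain Python. The two "legacy systems", their field names and the state-code cleanup rule are all hypothetical, chosen only to show duplicates being removed and inconsistent values standardized.

```python
# Hypothetical policy extracts from two legacy systems with
# inconsistent state codes and a duplicate record.
def standardize_state(raw):
    """Normalize a free-form state code, e.g. 'ny ' or 'N.Y.' -> 'NY'."""
    return raw.strip().replace(".", "").upper()

auto_system = [
    {"policy_id": "A1", "state": "NY",   "premium": 1200.0},
    {"policy_id": "A2", "state": "ny ",  "premium": 950.0},
    {"policy_id": "A2", "state": "ny ",  "premium": 950.0},  # duplicate
]
home_system = [{"policy_id": "H1", "state": "N.Y.", "premium": 800.0}]

merged = {}
for record in auto_system + home_system:
    cleaned = dict(record, state=standardize_state(record["state"]))
    merged[cleaned["policy_id"]] = cleaned  # keying on id removes duplicates

policies = list(merged.values())
print(len(policies))                   # → 3
print({p["state"] for p in policies})  # → {'NY'}
```

In practice this work is done with dedicated data integration and data quality tooling rather than hand-written scripts, but the operations are the same: merge, standardize, deduplicate.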
Multiple factors go into determining premium rates, and as competition increases, insurers are introducing new, innovative rate structures. The critical question in ratemaking is, “What risk factors or variables are important for predicting the likelihood, frequency, and severity of a loss?” Although there are many obvious risk factors that affect rates, subtle and nonintuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses.
However, it remains a challenge for most insurance organizations to turn increasingly large amounts of data into useful insights and to find the relationships among variables. Robust data exploration and visualization are a prerequisite for any ratemaking process. Data visualization enables business analysts and actuaries to see things that were not obvious to them before. In addition, data visualization conveys information in a universal manner and makes it simple to share findings with others.
The insurance industry depends heavily on predictive models, so an essential part of any ratemaking process is the creation and deployment of predictive models. The most common types of ratemaking models in the insurance industry are frequency, severity, and pure premium models. Frequency models predict how often claims are made and are typically modeled using a Poisson distribution. Severity models predict claim amounts and are modeled using a gamma distribution. Pure premium models predict expected loss per unit of exposure, excluding general expenses of doing business such as overhead and commissions, and are modeled using a Tweedie distribution.
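The relationship among these model types can be shown with a toy calculation: pure premium is the product of expected claim frequency and expected severity. The claim counts and amounts below are hypothetical.

```python
# Toy frequency/severity/pure premium calculation (hypothetical data).
claims = [0, 1, 0, 2, 0, 0, 1, 0]            # claim counts per policy-year
amounts = [3200.0, 1800.0, 2500.0, 4100.0]   # paid amount per claim

exposure_years = len(claims)
frequency = sum(claims) / exposure_years     # expected claims per policy-year
severity = sum(amounts) / len(amounts)       # expected cost per claim
pure_premium = frequency * severity          # expected loss per policy-year

print(frequency, severity, pure_premium)     # → 0.5 2900.0 1450.0
```

In a real ratemaking exercise, frequency and severity are each modeled as a function of rating variables rather than as portfolio-wide averages, which is where the GLMs discussed next come in.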
In recent years the insurance industry has undergone significant transformation as more insurance companies have adopted generalized linear models (GLMs) for ratemaking. By identifying additional characteristics that segment existing rating cells into smaller cells with distinct rates, an insurer with this more granular structure can compete on price for well-understood risks while still charging each risk an appropriate premium. In addition, GLMs provide statistical diagnostics that aid in selecting only significant variables and in validating model assumptions. Today GLMs are widely recognized as the industry standard.
The insurance industry currently faces a storm of new challenges. Escalating claims costs, low interest rates, volatile investment markets and regulatory compliance are driving insurers to seek new strategies to sustain value for their stakeholders. Insurance rating and pricing is an area where even small improvements can have a dramatic impact on profitability.
Proper price segmentation, especially in lines of business where price is a key differentiator, such as auto, home and some commercial lines, represents the future for insurance. The best ratemaking strategy for achieving your company's specific business goals incorporates data on operating costs, consumer buying behavior and the competitive environment into your pricing models. Ultimately, even with the most sophisticated analytical software and the explosion of data, insurers should not forget the fundamental principle of successful pricing: The only badly priced risk is an underpriced risk.
(Stuart Rose is Global Insurance Marketing Manager at business analytics provider SAS)