Sometimes the problem lies with the algorithm rather than the data. Algorithms may fail to produce reliably accurate results. (Even an algorithm that is right most of the time will sometimes be wrong.) Other algorithms are so complex that their developers may not be able to tell regulators or consumers how a particular decision was reached. And algorithms may base their decisions on race or other factors that cannot lawfully be taken into account. The challenge is that data feeding into an algorithm can be highly correlated with race or another prohibited factor, yet the correlation won't necessarily be evident without testing.
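To make the testing point concrete, here's a minimal sketch (in Python) of one way to check whether a rating variable is quietly standing in for a protected class. The dataset, the column names ("credit_tier", "race") and the flagging threshold are all hypothetical; a real review would involve actuarial and legal judgment well beyond a single statistic.

```python
# A hypothetical proxy test: how strongly does a rating variable
# ("credit_tier") predict a protected attribute ("race")?
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("policyholders.csv")  # hypothetical dataset

# Cross-tabulate the rating variable against the protected attribute
table = pd.crosstab(df["credit_tier"], df["race"])
chi2, p, dof, _ = chi2_contingency(table)

# Cramer's V: 0 = no association, 1 = the variable is a perfect proxy
n = table.to_numpy().sum()
cramers_v = (chi2 / (n * (min(table.shape) - 1))) ** 0.5
print(f"p-value: {p:.4g}, Cramer's V: {cramers_v:.3f}")

if cramers_v > 0.3:  # illustrative threshold, not a legal standard
    print("credit_tier may be acting as a proxy for race; investigate further")
```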
These problems with data and algorithms could harm consumers, and the harm could be widespread.
- How has the National Association of Insurance Commissioners responded?
The concerns articulated by Commissioner Nickel last year are precisely why the NAIC formed a Big Data Working Group. The mission of the Working Group is “to assist state insurance regulators in obtaining a clear understanding of what data is collected, how it is collected, and how it is used by insurers and third parties in the context of marketing, rating, underwriting, and claims.” The initial focus is on auto and homeowner’s insurance, but regulators have made it clear that they will soon look at other lines.
One of the first things the Working Group did was to research existing laws addressing insurers' use of consumer and non-insurance data, particularly as it relates to rating and claims handling. Key findings included the following:
- Insurers cannot refuse to insure or limit the amount of coverage available to an individual because of the sex, marital status, race, religion or national origin of the individual.
- Rates can't be excessive, inadequate or unfairly discriminatory. (A rate is unfairly discriminatory if differences in price do not fairly reflect differences in expected losses and expenses; a worked illustration follows this list.)
- Risk classifications cannot be based on the race, creed, national origin or religion of the insured.
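To see what the "unfairly discriminatory" standard means in practice, here's a toy calculation with invented numbers: the price difference between two rating classes should track the difference in their expected losses and expenses.

```python
# Toy figures (invented): does the price difference between two rating
# classes fairly reflect the difference in expected losses and expenses?
class_a = {"premium": 1200.0, "expected_loss": 800.0, "expenses": 200.0}
class_b = {"premium": 1800.0, "expected_loss": 900.0, "expenses": 225.0}

def cost_ratio(c):
    # Share of each premium dollar that covers expected losses and expenses
    return (c["expected_loss"] + c["expenses"]) / c["premium"]

print(f"Class A cost ratio: {cost_ratio(class_a):.2f}")  # 0.83
print(f"Class B cost ratio: {cost_ratio(class_b):.2f}")  # 0.62

# Class B pays 50% more premium for only 12.5% more expected cost. When the
# ratios diverge this much, the price difference does not fairly reflect
# cost differences, the hallmark of an unfairly discriminatory rate.
```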
The Big Data Working Group will consider whether additional consumer protections are warranted, but the key takeaway is that existing law provides regulators with the basic tools they need to address insurers whose data or algorithms run amok.
- What does the future hold?
Regulators’ questions and concerns about big data and algorithms are not going away. Sooner or later, the NAIC will come up with a way to protect consumers without overly stifling innovation. And, if the regulators are slow to act, there’s a good chance that the plaintiffs’ lawyers will make some noise of their own.
It’s possible that states will enact data privacy legislation that, while not specifically aimed at insurers, nevertheless could significantly impact their ability to collect and use consumer data. The California Consumer Privacy Act of 2018 (signed into law on June 28) is a good example, as many are calling it the strictest online privacy law in the country. Depending on how the mid-term elections shake out, it’s even possible that Congress could take up legislation (perhaps inspired by Europe’s GDPR or the latest Facebook revelation) that could be broad enough to impact insurers.
- What should insurers do?
First, know what the law requires and keep up with developments at the NAIC and elsewhere.
Second, take a hard look at your company’s use of data and algorithms. Deficiencies in data and algorithms present regulatory, litigation and reputational risk, but can be difficult to detect. We think there’s enough risk here that insurers should consider independent testing and validation of their data and algorithms to identify any problems before they come home to roost.
Here are some of the questions that insurers should consider asking themselves:
- Are we permitted to use the information that we're collecting?
- Is our data accurate, complete, up-to-date and free of embedded bias?
- Does our algorithm produce reliably accurate results?
- Do the results make sense? Can we explain them in a way that regulators and consumers will understand?
- Are we monitoring the performance of our algorithm to make sure that it continues to operate as intended? (A monitoring sketch follows this list.)
- Does our algorithm comply with the law? In particular, does it include proxies for race or other factors that cannot lawfully be taken into account?
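On the monitoring question above, here's a minimal sketch of one way to do it: compare predicted and actual losses by period and flag material divergence. The file, the column names and the 10% threshold are hypothetical placeholders, not a regulatory benchmark.

```python
# Hypothetical monitoring job: compare predicted vs. actual losses by
# quarter and flag drift. File, columns and threshold are placeholders.
import pandas as pd

claims = pd.read_csv("scored_claims.csv")  # one row per closed policy period

by_quarter = claims.groupby("quarter").agg(
    predicted=("predicted_loss", "sum"),
    actual=("actual_loss", "sum"),
)
by_quarter["error_pct"] = (
    (by_quarter["actual"] - by_quarter["predicted"]).abs() / by_quarter["actual"]
)

# Flag quarters where aggregate error exceeds 10% (illustrative threshold)
drifting = by_quarter[by_quarter["error_pct"] > 0.10]
if not drifting.empty:
    print("Model may no longer be operating as intended in:", list(drifting.index))
```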
That's probably enough for now, but stay tuned. There's surely more to come.