The rapid pace of technological advancement and the complexity of AI systems make it difficult for regulators to ensure that fair and ethical practices are followed. In establishing a control environment, governmental bodies and regulatory authorities must contend with disparate and frequently conflicting approaches. As a result, insurance organizations struggle to navigate these discrepancies, which creates significant uncertainty.
On July 5, 2022, the European Parliament gave its approval to two laws: the Digital Markets Act and the Digital Services Act. The Digital Markets Act focuses on anti-competitive behavior and entered into force on November 1, 2022. The regulation aims to establish explicit guidelines for major platforms, outlining prohibited and required actions to prevent them from imposing unjust terms on both businesses and consumers. Examples of these practices include prioritizing the gatekeeper's own services and products over comparable ones from external parties on the platform and denying users the option to remove preinstalled software. By comparison, the Digital Services Act addresses content considered illegal in Europe. It aims to create a secure and safe online environment by implementing a framework that restricts the dissemination of illicit content on the Internet. Both laws carry significant weight: in extreme cases, they can impose fines on noncompliant companies of up to 20% of their annual worldwide revenue. Together, the Digital Markets Act and the Digital Services Act represent the most extensive efforts in the Western world to regulate technology companies. The approval of these sweeping digital regulations by European Union lawmakers sets the stage for potential confrontations between regulators and major tech companies over the implementation of the rules, and it furthers the European Union's ambition to assume a leading role in global technology regulation.
Although global AI adoption continues to expand steadily, there is currently no comprehensive federal AI legislation in the United States. Instead, the country relies on a patchwork of existing and proposed AI regulatory frameworks. On October 30, 2023, President Biden issued an Executive Order "to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence." The Executive Order introduces new guidelines aimed at safeguarding the privacy of American citizens, advancing fairness and equal rights, championing the interests of consumers and workers, fostering innovation and competition, and promoting American leadership on the global stage.
In the absence of federal action, the 2023 legislative session saw a notable increase in the number of AI bills introduced in the states. Numerous states proposed task forces to examine the impacts of AI, while others raised concerns about AI's influence on areas such as healthcare, insurance, and employment. Notable active legislation includes Connecticut's AI Bill S 1103 and Colorado's Senate Bill 21-169, which resulted in the enactment of Colorado Insurance Regulation 10-1-1. Connecticut's bill provides for government oversight of the responsible use of AI. Beginning on February 1, 2024, the bill requires the Connecticut Department of Administrative Services to conduct an inventory of all artificial intelligence systems used by state agencies and to carry out regular assessments of those systems to ensure compliance with anti-discrimination laws and to prevent any disproportionate impact. The bill also directs the Office of Policy and Management to establish comprehensive policies and procedures governing the entire life cycle of AI systems within state agencies, including development, procurement, implementation, utilization, and ongoing assessment.

Colorado's Senate Bill 21-169, in turn, resulted in a regulation on life insurers' use of consumer data, algorithms, and machine learning. The regulation, effective November 14, 2023, aims to address bias in the use of consumer data in algorithmic models and machine learning processes. It governs the use of algorithms and data across "insurance practices," which include "marketing, underwriting, pricing, utilization management, reimbursement methodologies, and claims management in the transaction of insurance."
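To make concrete the kind of bias testing such a regulation contemplates, the following sketch computes a simple adverse impact ratio comparing favorable underwriting outcomes across two groups. This is a hypothetical illustration only: the data, group labels, and the 0.8 review threshold are assumptions for demonstration, not requirements drawn from Colorado Insurance Regulation 10-1-1 or any insurer's actual testing methodology.

```python
from collections import defaultdict

# Hypothetical underwriting decisions: (group label, favorable outcome).
# Groups and outcomes are invented for illustration.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def favorable_rates(records):
    """Return the share of favorable outcomes for each group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {group: favorable[group] / totals[group] for group in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group rate to the highest; values well below 1.0 suggest disparity."""
    return min(rates.values()) / max(rates.values())

rates = favorable_rates(decisions)
ratio = adverse_impact_ratio(rates)
print(rates)
print(f"Adverse impact ratio: {ratio:.2f}")

# The 0.8 cutoff below is a common rule of thumb, not a regulatory requirement.
if ratio < 0.8:
    print("Potential disparity detected; further review of the model may be warranted.")
```

An insurer's actual testing program would cover far more outcomes, groups, and statistical tests, but the underlying idea is the same: compare model-driven outcomes across groups and flag material disparities for review.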
The National Association of Insurance Commissioners (NAIC) has also acted on AI. To keep pace with technological advancements in the industry, the NAIC created the Innovation, Cybersecurity, and Technology (H) Committee. More recently, like the states, the H Committee has turned its focus to regulating AI. In an effort to address AI in a uniform manner, the committee drafted a model bulletin on AI, algorithms, and AI systems. Adopted on December 4, 2023, the bulletin offers guidance to state insurance departments on regulating insurers' use of AI.
Regulatory bodies are collaborating with industry experts and stakeholders to develop best practices and standards for AI adoption in insurance. In particular, regulators are focusing on insurers' ability to explain their algorithms and to interpret those algorithms' outputs. Insurers must be able to explain how AI algorithms arrive at their decisions, especially where those decisions significantly affect policyholders. This ensures that customers have a clear understanding of how their insurance premiums are calculated and can identify any potential disparities or biases in the process. Striking the right balance between encouraging innovation and safeguarding consumer interests is paramount. To do so, regulators should collaborate with industry players, technology experts, and legal professionals to establish comprehensive regulations and guidelines that protect consumers while fostering AI-driven advancements in the insurance industry.
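To illustrate what such an explanation might look like in practice, the sketch below decomposes a deliberately simple additive premium model into per-factor dollar contributions, so that a given quote can be traced back to its inputs. The base premium, rating factors, and weights are invented for illustration and do not represent any insurer's actual pricing model; real models are far more complex and typically require dedicated explainability techniques.

```python
# A deliberately simple additive premium model, used only to illustrate
# per-factor explanations of a quoted premium.
BASE_PREMIUM = 500.0
WEIGHTS = {  # hypothetical rating factors and their per-unit dollar impact
    "vehicle_age_years": -12.0,
    "annual_mileage_thousands": 8.5,
    "prior_claims": 150.0,
}

def quote_with_explanation(applicant: dict) -> tuple[float, dict]:
    """Return the premium and the dollar contribution of each rating factor."""
    contributions = {
        factor: WEIGHTS[factor] * applicant.get(factor, 0.0)
        for factor in WEIGHTS
    }
    premium = BASE_PREMIUM + sum(contributions.values())
    return premium, contributions

applicant = {"vehicle_age_years": 6, "annual_mileage_thousands": 12, "prior_claims": 1}
premium, explanation = quote_with_explanation(applicant)

print(f"Quoted premium: ${premium:.2f}")
for factor, dollars in explanation.items():
    print(f"  {factor}: {dollars:+.2f}")
```

Whatever the underlying model, the goal regulators describe is the same: a policyholder should be able to see which factors drove a premium and by how much, which in turn makes disparities easier to spot.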
In conclusion, current regulations and guidelines for AI in the insurance industry primarily revolve around data protection, transparency, fairness, and accountability. Insurers must stay proactive and responsive to the evolving technology and its impact on the industry. Understanding the intricacies of AI, addressing biases, safeguarding data privacy and security, and fostering collaboration are key considerations for the insurance sector and regulators alike. A collaborative approach will enable regulatory bodies to keep pace with emerging AI trends and assist in the design of effective regulations. By striking the right balance between innovation and regulation, insurers and regulators can harness the potential of AI to improve products and the experience they deliver to consumers.