Chicago Locke Lord lawyers Paige Waters and Stephanie O’Neill Macro co-authored an article featured as an “Expert Analysis” in Law360. The article examines the National Association of Insurance Commissioners’ (NAIC) Exposure Draft of the Model Bulletin on the Use of Algorithms, Predictive Models and Artificial Intelligence Systems by Insurers, and details the bulletin’s four basic areas of best practices and guidelines.
“It is important to note that the draft bulletin is not a model law or regulation, but is intended as a template communication that insurance regulators can use to guide insurers to employ AI consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws,” said Waters and O’Neill Macro. “The draft bulletin is intended to create a balance between encouraging innovations and protecting the insurance-buying public from potential harm associated with the use of AI, for example, unlawful bias or discriminatory practices.”
Read the full article in Law360 or view it below.
Technology companies are rapidly developing artificial intelligence, and the insurance industry is embracing these new innovations. Insurers are deploying AI in various functional areas, including customer service, underwriting and pricing, claims adjudication, and fraud detection.
AI supports these functions by increasing the insurer's responsiveness and improving the accuracy of outputs. Consequently, insurers hope to increase profitability and enhance the customer experience by incorporating AI into their operations.
As participants in a heavily regulated industry, insurers are responsible for compliance with, among other things, state insurance laws. As discussed further below, insurance regulators are diligently working to understand how AI is being deployed by insurers so they can create a regulatory framework to monitor and police the industry's use of AI.
On July 17, the National Association of Insurance Commissioners released the Exposure Draft of the Model Bulletin on the Use of Algorithms, Predictive Models, and Artificial Intelligence Systems by Insurers.
It is important to note that the draft bulletin is not a model law or regulation, but is intended as a template communication that insurance regulators can use to guide insurers to employ AI consistent with existing market conduct, corporate governance, and unfair and deceptive trade practice laws. The draft bulletin is intended to create a balance between encouraging innovations and protecting the insurance-buying public from potential harm associated with the use of AI, for example, unlawful bias or discriminatory practices.
Creation of the NAIC H Committee
The NAIC H Committee is a relatively new committee.
The NAIC formed the Innovation, Cybersecurity and Technology (H) Committee (formerly Innovation and Technology (EX) Task Force) to explore the technological developments in the insurance sector[.] In 2019, the Task Force established the Big Data and Artificial Intelligence (H) Working Group to study the development of artificial intelligence, its use in the insurance sector, and its impact on consumer protection and privacy, marketplace dynamics, and the state-based insurance regulatory framework[.] The Working Group developed regulatory principles on artificial intelligence that were adopted by the full NAIC membership at the 2020 Summer National Meeting. Beginning in 2021, the Working Group began surveying insurers by line of business to learn how AI and machine learning techniques are currently being used and what governance and risk management controls are in place. [1]
Maryland Insurance Commissioner Kathleen Birrane currently chairs the H Committee, with Michael Conway, the Colorado insurance commissioner, and Doug Ommen, the Iowa insurance commissioner, serving as co-chairs. While these three states led the effort to develop a regulatory framework for AI in the insurance industry, which culminated in the draft bulletin, the effort was enthusiastically supported by all the states on the H Committee: the District of Columbia, Georgia, Hawaii, Illinois, Missouri, Montana, New York, North Dakota, Ohio, Tennessee, Vermont and Washington.
In an effort to understand how the insurance industry is using AI, the H Working Group conducted three different AI surveys: (1) private passenger auto; (2) homeowners; and (3) life insurance. The first two surveys are complete, and the life insurance survey results are being analyzed.
Draft Bulletin
The draft bulletin sets forth the insurance regulators' expectations of insurers using artificial intelligence systems, and encourages insurers to implement and maintain a written program based on the insurer's assessment of the risks posed by its use of artificial intelligence systems. The draft bulletin is designed to ensure that decisions resulting from insurers' use of artificial intelligence systems are accurate and comply with applicable law, including unfair trade practice laws.
It encompasses four basic areas of best practices and guidelines.
In doing so, the draft bulletin defines key terms, including artificial intelligence and artificial intelligence systems.
The draft bulletin defines "artificial intelligence systems" as "an umbrella term describing artificial intelligence and big data resources utilized by insurers." [2] Additionally, it provides guidelines for insurers regarding documentation of their AI system governance, risk management and use protocols.
The draft bulletin details the type of information that insurers should document, including policies, procedures, training materials and other information relating to all aspects of the artificial intelligence systems program. The draft bulletin addresses AI implementation from soup to nuts, including the insurer's objectives, monitoring and auditing procedures, and overall oversight of the artificial intelligence systems program.
Under the draft bulletin, artificial intelligence systems include the insurer's algorithms and predictive models. Insurers are encouraged to document the types of controls implemented and how management is structured to ensure compliance and enhance governance.
The draft bulletin also indicates that insurers should document their management and oversight of third-party artificial intelligence systems, including the due diligence conducted in determining to utilize third-party artificial intelligence systems. The draft bulletin addresses monitoring and auditing procedures employed by the insurer to ensure compliance of third-party artificial intelligence systems with contractual and regulatory obligations.
The draft bulletin contemplates that insurance regulators have authority to examine insurers' use of artificial intelligence systems and specifically states that:
an Insurer can expect to be asked about its governance framework, risk management, and internal controls (including the considerations identified in Section 3 [Risk Management and Internal Controls]), as well as questions regarding any specific model, AI System, or its application, including requests for the following kinds of information and/or documentation.
Pursuant to the draft bulletin, insurers also should be prepared to respond to regulator inquiries regarding their use of third-party developed artificial intelligence systems. Insurers are responsible for the third-party artificial intelligence systems they are using and are encouraged to monitor and audit the artificial intelligence systems provided by third parties. The draft bulletin provides guidance for provisions that should be included in third-party contracts.
The draft bulletin is comprehensive and intended to be risk-based rather than prescriptive, acknowledging that insurers will be able to demonstrate their compliance with applicable laws through various means. During the comment period, insurance regulators likely will hear from the industry regarding whether the draft bulletin strikes an appropriate balance between encouraging innovation and preventing unlawful uses of AI by establishing guardrails and standards.
Commenters may raise a variety of issues regarding the scope and application of the draft bulletin.
Comment Period for Draft Bulletin
Written comments will be accepted by the NAIC through Sept. 5. In addition, the H Committee will hear comments from in-person attendees during its meeting on Aug. 13 at the NAIC's summer national meeting. Persons who wish to comment in person are asked to notify the NAIC by Aug. 9 of their desire to speak and their affiliation so that time can be allocated properly among speakers. [3]
Paige D. Waters is a partner and Stephanie O'Neill Macro is of counsel at Locke Lord LLP.
The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.
[1] https://content.naic.org/cipr-topics/artificial-intelligence.
[2] Draft Bulletin, Section 2.
[3] https://content.naic.org/cmte_h.htm.
Reproduced with permission. © 2023, Portfolio Media, Inc.