State regulators tackle insurers’ use of AI: 11 states adopt NAIC model bulletin | McDermott Will & Emery

In December 2023, the National Association of Insurance Commissioners (NAIC) adopted a model bulletin on the use of artificial intelligence (AI) systems by insurers. The model bulletin reminds insurers that they must comply with all applicable insurance laws and regulations (e.g., the prohibition of unfair trade practices) when making decisions that affect consumers, including when those decisions are made or supported by advanced technologies such as AI systems. The model bulletin was issued by the NAIC’s Innovation, Cybersecurity, and Technology Committee, which includes representatives of insurance regulators from 15 states and is tasked with examining how technological developments could affect consumer protection and insurance supervision. Although not a model law or regulation, the model bulletin is intended to guide state insurance regulators and promote uniformity in state regulatory frameworks. The NAIC model applies only to the extent that a state adopts the bulletin.



To date, nearly a dozen states have adopted the model bulletin with minor modifications, and many other states are likely to adopt similar standards regarding the use of AI in health insurance.

The state bulletins are based on key principles outlined in the NAIC model, including:

Insurers must maintain a written program for using AI systems.

The model bulletin requires insurers to implement and maintain a written program to ensure AI systems are used responsibly. This includes mitigating negative consumer outcomes and using verification and testing methods to identify potential biases in the use of AI systems and other predictive models. Responsibility for overseeing the program should rest with senior management or a committee accountable to the insurer’s board.

The governance framework must be based on transparency, fairness and accountability.

Insurers are expected to implement a governance framework that oversees the AI system, including policies and procedures, risk management and internal controls, documentation requirements and similar oversight methods to be used at each stage of the AI system’s life cycle. The governance structure should include committees drawn from the appropriate disciplines (e.g., actuarial, data science, underwriting, legal); clearly defined lines of authority; monitoring, auditing and reporting protocols; and ongoing training and supervision requirements for staff.

Risk management and internal controls must be documented.

An insurer’s AI program must document the insurer’s risk management and internal control framework, including how AI systems are approved and deployed. This extends to the management and oversight of predictive models, including the expectation that insurers maintain descriptions of such models, document their use, and conduct regular audits and reviews of the tools. Insurers must validate, test and retest the output of AI systems as necessary. Insurers are also expected to protect non-public information (i.e., consumer information) against unauthorized access.

Insurers are responsible for managing third-party providers.

The model bulletin requires insurers to develop clear processes for using or acquiring AI-related systems developed by third parties. This includes protocols for assessing a third-party system to ensure that decisions made or supported by it meet the regulatory standards imposed on the insurer. The model bulletin also encourages insurers to include terms in their contracts with third parties that grant audit rights and require the third party to cooperate with the insurer in investigations by government agencies.

Regulators can ask questions about an insurer’s use and development of AI.

The model bulletin indicates that during a state investigation or market conduct action, an insurer can be asked about the development and use of AI, including the insurer’s governance framework, risk management and internal controls. This could include a request for policies, procedures, training materials and other information regarding the implementation, monitoring and supervision of AI systems by the insurer.


While nearly a dozen states have adopted the NAIC model bulletin, other states are taking different approaches to regulate the use of AI in insurance. For example:

  • In 2022, California’s insurance commissioner issued a bulletin noting that he was aware of, and investigating, instances of potential bias and alleged unfair discrimination across insurance lines resulting from the use of technology and data. The bulletin expressed concern about the irresponsible use of “big data” and encouraged insurers to review their practices.
  • Colorado has established governance and risk management framework requirements for life insurers regarding the use of third-party consumer data, algorithms, predictive models and similar systems. The regulation aims to prevent unfair discrimination and includes documentation, management and reporting requirements. Colorado also recently adopted the nation’s first comprehensive framework governing the development and use of AI, which takes effect on February 1, 2026. Importantly, the law includes an exemption for insurers, fraternal benefit societies, and developers of AI systems used by insurers subject to CRS § 10-3-1104.9 and the rules adopted by Colorado Insurance Commissioner Michael Conway.
  • In 2019, New York issued a circular letter addressing the use of third-party consumer data in life insurance underwriting. More recently, in 2024, New York proposed a circular letter focused on underwriting and pricing and the risk of adverse impacts from the use of AI. The proposed 2024 circular states that while AI systems can benefit both insurers and consumers by simplifying pricing and underwriting processes, the systems can also reinforce inequities. The letter outlines the state’s expectation that insurers establish sound governance and risk management frameworks to limit potential harm.
  • In 2020, Texas issued a bulletin reminding regulated entities – including their agents and representatives – that they are responsible for the accuracy of data used in claims processing, underwriting and rating practices.


As AI systems are increasingly used in all facets of the insurance industry – from product development to sales, pricing, claims management and more – AI regulation in the health insurance industry is likely to grow. Health insurers should take these developments into account, along with the varied approaches states may take when monitoring and investigating insurers’ use of AI. While many states have adopted the NAIC model, others are implementing state-specific requirements, which could result in a patchwork of AI standards. Health insurers operating in multiple states must stay aware of the different standards in place and maintain compliance as these expectations evolve.
