Legal Focus: New cyber rules concerning US insurers
Committee has been evaluating whether third-party AI service providers can be regulated through the insurers they do business with or need to be licensed separately by insurance departments
New NAIC model bulletin ‘encourages’ insurers to adopt a written artificial intelligence systems programme
At a recent meeting of the National Association of Insurance Commissioners (NAIC), the newly unveiled exposure draft of the proposed model bulletin “Use of Algorithms, Predictive Models and Artificial Intelligence Systems [AIS] by Insurers” was the primary topic of conversation for the NAIC’s innovation, cyber security and technology committee.
The current draft of the bulletin is principles-based rather than prescriptive, and the committee made clear that it is not yet looking at the adoption of a model rule.
The draft bulletin “encourages” insurers to adopt a written AIS programme. As part of an AIS programme, insurers are asked to address their standards for the acquisition of, use of or reliance on AI systems developed or deployed by third parties.
Insurers should, ideally, include terms in contracts with third parties that “require third-party data and model vendors and AI system developers to have and maintain an AIS programme commensurate with the standards expected of the insurer; entitle the insurer to audit the third-party vendor for compliance; entitle the insurer to receive audit reports by qualified auditing entities confirming the third party’s compliance with standards; and require the third party to co-operate with regulatory inquiries and investigation related to the insurer’s use of the third party’s product or services and require the third party to co-operate with the insurer’s regulators as part of the investigation or examination of the insurer”.
Further, the draft bulletin provides that “investigations and examinations of an insurer may include requests for the following kinds of information and documentation related to data, models and AI systems developed by third parties that are relied on or used by or on behalf of an insurer, directly or by an agent or representative: due diligence conducted on third parties and their data, model or AI systems; contracts with third-party AI system, model or data vendors, including terms relating to representations, warranties, data security and privacy, data sourcing, data use, intellectual property rights, confidentiality and disclosure, and co-operation with regulators; and audits and confirmation processes performed with respect to third-party compliance with contractual and, where applicable, regulatory obligations”.
The wording of the above sections concerning third-party vendors received much attention during the committee meeting. Many industry groups, especially those representing small- and medium-sized insurers, were concerned that such insurers have little leverage when contracting with third parties to negotiate contract language that includes the terms specified by the NAIC draft bulletin.
The committee seemed particularly interested in hearing from insurers about how much control they have, when purchasing products, to include language requiring co-ordination with insurance departments that may have questions for third-party vendors.
The committee signalled that one question it has been evaluating is whether third-party AI service providers can be regulated through the insurers they do business with or need to be licensed separately by insurance departments. This last point was not elaborated on, but it is worth paying attention to.
With the rise of recent cases such as Kisting-Leung and others v Cigna Corp and others, which alleges violations of, inter alia, certain insurance regulations, including California Insurance Code § 790.03(h) (Unfair Practices) and California Code of Regulations title 10, § 2695.7(b)(1), (d) and (e) (Standards for Prompt, Fair and Equitable Settlements), regulators are starting to take an even closer look at both AI solutions developed and used in-house by insurers and those developed by third-party AI service providers.
Heidi Lawson is a partner, and Sarah Hopkins and Faye Wang are associates at Fenwick & West