Governing AI Use in Healthcare Claims Administration

August 13, 2024

By: Daniel Rubio

Calculators, or “adding machines” as they were once called, represented revolutionary technology.[1] For a given set of inputs, a definite output was provided.[2] The simple concept was a complex invention. Today, calculators are ubiquitous—every smartphone has one. No one questions the effectiveness of the technology.

Artificial intelligence (“AI”) software represents, in many ways, the modern reinvention of the adding machine. Sure, the set of inputs is a touch more complex than mathematical equations. And yes, AI does not always provide a mathematically confirmable response; verifying whether AI produced a correct result remains a tremendous source of uncertainty. Despite this reality, the AI industry continues to develop and to permeate other industries.

The emergence of AI within the healthcare industry is no surprise. AI is the new frontier. The great unknown is the scope of AI’s applications and limitations.

What exactly is AI? Why define it? And what regulatory implications does AI create?

The Various Definitions of AI

Of course, AI must be defined because the regulation of AI is impossible without doing so. Legislators are searching for an appropriately definite, yet inclusive, definition for AI. To that end, three sources of AI definitions provide some guidance.

The U.S. Department of Health and Human Services broadly defines AI as technology that “enables computer systems to perform tasks normally requiring human intelligence.”[3] Tech giant IBM defines AI as “technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.”[4] Still, the National Association of Insurance Commissioners’ (“NAIC”) Model Bulletin defines AI more mechanically as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI Systems are designed to operate with varying levels of autonomy.”[5]

All three definitions, though similar, vary in scope, and they lead to the same series of questions that lawmakers want answered: can AI be trusted? If so, what is the best regulatory framework to promote the responsible implementation and use of AI?

Notice and Discrimination Are the Focus of Regulations

Currently, legislators throughout the country have focused on introducing regulations with two consistent objectives: (1) limit algorithmic discrimination and (2) mandate disclosure of AI use to consumers.[6] These two objectives provide a sound foundation for a proper regulatory framework, but they are only the very tip of the iceberg. It appears that legislators are tiptoeing around more robust AI regulations with a seemingly more important objective in mind: allowing the proliferation of AI in order to encourage the potentially immense innovations and efficiencies it might produce.

As is often the case when new technologies emerge, while legislators slow-play the creation of regulatory frameworks for AI, courts throughout the country are adjudicating hotly contested legal disputes relating to the use of AI, particularly in the healthcare industry.

Healthcare Litigation Regarding AI

In a recently filed lawsuit in Minnesota, plaintiffs allege that health insurance carriers implemented AI algorithms to systematically deny health insurance claims for purportedly medically necessary treatment.[7] The plaintiffs claim the AI system used by the insurance carrier has a known error rate of approximately 90%.[8]

According to the Minnesota plaintiffs’ allegations, their health insurance claims were denied because the carriers relied on the problem-solving capabilities of the AI technology rather than on the treating physicians’ opinions that a treatment was medically necessary.

In response, a spokesperson for the AI technology developer denied the allegations. Specifically, the representative advised that the AI was simply a tool that carriers use not to make coverage determinations, but rather as “a guide to help [insurance companies] inform providers … about what sort of assistance and care the patient may need.”[9]

Similarly, a lawsuit in the Eastern District of California alleges that an insurance company’s doctors “instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills.” ProPublica discussed this California lawsuit and raised concerns, citing the insurance carrier’s alleged use of artificial intelligence to deny more than 300,000 claims in a two-month span, allegedly spending “just 1.2 seconds ‘reviewing’ each request.”[10]

Federal Governance Regarding AI

The National AI Initiative Act of 2020 outlined the United States’ national strategy for the development and implementation of AI. The Centers for Medicare and Medicaid Services (“CMS”) subsequently created the CMS AI Playbook, which discusses a governance framework defining the principles and practices an organization follows to address the societal, ethical, and legal impacts of AI.[11]

State Regulatory Guidance for Insurers’ Use of AI

In December 2023, the NAIC developed a general AI regulatory framework titled the “NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers.” The Model Bulletin provides guidance to insurers seeking to implement AI systems in their claims processes. Eleven states have adopted the NAIC’s Model Bulletin, and others are currently developing their own AI governance frameworks. With CMS, the White House, and the NAIC promoting additional AI governance and development, it is likely that more states will contribute to the regulatory process surrounding the responsible adoption of AI.

On May 17, 2024, Colorado enacted a law (Colorado Senate Bill 24-205) requiring the use of reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination.[12] The Colorado law focuses on regulating “high-risk artificial intelligence” systems.[13] The law defines a “high-risk artificial intelligence system” as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.” A consequential decision, in turn, is any decision that “has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: . . . healthcare services.”[14]

Overall Regulatory Outlook

Hospital systems, healthcare providers, and health insurance companies will continue to monitor the action in Minnesota, specifically as it pertains to payor-provider disputes relating to the use of AI. It is important that healthcare systems equip themselves with legal experts who can help them navigate these evolving issues. Here at Wolfe Pincavage, our multidisciplinary team uses its legal, clinical, and revenue cycle expertise to keep clients at the forefront of these complex claims, payor disputes, and related litigation. We are actively monitoring the status of the litigation and will keep our clients apprised of any relevant outcomes.



[1] https://edtechmagazine.com/k12/article/2012/11/calculating-firsts-visual-history-calculators

[2] Id.

[3] https://ai.cms.gov

[4] https://www.ibm.com/topics/artificial-intelligence

[5] See NIST AI RMF 1.0; NAIC Model Bulletin

[6] https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation

[7] https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/

[8] Id.

[9] Id.

[10] See https://www.propublica.org/article/cigna-health-insurance-denials-pxdx-congress-investigation#:~:text=The%20letter%20follows%20an%20investigation,PXDX%20system%2C%20spending%20an%20average; see also https://www.cbsnews.com/news/unitedhealth-lawsuit-ai-deny-claims-medicare-advantage-health-insurance-denials/

[11] https://ai.cms.gov

[12] https://leg.colorado.gov/bills/sb24-205#:~:text=On%20and%20after%20February%201,in%20the%20high%2Drisk%20system.

[13] See Senate Bill 24-205, 6-1-1701(9)(a),  available at https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf

[14] Id. at 6-1-1701(3)(e).