What Hospitals Can Learn from the UnitedHealth AI Lawsuit – A Legal and Compliance Framework for Responsible AI Governance
A federal court in Minnesota is forcing UnitedHealth Group to open its internal files on one of the most consequential questions in modern medicine: when an artificial intelligence algorithm and a treating physician disagree about what a patient needs, who wins?
The lawsuit — Estate of Lokken v. UnitedHealth Group — alleges the answer at UnitedHealth was the algorithm. Every time.
That answer is now the subject of a sprawling discovery order requiring production of tens of thousands of documents covering nearly a decade of internal operations, AI governance records, cost-saving analyses tied to the NaviHealth acquisition, and performance metrics for the clinical staff who were, plaintiffs allege, required to follow the algorithm’s discharge targets under threat of termination.
The internal documents are the existential threat here — not the trial. What is produced in discovery over the coming months may reshape how every healthcare organization in the country thinks about where AI belongs in clinical and coverage decision-making. Healthcare providers, clinicians, and health systems should be paying very close attention — because the compliance expectations and litigation theories being tested in Minneapolis are directly relevant to their operations.
What the Algorithm Could Not See
To understand why this case matters, it helps to understand what was actually happening to patients.
The nH Predict algorithm, built on a database of 6 million patient records, generated a predicted length of stay for elderly Medicare Advantage members receiving post-acute rehabilitation care. According to the lawsuit, rather than using that prediction as one input among many, UnitedHealth set internal targets requiring clinical staff to discharge patients the moment the algorithm said they no longer needed care, regardless of what the treating physician determined was medically necessary.
The algorithm could see patterns across a population. It could not see that this particular patient was recovering more slowly because of a comorbidity the population data didn’t weight appropriately. It could not see that the patient’s home environment made early discharge dangerous. It could not see what the physician saw when she examined the patient that morning, or what the family reported about function the day before. It could not see trajectory — where this patient had been and where this patient was actually going.
Plaintiffs allege that UnitedHealth was aware the algorithm’s outputs were reversed on appeal more than 90% of the time when patients challenged them. They further allege — based on figures cited in the complaint — that UnitedHealth also knew that fewer than 0.2% of patients actually appealed, because patients in post-acute rehabilitation are, by definition, among the most vulnerable: elderly, often cognitively impaired, physically declining, and without the resources or knowledge to navigate a multi-level federal appeals process while simultaneously fighting for their lives.
The scheme the plaintiffs allege is straightforward: deny claims at scale, retain Medicare Advantage premiums for care not delivered, and rely on patients being too sick or too exhausted to appeal. If those allegations are proven, this is not a technology failure. It is a decision to weaponize a technology against the population it was ostensibly designed to serve.
The Regulatory Floor Has Already Been Set
The healthcare industry has treated AI governance largely as an internal ethics question. It is not. It is a compliance question, and the regulatory floor is already in place.
On February 6, 2024, the Centers for Medicare and Medicaid Services issued authoritative subregulatory guidance — in the form of FAQs interpreting the 2024 Medicare Advantage Final Rule — that drew a clear and enforceable line. CMS stated explicitly that an algorithm determining coverage based on a population data set, rather than the individual patient’s medical history, the physician’s recommendations, or clinical notes, is not compliant with 42 C.F.R. § 422.101(c). CMS went further: an AI tool predicting length of stay in post-acute care cannot be the sole grounds for terminating services. Period.
This is not aspirational language. It is the operative standard against which Medicare Advantage organizations will be audited and against which courts are beginning to evaluate AI-driven coverage decisions. Provider organizations contracting with MA plans should be tracking it closely.
CMS also addressed the risk that AI tools, as they are trained and updated over time, may quietly alter the coverage criteria they apply — deviating from the publicly available standards the organization has represented it follows, without anyone making a conscious decision to change coverage policy. The guidance is explicit: any AI or algorithm used in coverage decisions may not incorporate criteria beyond what is publicly accessible and otherwise permissible under federal standards. If your AI tool is making coverage decisions that you cannot fully explain and document against public criteria, you are already out of compliance.
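What does auditing for that kind of quiet criteria drift look like in practice? The sketch below is one minimal approach, in Python, assuming a hypothetical export from the utilization-management system in which each denial records the identifiers of the coverage criteria it relied on. Every name in it is illustrative, not any vendor's actual API.

```python
"""Minimal sketch of a criteria-drift audit, assuming a hypothetical
denial log where each denial records the criteria IDs it applied.
All identifiers and field names here are invented for illustration."""

from collections import Counter

# Criteria the organization has published and may permissibly apply,
# keyed by internal identifier (hypothetical values).
PUBLIC_CRITERIA = {"MCG-PAC-01", "CMS-NCD-220.6", "INTERNAL-SNF-LOS-14"}

# Hypothetical export: one record per denial, listing every criterion
# the tool applied in reaching the decision.
denials = [
    {"claim_id": "A100", "criteria": ["MCG-PAC-01"]},
    {"claim_id": "A101", "criteria": ["MODEL-FEATURE-housing_score"]},  # not public
    {"claim_id": "A102", "criteria": ["CMS-NCD-220.6", "MODEL-LOS-P25"]},
]

undocumented = Counter()
for d in denials:
    for c in d["criteria"]:
        if c not in PUBLIC_CRITERIA:
            undocumented[c] += 1

# Any hit here is a red flag under the publicly-accessible-criteria rule:
# the tool is applying criteria outside the organization's public policy.
for criterion, n in undocumented.most_common():
    print(f"UNDOCUMENTED CRITERION {criterion}: applied in {n} denial(s)")
```

Any hit in that report is a question for compliance counsel before it becomes a question in discovery.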
To understand why algorithmic substitution is particularly dangerous in the post-acute rehabilitation context, it helps to understand what Medicare's coverage standard for inpatient rehabilitation actually requires. Under CMS guidelines, IRF admission is covered only where the patient requires active and ongoing therapeutic intervention across multiple disciplines, can reasonably be expected to participate in and benefit from an intensive rehabilitation program of at least three hours of therapy per day at least five days per week, and requires physician supervision, including face-to-face visits by a rehabilitation physician at least three days per week throughout the stay. Coverage must be supported by a comprehensive preadmission screening conducted within the 48 hours immediately preceding admission and an individualized plan of care developed within the first four days, both grounded in the specific patient's condition and functional status.

According to CMS's Medicare Learning Network, medical necessity failures accounted for 93.8% of improper payments for inpatient rehabilitation hospitals in the 2024 reporting period, with a projected improper payment total of $2 billion across IRF services, a figure that reflects how frequently documentation fails to establish individualized clinical justification. That is the standard the nH Predict algorithm was allegedly replacing with a population-based discharge prediction. An algorithm cannot conduct a preadmission screening. It cannot perform a face-to-face clinical assessment. It cannot develop an individualized plan of care. When AI output substitutes for those requirements, it is not merely a governance failure; it is a failure to deliver the covered service Medicare paid for.
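For illustration only, here is what a first-pass documentation completeness check against those coverage elements might look like in code. The flat record layout is a hypothetical simplification: real audits work from the medical record and the IRF-PAI, and every field name here is invented for the sketch.

```python
"""Minimal sketch of an IRF documentation completeness check against the
coverage elements described above. Record layout is hypothetical."""

from datetime import datetime, timedelta

def irf_documentation_gaps(record: dict) -> list[str]:
    gaps = []
    admit = record["admitted_at"]

    # Preadmission screening must fall within the 48 hours preceding admission.
    screen = record.get("preadmission_screening_at")
    if screen is None or not (admit - timedelta(hours=48) <= screen <= admit):
        gaps.append("preadmission screening missing or outside 48-hour window")

    # Individualized plan of care within the first four days.
    poc = record.get("plan_of_care_at")
    if poc is None or poc > admit + timedelta(days=4):
        gaps.append("individualized plan of care not completed within 4 days")

    # Intensity standard: at least 3 hours of therapy per day, 5 days per week.
    days_meeting = sum(1 for h in record.get("therapy_hours_by_day", []) if h >= 3)
    if days_meeting < 5:
        gaps.append("therapy intensity below 3 hrs/day x 5 days/week")

    # Rehabilitation physician face-to-face visits at least 3 days per week.
    if record.get("physician_visits_this_week", 0) < 3:
        gaps.append("fewer than 3 face-to-face physician visits this week")

    return gaps

# Example with hypothetical timestamps; an empty list means no gaps found.
rec = {
    "admitted_at": datetime(2026, 3, 2, 9, 0),
    "preadmission_screening_at": datetime(2026, 3, 1, 15, 0),
    "plan_of_care_at": datetime(2026, 3, 5, 10, 0),
    "therapy_hours_by_day": [3.0, 3.5, 3.0, 3.0, 3.0],
    "physician_visits_this_week": 3,
}
print(irf_documentation_gaps(rec))  # -> []
```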
The Office of Inspector General reinforced this framework in its February 2026 Industry Compliance Program Guidance for Medicare Advantage, its first major update since 1999 and voluntary rather than binding in its own right, which specifically recommends that organizations review denial and appeal trends to ensure their policies do not inappropriately restrict coverage. Taken together, these regulatory signals make clear: the government is watching the data, and the litigation is supplying the roadmap for what to look for.
The Legal Exposure Is Not Theoretical
Healthcare providers sometimes read insurance litigation and conclude it has little to do with them. In this area, that conclusion is wrong.
Provider organizations that integrate AI into clinical workflows, prior authorization support, utilization management, or care coordination face meaningful legal risk if that AI functions as a decision-maker rather than a decision-support tool.
Breach of contract may arise where plan documents, provider agreements, or patient consent forms represent that clinical decisions will be made by qualified medical professionals — and AI substitution makes those representations false.
Negligence may arise where AI outputs replace individualized clinical assessment, and a patient is harmed by a decision the physician would not have made independently. The AI does not absorb the liability. The organization does.
False Claims Act exposure may arise under some fact patterns where AI-driven denials or care limitations are used to retain federal reimbursements — Medicare Advantage capitation payments, for instance — without delivering the care those payments are intended to fund. Where applicable, this is the theory with the sharpest teeth: it carries treble damages and qui tam provisions allowing employees and whistleblowers to bring suit on behalf of the government.
Civil rights exposure is an emerging but serious risk. CMS has explicitly warned that AI algorithms can exacerbate discrimination and bias, and that Medicare Advantage organizations must ensure their AI use does not violate the nondiscrimination provisions of Section 1557 of the Affordable Care Act. HHS's 2024 Section 1557 final rule specifically regulates discrimination through patient care decision-support tools, creating ongoing compliance obligations for covered entities. An AI trained on historical data encodes historical disparities. If it consistently produces different outcomes for patients by race, age, disability status, or geographic location, the organization using it owns that disparity.
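A starting point for monitoring that exposure is simply breaking outcome rates out by protected characteristic. The sketch below assumes a hypothetical decision export with demographic fields attached; a defensible Section 1557 analysis would of course require appropriate statistical controls and legal review, not a raw rate comparison.

```python
"""Minimal sketch of a disparate-outcome check: denial rates broken out
by a protected characteristic. Field names are illustrative."""

from collections import defaultdict

def denial_rates_by_group(decisions: list[dict], attribute: str) -> dict[str, float]:
    totals, denials = defaultdict(int), defaultdict(int)
    for d in decisions:
        group = d.get(attribute, "unknown")
        totals[group] += 1
        denials[group] += d["denied"]  # 1 if denied, 0 otherwise
    return {g: denials[g] / totals[g] for g in totals}

# Example with invented records: a persistent gap across groups is the
# disparity the organization "owns" once it deploys the tool.
sample = [
    {"age_band": "65-74", "denied": 0},
    {"age_band": "85+", "denied": 1},
    {"age_band": "85+", "denied": 1},
]
print(denial_rates_by_group(sample, "age_band"))
```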
What Responsible AI Governance Actually Requires
The question for healthcare organizations is not whether to use AI. AI has genuine and significant value in clinical settings — in synthesizing longitudinal data, flagging risk signals, supporting differential diagnosis, improving care coordination across complex patients, and surfacing patterns in population health that no individual clinician could track manually. These are legitimate and important applications.
The question is whether the organization has built the governance to ensure AI does what it is supposed to do — deepen the clinician’s understanding of the individual patient — rather than what it should never do: make the decision for them.
Responsible AI governance in healthcare rests on five requirements that are no longer optional.
First: The AI’s role must be documented at the point of care, not just in policy. It is insufficient to have a policy that says AI is decision-support if the clinical record does not reflect physician engagement with and independent assessment of that patient. When a physician reviews an AI output and concurs with it, that is a clinical decision. When the AI output and the clinical record are indistinguishable, no physician decision has been documented — and no physician defense is available.
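One way to make that requirement concrete is to define, up front, the structured fields a point-of-care decision record must contain. The sketch below is illustrative only, assuming a system that can store audit fields alongside the chart; all field names are hypothetical.

```python
"""Minimal sketch of a point-of-care decision record. Field names are
hypothetical, not a real EHR schema."""

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIAssistedDecision:
    patient_id: str
    ai_tool: str                 # tool name and model version
    ai_recommendation: str       # what the algorithm suggested
    physician_id: str
    physician_assessment: str    # the clinician's own findings for this patient
    physician_rationale: str     # why the clinician concurred or overrode
    final_decision: str
    decided_at: datetime = field(default_factory=datetime.now)

    def is_defensible(self) -> bool:
        # A record containing nothing beyond the AI output documents
        # no independent physician decision at all.
        return bool(self.physician_assessment.strip()) and \
               bool(self.physician_rationale.strip())
```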
Second: Physician override must be non-punitive, accessible, and tracked. If the data shows that override rates are extremely low — approaching zero — that is not evidence the AI is always right. It is evidence that the culture has suppressed clinical independence. That culture creates both patient harm and legal liability. The UnitedHealth litigation specifically alleges that employees faced termination for departing from the algorithm’s outputs. If that allegation is proven, it eliminates any good-faith argument that the tool was used as a guide.
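Tracking that signal does not require sophisticated tooling. Assuming decisions are exported with both the AI recommendation and the final determination, a monitor can be as simple as the following; the 2% floor and 200-decision minimum are illustrative assumptions, not regulatory thresholds.

```python
"""Minimal sketch of an override-rate monitor over a hypothetical
decision export. Thresholds are illustrative assumptions."""

def flag_suppressed_overrides(decisions: list[dict], floor: float = 0.02) -> bool:
    total = len(decisions)
    overrides = sum(
        1 for d in decisions if d["final_decision"] != d["ai_recommendation"]
    )
    rate = overrides / total if total else 0.0
    # An override rate near zero is a culture signal, not an accuracy signal:
    # flag it once the sample is large enough to mean something.
    return total >= 200 and rate < floor
```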
Third: Denial and reversal rates must be audited against physician-only benchmarks. If AI-assisted decisions result in denials at rates that diverge significantly from purely physician-driven decisions — particularly for post-acute care, skilled nursing, or other services where nH Predict has been deployed — that divergence is a liability signal. It is also precisely the type of internal data CMS and OIG have indicated they will examine, and that plaintiffs’ counsel will seek in discovery.
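A standard way to test whether such a divergence is real rather than noise is a two-proportion z-test on denial rates, sketched below with invented counts. The statistics are elementary; the point is that this is exactly the computation a regulator or plaintiffs' expert will run on your data if you do not run it first.

```python
"""Minimal sketch of a divergence audit: two-proportion z-test comparing
denial rates for AI-assisted vs. physician-only decisions. All counts
below are invented for illustration."""

from math import sqrt, erf

def denial_rate_divergence(ai_denied, ai_total, md_denied, md_total):
    p1, p2 = ai_denied / ai_total, md_denied / md_total
    pooled = (ai_denied + md_denied) / (ai_total + md_total)
    se = sqrt(pooled * (1 - pooled) * (1 / ai_total + 1 / md_total))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p1 - p2, z, p_value

# Example: an 18% AI-assisted denial rate vs. 9% physician-only.
diff, z, p = denial_rate_divergence(180, 1000, 90, 1000)
print(f"divergence={diff:.1%}, z={z:.2f}, p={p:.1e}")  # a liability signal
```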
Fourth: Federally reimbursed programs require heightened compliance review. Any AI tool used in Medicare Advantage, Medicaid, or other federally funded programs operates in a legally distinct environment with regulatory requirements that are not satisfied by general healthcare AI best practices. Coverage decisions must be based on the individual patient’s circumstances. AI predictions about population-level patterns are not individualized determinations. Any organization deploying AI in these programs should have compliance counsel review the implementation against CMS’s February 2024 guidance before — not after — a denial pattern emerges.
Fifth: Patient-facing transparency is becoming a legal expectation, not just an ethical preference. Patients have a right to understand how decisions about their care are being made. As this litigation develops, organizations that can demonstrate they disclosed AI’s role in the assessment process — and that a physician made the final determination — will be in a fundamentally different legal position than those that cannot.
The Standard Being Set Today Will Govern Tomorrow’s Cases
The Minnesota litigation is among the first prominent federal cases testing AI accountability in healthcare coverage decisions, and it is shaping how courts, regulators, and compliance programs think about these questions in real time. Each ruling — on discovery scope, on what internal governance documents must be produced, on how “decision-support” is legally distinguished from “decision-making” — will influence how healthcare organizations across the country approach AI governance and how they will be evaluated when they get it wrong.
The organizations that will be best positioned are not necessarily the ones that use AI least. They are the ones that use it most deliberately: with documented governance, physician accountability at every decision point, regular outcome audits, and a clear-eyed understanding that the patient in front of the clinician is never reducible to the database behind the algorithm.
AI can tell a clinician what happened to patients who looked like this patient on paper. Only the physician can determine what is happening to this patient, today, in this room — and what that patient actually needs.
That distinction is not only good medicine. In the current legal environment, it is the difference between a defensible practice and a discovery order.
Stephenson, Acquisto & Colman is a California-based healthcare reimbursement litigation firm with decades of experience representing healthcare providers in disputes involving payers, coverage denials, and regulatory compliance. If your organization has questions about AI governance in clinical or coverage decision-making, or about your exposure in the evolving regulatory environment, contact our office.
Sources & Citations
The Litigation
- Estate of Gene B. Lokken, et al. v. UnitedHealth Group, Inc., et al., U.S. District Court, District of Minnesota, Case No. 0:23-cv-03514 (JRT/SGE), filed November 14, 2023.
- Estate of Lokken v. UnitedHealth Grp., Inc., et al., No. 23-cv-3514 (JRT/SGE), Doc. 91, Memorandum Opinion and Order Granting in Part and Denying in Part Motion to Dismiss (D. Minn. Feb. 13, 2025).
- Estate of Lokken v. UnitedHealth Grp., Inc., et al., No. 23-cv-3514 (JRT/SGE), Doc. 162, Order Granting in Part and Denying in Part Motion to Compel (D. Minn. Mar. 9, 2026).
Investigative Journalism
- Ross, Casey, and Bob Herman. “Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need.” STAT News, March 13, 2023.
Federal Regulatory Guidance
- Centers for Medicare & Medicaid Services. “Frequently Asked Questions related to Coverage Criteria and Utilization Management Requirements in CMS Final Rule (CMS-4201-F).” HPMS Memo, February 6, 2024.
- Centers for Medicare & Medicaid Services. “Medicare Program; Contract Year 2024 Policy and Technical Changes to the Medicare Advantage Program.” CMS-4201-F. 88 Fed. Reg. 22120 (Apr. 12, 2023).
- Centers for Medicare & Medicaid Services. “Inpatient Rehabilitation Hospitals & Inpatient Rehabilitation Units.” Medicare Learning Network Provider Compliance Tips. Last modified November 25, 2025.
- Centers for Medicare & Medicaid Services. “Medicare Payment Systems: Inpatient Rehabilitation Facility Prospective Payment System.” Medicare Learning Network. Last modified November 25, 2025.
- U.S. Department of Health and Human Services. “Nondiscrimination in Health Programs and Activities.” 89 Fed. Reg. 37,522 (May 6, 2024).
- U.S. Department of Health and Human Services, Office of Inspector General. “Medicare Advantage Industry Segment-Specific Compliance Program Guidance.” February 3, 2026.
Federal Statutes and Regulations
- 42 C.F.R. § 422.101(b), (b)(6), and (c) — MA Coverage Criteria Requirements
- 42 C.F.R. § 422.566(d) — Physician Review Requirement for Adverse Coverage Decisions
- 45 C.F.R. § 92.210 — Nondiscrimination in the Use of Patient Care Decision Support Tools
- 31 U.S.C. §§ 3729–3733 — False Claims Act
- 42 U.S.C. § 18116 — ACA Section 1557
Additional Sources
- American Medical Association. “Principles for Augmented Intelligence Development, Deployment, and Use.” 2023.
- Hilton, John. “Judge gives UnitedHealth until April 29 to hand over AI claim denial docs.” InsuranceNewsNet, April 2, 2026.