On February 13, 2025, a federal court issued a highly anticipated ruling in Estate of Gene B. Lokken v. UnitedHealth Group (CASE 0:23-cv-03514-JRT-SGE)—denying UnitedHealthcare’s attempt to dismiss certain state law claims and allowing breach of contract and good faith claims to move forward. It’s a major development in a case I first discussed back in November 2023 (see my post here), when UHG was sued over AI-driven coverage denials under its Medicare Advantage plans. Given this new ruling, it’s a perfect time to revisit the original lawsuit’s claims and the broader legal risks that AI poses in healthcare.
Quick Recap of the Initial Lawsuit (Filed November 2023)
In my previous blog post, When AI Denies Your Healthcare: The UnitedHealthcare Lawsuit and the Legal Dangers of AI in Medicine, I examined the allegations that:
- naviHealth, a UHG subsidiary, used an AI tool called nH Predict to determine whether Medicare Advantage patients should receive post-acute care.
- This AI-driven approach allegedly overruled treating physicians, relied on rigid (and often inaccurate) predictions for patient recovery, and caused vulnerable beneficiaries to be discharged prematurely—sometimes with dire consequences.
I also highlighted five major legal risks for healthcare organizations using AI:
- AI and Insurance Discrimination
- Algorithmic Bias
- Privacy Violations and HIPAA Risks
- Medical Malpractice Liability
- Lack of AI Transparency (“Black Box” Problem)
(For a more detailed discussion of each of these legal risks, see my previous post here.) These are exactly the kinds of pitfalls I warned about, especially where AI tools end up denying legitimate claims.
The February 2025 Ruling: What Happened?
On February 13, 2025, the U.S. District Court for the District of Minnesota granted in part and denied in part UnitedHealthcare’s motion to dismiss. Here’s the gist:
1. Some State Law Claims Survived. The court ruled that breach of contract and breach of the implied covenant of good faith and fair dealing can proceed. According to the court, these claims primarily turn on whether UHG broke its own insurance contract provisions—particularly the policy language stating that coverage decisions would be made by clinical staff or physicians, not by an AI algorithm.
2. Most Other Claims Were Dismissed. The judge found that the Medicare Act’s broad preemption barred other claims—like unjust enrichment, insurance bad faith, and state statutory claims under consumer-protection or unfair-insurance laws—because they effectively regulate the same subject matter as Medicare’s federal coverage rules.
3. Exhaustion of Administrative Remedies Waived. Plaintiffs normally must exhaust a Medicare appeals process before filing a lawsuit. The court, however, waived that requirement, noting that many patients faced irreparable harm (e.g., being forced to forgo vital care) and that futility was likely if UHG repeatedly overturned any favorable appeal rulings by issuing new denials.
In short, the court concluded that UHG couldn't simply knock out the entire case by invoking Medicare preemption. While it did dismiss several state-law claims, the lawsuit will continue with the breach of contract and implied good faith claims, potentially exposing UHG to further discovery and liability.
Why It Matters for AI in Healthcare
1. Enforceability of AI-Driven Coverage. This ruling suggests that if your plan documents promise physician-led medical review, using AI as a near-automatic gatekeeper may open you to contract-based litigation—even if federal law preempts other claims.
2. Preemption Is Not a Silver Bullet. Medicare Advantage organizations sometimes rely on federal preemption to shield themselves from state law claims. Courts may still allow certain contract-based or good-faith claims to stand, particularly where an insurer’s own policy language is at issue.
3. Legal Complexity of AI “Substitute Decision-Making.” As I cautioned in November 2023, AI cannot simply replace human clinical judgment without rigorous safeguards. This case shows that courts will scrutinize how insurers or providers integrate algorithmic tools—especially if those tools lead to unexpected or “rubber-stamp” denials.
4. Revisiting AI’s Ethical and Operational Framework. In the bigger picture, the partial denial of UHG’s motion to dismiss highlights AI’s fragile position in healthcare. The technology must support, rather than undermine, a provider’s duty to deliver medically necessary care. As AI adoption grows, so does the risk of litigation when algorithms go against the grain of accepted clinical standards.
Practical Tips in Light of the New Ruling
🔹 Review Your Policy Language! If plan documents promise that "clinical staff" or "physicians" will make coverage decisions, health plans must be sure they're actually doing that; over-reliance on AI can trigger breach of contract claims if the AI effectively makes the final decisions. Likewise, hospitals and physician practices should confirm that their internal policies, disclaimers, and patient forms align with accepted clinical standards and HIPAA requirements, ensuring that AI-driven processes neither override human judgment nor expose protected health information (PHI) in unintended ways.
🔹 Fortify AI Governance! Establish rigorous oversight committees and clinical review protocols to ensure AI suggestions don't automatically translate into denials. Hospitals and large clinical settings can benefit from a dedicated AI oversight team that not only prevents unintended delegations of critical judgment to algorithms but also implements robust data protection measures, minimizing the risk of HIPAA violations and maintaining patient privacy.
🔹 Stay Current on Applicable Laws & Regulations. The latest Medicare Advantage regulations require compliance with Original Medicare coverage standards and prohibit undue barriers to medically necessary care, including denials based purely on algorithms. Similarly, staying up to date on HIPAA rules is crucial: as AI tools evolve, so do potential privacy vulnerabilities. Ensuring that your AI processes align with CMS directives and HIPAA's security and privacy standards can help avoid costly compliance pitfalls.
🔹 Anticipate Litigation. If you’re an MA organization or healthcare provider using AI, document every step—have clear, consistent procedures for appeals, physician overrides, and addressing patient or provider complaints. Physician groups, clinics, and hospitals should likewise keep meticulous records of both clinical decision-making and data-handling procedures, reducing the risk of malpractice disputes and allegations of privacy breaches under HIPAA or state laws.
Conclusion
This month's court ruling in Estate of Gene B. Lokken v. UnitedHealth Group is a wake-up call for the entire healthcare industry, underscoring that AI-based coverage denials can expose insurers to serious legal challenges. While Medicare law does preempt some claims, insurers remain accountable for what their own policies promise. In other words, no algorithm should overshadow a plan's contractual commitments or a physician's clinical judgment.
For anyone following the November 2023 lawsuit, this latest ruling serves as a broader cautionary tale:
While AI can speed up everything from coverage decisions to clinical diagnoses, it can also lead straight into legal quicksand if not managed properly.