Artificial intelligence (AI) is supposed to make healthcare smarter, more efficient, and—ideally—better for patients. But as UnitedHealth Group (UHG) recently learned, AI can also go horribly wrong, leading to denied care, regulatory scrutiny, and class-action lawsuits.
On November 14, 2023, UHG was sued for allegedly using an AI-powered claims review system to systematically deny post-acute care under Medicare Advantage plans (see Estate of Lokken v. UnitedHealth Group). The case isn’t just about one company’s missteps—it’s a wake-up call for the entire healthcare industry. AI in medicine comes with serious legal risks, from discrimination and malpractice to privacy violations and regulatory noncompliance.
Today’s post breaks down what happened in the UHG case, what it reveals about some of AI’s legal minefields, and how healthcare organizations can avoid becoming the next target of an AI-related lawsuit.
The UnitedHealthcare Lawsuit: When AI Gets It Wrong
At the center of the case is naviHealth, a UHG subsidiary that used an AI tool called nH Predict to evaluate whether Medicare Advantage patients should receive post-acute care. According to the lawsuit, the AI system:
🔹 Overruled treating physicians on medically necessary care.
🔹 Had an alleged 90% error rate: according to the complaint, roughly nine in ten of its denials that patients appealed were reversed.
🔹 Prematurely cut off care, forcing vulnerable patients to leave facilities before they were ready.
Imagine recovering from surgery in a rehab facility, only to be told by an algorithm: “You’re good to go—time to leave.” Never mind that your doctor disagrees—the AI has spoken.
The consequences were severe. Many patients allegedly suffered harm after being discharged too soon. Regulators quickly took notice. Although CMS has not publicly commented on this lawsuit specifically, it has long made clear that Medicare Advantage organizations must follow Original Medicare coverage criteria and may not impose barriers to medically necessary care. In its Contract Year 2024 Medicare Advantage Final Rule, for instance, CMS reinforced that coverage decisions should not conflict with clinically accepted standards or override treating physicians’ judgments, effectively putting MA plans on notice that purely AI-driven denials can lead to compliance violations.
This case is far from an isolated incident. It highlights deeper legal risks lurking in AI-powered healthcare—risks that are drawing increasing scrutiny from regulators, lawmakers, and patient advocates.
A Lawsuit Waiting to Happen
The UHG lawsuit exposes a harsh truth: AI in healthcare can be as legally risky as it is revolutionary. Here are five of the biggest legal dangers organizations must navigate.
1. AI and Insurance Discrimination: When Algorithms Violate the ACA
One of the most troubling allegations in the UHG case is that AI systematically denied care. But was it also discriminating?
Under the Affordable Care Act (ACA) (42 U.S.C. § 18001 et seq.), insurers cannot make discriminatory coverage decisions. Yet AI systems can unintentionally reinforce biases by learning from historical data, which may already reflect disparities in care.
For example, an AI model might:
🔹 Flag older or disabled patients as “high-cost,” leading to disproportionate coverage denials.
🔹 Make biased predictions based on race, gender, or socioeconomic factors if trained on non-representative data.
If AI outcomes disproportionately harm protected groups, healthcare organizations could face major civil rights lawsuits and regulatory crackdowns.
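To make that risk concrete, here is a minimal, hypothetical Python sketch of the kind of disparate-impact audit a compliance team might run over an AI tool’s denial decisions. The column names, the toy data, and the four-fifths (80%) threshold—borrowed from employment-law practice, not from any ACA rule—are assumptions for illustration only.

```python
# Hypothetical disparate-impact audit of AI coverage denials.
# Column names ("age_group", "denied"), the toy data, and the four-fifths
# threshold (borrowed from employment-law practice) are illustrative assumptions.
import pandas as pd

def denial_rate_audit(df: pd.DataFrame, group_col: str, denied_col: str = "denied"):
    """Flag groups whose approval rate falls well below the most favored group's."""
    approval = 1 - df.groupby(group_col)[denied_col].mean()   # approval rate per group
    impact_ratio = approval / approval.max()                  # vs. best-treated group
    flagged = impact_ratio[impact_ratio < 0.8]                # four-fifths rule of thumb
    return approval, flagged

# Toy data standing in for a real claims extract
claims = pd.DataFrame({
    "age_group": ["<65", "<65", "<65", "<65", "65+", "65+", "65+", "65+"],
    "denied":    [0,      0,     1,     0,     1,     1,     0,     1],
})

approval, flagged = denial_rate_audit(claims, "age_group")
print(approval)   # approval rate by age group
print(flagged)    # groups failing the 0.8 threshold (here: "65+")
```

An audit like this doesn’t prove discrimination, but a recurring pattern of flagged groups is exactly the kind of evidence plaintiffs and regulators will look for.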
2. Algorithmic Bias: AI Can Be Just as Flawed as Human Decision-Makers
AI is often seen as objective, but it’s only as good as the data it’s trained on. If that data contains biases, AI will amplify them.
Under the Civil Rights Act of 1964 (42 U.S.C. § 2000d) and the Americans with Disabilities Act (ADA) (42 U.S.C. § 12101 et seq.), discrimination in federally funded healthcare programs is illegal. Yet biased AI models can lead to unequal treatment.
Some real-world examples of AI bias in healthcare include:
🔹 Diagnostic disparities: AI trained primarily on white male patients may be less accurate for women and people of color.
🔹 Hiring discrimination: AI-driven tools used in medical hiring have already been found to favor white male applicants. If the same flawed logic applies to patient care, legal trouble is inevitable.
Regulators are paying attention. The Federal Trade Commission (FTC), Department of Justice (DOJ), and HHS Office for Civil Rights (OCR) have all signaled they will crack down on AI-driven discrimination—and lawsuits are sure to follow.
3. Privacy Violations: AI and HIPAA Compliance Risks
AI needs vast amounts of patient data to function, but individually identifiable health information is protected under the Health Insurance Portability and Accountability Act (HIPAA) (45 CFR § 164.502). Healthcare organizations using AI must be careful not to violate HIPAA’s strict privacy and security requirements.
Some major AI-related HIPAA risks include:
🔹 Failure to properly de-identify patient data under HIPAA’s de-identification standards (45 CFR § 164.514), leading to unauthorized exposure of PHI (a simplified sketch follows this list).
🔹 Sharing patient data with third-party AI vendors without proper patient authorization or a HIPAA Business Associate Agreement.
🔹 Automated decision-making tools inadvertently exposing private health details.
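For illustration, here is a minimal sketch of a few Safe Harbor-style transformations (45 CFR § 164.514(b)(2)) applied before records reach an AI pipeline. The field names are hypothetical, and a real de-identification program must address all 18 identifier categories, restricted three-digit ZIP areas, and the “no actual knowledge” standard—or rely on expert determination instead.

```python
# Minimal, illustrative sketch of a few Safe Harbor-style transformations
# (45 CFR § 164.514(b)(2)) applied before records reach an AI vendor.
# Field names are hypothetical; a real program must handle all 18
# identifier categories, restricted ZIP3 areas, and residual re-identification risk.
from datetime import date

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "email", "phone"}

def deidentify(record: dict) -> dict:
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "dob" in out:                      # dates reduced to year only
        out["birth_year"] = out.pop("dob").year
    if "zip" in out:                      # ZIP truncated to first 3 digits
        out["zip3"] = out.pop("zip")[:3]
    age = date.today().year - out.get("birth_year", date.today().year)
    if age > 89:                          # ages over 89 aggregated into one bucket
        out["birth_year"] = None
        out["age_bucket"] = "90+"
    return out

sample = {"name": "Jane Doe", "mrn": "12345", "dob": date(1931, 5, 4),
          "zip": "55901", "dx_code": "I50.9"}
print(deidentify(sample))
```

Even then, sharing de-identified or limited data sets with an AI vendor typically still calls for contractual safeguards—and anything that remains PHI requires patient authorization or a Business Associate Agreement, as noted above.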
4. Medical Malpractice: Who’s Liable When AI Makes a Bad Call?
Traditionally, if a doctor misdiagnoses a patient, they can be sued for malpractice. But what happens when an AI system makes the wrong call?
Under Restatement (Second) of Torts § 299A, healthcare providers must adhere to accepted medical standards. However, AI complicates liability questions:
⚠ Is the doctor at fault for trusting AI’s recommendation?
⚠ Is the hospital liable for implementing the AI system?
⚠ Is the software company that developed the algorithm responsible?
5. AI Transparency: The “Black Box” Problem
One of the biggest legal risks in AI-driven healthcare is the lack of transparency. Many AI systems operate as “black boxes,” meaning even their developers can’t fully explain how they reach their conclusions.
This is a problem because regulatory pressure for explainable AI in healthcare is growing under the 21st Century Cures Act (42 U.S.C. § 300jj-52). If a hospital or insurer can’t justify an AI-driven decision, it could face lawsuits from patients and scrutiny from regulators.
In the UHG case, one of the biggest concerns was whether nH Predict’s decision-making process was properly vetted. If AI is making life-altering choices, healthcare organizations need to be able to explain how and why.
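As a purely illustrative sketch—not a description of nH Predict—here is the kind of transparent, auditable model a team might prefer where decisions must be explained: every recommendation traces back to named inputs and documented weights. The feature names and training data below are invented.

```python
# Illustrative only: a transparent (interpretable) model whose per-feature
# weights can be shown to regulators, patients, and treating physicians.
# Feature names and data are invented; this is not how nH Predict works.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["days_since_surgery", "mobility_score", "comorbidity_count"]
X = np.array([[3, 2, 4], [10, 6, 1], [5, 3, 3], [14, 8, 0],
              [4, 2, 5], [12, 7, 1], [6, 4, 2], [15, 9, 0]])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = "continued care no longer recommended"

model = LogisticRegression().fit(X, y)

# A human-readable record of what drives each recommendation:
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")
print(f"intercept: {model.intercept_[0]:+.2f}")
```

Black-box models can still be used, but they need additional explanation tooling and documented human review so the organization can answer the “how and why” question when a decision is challenged.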
Conclusion: AI in Healthcare Needs Guardrails
The UnitedHealthcare lawsuit is a warning: AI in medicine isn’t a magic fix—it’s a legal and ethical minefield. For healthcare organizations investing in AI, the key lessons are:
✔ Transparency is non-negotiable. AI decisions must be explainable to regulators, patients, and providers.
✔ Bias must be actively mitigated. AI models should be trained on diverse data to prevent discrimination.
✔ Privacy cannot be an afterthought. AI must comply with HIPAA and protect PHI.
✔ AI should assist, not replace, human judgment. CMS has made it clear—AI should not be making final healthcare decisions.
AI has enormous potential to improve healthcare—but without proper implementation and oversight, the next big lawsuit is just around the corner. If healthcare organizations don’t tread carefully, the future of AI in medicine won’t be about innovation—it will be about damage control.
_________________________
Not sure if your HIPAA compliance program adequately addresses privacy risks with AI? Help is just a click away at legalhie.com/membership