
The agentic shift happened quietly. Last quarter, the AI in your tech stack was a tool. This quarter, it is making decisions. It flags transactions and freezes accounts before a fraud analyst sees them. It denies prior authorization for skilled nursing care. It scores job applicants on a five-point scale and discards everyone below a threshold no human ever reads. It drafts the credit memo that lands in front of an underwriter with a single signature at the bottom. Sometimes not even that.

When the call is right, nobody asks who made it. When it is wrong, three parties suddenly need lawyers: the vendor that built the system, the organization that deployed it, and the employee whose name sits on the workflow approval. The first wave of litigation is already on the docket. The answers are not what most enterprise risk frameworks assumed they would be.

What follows is the map. Not theory. The specific cases filed in 2025 and 2026, the statutes the courts are actually applying, and what each of them tells you about where the bill lands when an agent gets it wrong.

The triangle: vendor, deployer, approver

Three parties show up in every agentic AI dispute. The vendor built the model and licensed it. The deployer — usually the affected person's employer, insurer, or bank — chose to put it in front of real decisions. The approver is the human whose name authorized the workflow. Lawyers, until recently, treated the vendor relationship as a procurement matter and the approver as a low-risk signatory. Both assumptions are now wrong.

Courts are doing two things at once. They are pulling vendors into liability they thought they had contracted away, and they are holding deployers responsible for outcomes they cannot fully audit. The approver, in many jurisdictions, sits at the intersection — sometimes personally exposed under sectoral rules and increasingly named in regulatory enforcement. The contracts you signed two years ago were drafted for a world that does not exist anymore.

Healthcare: when the algorithm overrides the doctor

Start with the case that is now driving every healthcare general counsel's calendar. In Estate of Gene B. Lokken et al. v. UnitedHealth Group, filed in the District of Minnesota, the families of two deceased Medicare Advantage enrollees allege that UnitedHealth's nH Predict algorithm — built by its naviHealth subsidiary — was used to terminate post-acute care coverage in defiance of treating physicians. The plaintiffs claim the tool carries a roughly 90 percent error rate, meaning the overwhelming majority of denials it produced were reversed when patients had the resources to appeal. Most patients did not appeal. The economics worked.

UnitedHealth's defense was that nH Predict is a guide, not a coverage decision tool. The court was unpersuaded enough to let the case proceed. In February 2025, Judge John R. Tunheim allowed claims for breach of contract and breach of the implied covenant of good faith and fair dealing to survive, finding that those theories turned on whether UnitedHealth complied with its own promise that medical necessity would be determined by clinicians.

Then came the discovery order. On March 9, 2026, a federal magistrate judge directed UnitedHealth to turn over an extraordinarily broad set of internal documents: policies and procedures for post-acute claims back to 2017, every analysis of nH Predict, records relating to the naviHealth acquisition, and materials concerning government investigations into the company's use of AI in claims adjudication. A 2024 Senate investigation cited in the order found that UnitedHealth's denial rate for post-acute care more than doubled after naviHealth was integrated.

“The AI did not save the company from the contract. The contract promised clinicians, and an algorithm took the call.”

That is the doctrinal lesson hiding inside the discovery fight. The plaintiffs' winning theory is not that the AI was biased or that disclosures were missing. It is that the plan documents promised physician-led review, and an algorithm performed that function instead. If your plan, policy, or service agreement promises a human clinician, lawyer, or analyst as the decision-maker, deploying an agent in that seat is a contract problem first and a regulatory problem second.

Financial services: the FCRA flank attack

In financial services and HR, the fight has moved to a different statute. On January 20, 2026, two job applicants filed Kistler et al. v. Eightfold AI Inc. in California state court. The case is not a discrimination claim. It is a consumer-protection claim. The plaintiffs argue that Eightfold's Match Score — generated from public profiles, prior application history, and inferred attributes — is a consumer report, and Eightfold is a consumer reporting agency, under the Fair Credit Reporting Act. If they are right, FCRA's disclosure, authorization, and adverse-action machinery applies. With statutory damages of $100 to $1,000 per willful violation and a class that allegedly covers a billion profiles, the math gets uncomfortable in a hurry.
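
To see how quickly that math compounds, here is a back-of-envelope sketch in Python. The per-violation range is FCRA's willful-violation statutory damages described above; the class size in the example is purely hypothetical and far smaller than the billion profiles the complaint alleges.

```python
# Back-of-envelope FCRA exposure sketch (illustrative only).
# Willful violations carry statutory damages of $100 to $1,000 each,
# before punitive damages and attorneys' fees.

def fcra_statutory_range(class_size: int, low: int = 100, high: int = 1_000) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory-damage range assuming one
    willful violation per class member."""
    return class_size * low, class_size * high

# Even a hypothetical class far smaller than the alleged billion profiles
# dwarfs a typical vendor liability cap of twelve months of fees.
members = 100_000
lo, hi = fcra_statutory_range(members)
print(f"{members:,} class members -> ${lo:,} to ${hi:,} in statutory damages alone")
```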

Read it next to Mobley v. Workday, where Judge Rita Lin in the Northern District of California held that an automated screening vendor could be treated as the employer's agent — meaning the vendor is not merely selling software; it is performing a function previously done by humans. The case earned preliminary nationwide collective certification in May 2025. The two cases together form the spine of what employment lawyers now openly call the vendor liability squeeze.

The squeeze runs in both directions. The vendor's standard contract caps damages, disclaims compliance warranties, and restricts independent audits. The deploying employer is left legally responsible for outcomes it cannot fully audit, generated from data it cannot see, processed through logic it cannot explain. When the class action arrives, the vendor points at the employer, and the employer points at the vendor. Plaintiffs are happy to sue both.

The regulatory backdrop makes this worse, not better. The CFPB's 2024 circular treating algorithmic employment scores as FCRA-covered was rescinded in 2025. The statute itself did not change. Private plaintiffs are now the primary enforcement engine for the same theory that the agency endorsed. Rescinding guidance reduces federal pressure and increases litigation pressure. That is the opposite of relief.

ECOA: a federal retreat that opens a state lane

On April 22, 2026, the CFPB finalized changes to Regulation B, eliminating disparate-impact liability under the Equal Credit Opportunity Act. The bureau took the position that ECOA's text does not authorize an effects test, ending decades of fair-lending enforcement that did not require proof of intent. State attorneys general saw the move coming and stepped in. In July 2025, Massachusetts Attorney General Andrea Joy Campbell settled with a student lender whose AI underwriting model used a school-level cohort default rate that produced disparate outcomes for Black and Hispanic applicants. California, New Jersey, and others are signaling the same posture.

The federal adverse-action notice obligation has not gone anywhere. ECOA still requires a creditor to give specific, accurate principal reasons for any credit denial, regardless of how complex the model is. “The algorithm decided” is still not a defense. If your model cannot produce a coherent reason that maps to a real factor in the data, you are operating an unlawful system, not just an opaque one.
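
To make that concrete, here is a minimal, hypothetical sketch of the explanation problem. The feature names, contribution values, and reason-code mapping are invented for illustration; the point is only that every principal reason delivered to an applicant must trace to a real factor in the file, and any denial-driving factor with no coherent applicant-readable reason is exactly the gap ECOA exposes.

```python
# Hypothetical sketch of the adverse-action explanation problem.
# Feature names, contribution values, and reason text are invented.

contributions = {                    # signed contribution of each factor to the denial
    "debt_to_income": -0.42,
    "months_since_delinquency": -0.31,
    "cohort_default_rate": -0.18,    # proxy-style factor with no applicant-readable reason
    "revenue_trend": 0.05,
}

reason_text = {                      # applicant-readable principal reasons
    "debt_to_income": "Debt obligations are too high relative to income",
    "months_since_delinquency": "Recent delinquency on an existing obligation",
}

# Principal reasons are the factors that pushed hardest toward denial.
denial_factors = [f for f in sorted(contributions, key=contributions.get)
                  if contributions[f] < 0]

notice = [reason_text[f] for f in denial_factors if f in reason_text]
unexplained = [f for f in denial_factors if f not in reason_text]

print("Adverse-action notice reasons:", notice)
print("Denial-driving factors with no coherent reason:", unexplained)
# If "unexplained" is non-empty, the notice cannot be specific and accurate,
# no matter how carefully it is worded.
```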

The practical effect for a deploying institution: the federal floor moved, the state ceiling did not, and your model has to satisfy both. A national lender now has to keep documentation strong enough to defend against intentional-discrimination claims federally and effects-test claims at the state level, while also producing applicant-readable adverse-action explanations that hold up in CFPB exams. The same model. Three audiences. Different proof standards.

The diagnostic: contextual integrity in a five-minute conversation

Here is where the contextual integrity framework earns its keep. Helen Nissenbaum's five parameters — sender, recipient, information subject, transmission principle, and contextual norm — let you walk through any agentic deployment and find the breach point before a regulator does. Two scenarios make this concrete.

Scenario one. Marcus runs commercial credit at a regional bank. He deploys an agentic underwriting assistant that pulls from public filings, transaction patterns, and a third-party model that scores borrower “integrity signals.” The assistant drafts a memo, sets a recommended rate, and sends it to a human underwriter who approves nine out of ten without changes. A small business owner sues after a denial, alleging the integrity signals encoded protected-class proxies.

Run the parameters. The sender is no longer a loan officer; it is a model. The recipient is the underwriter, but functionally also the customer who receives the adverse-action notice. The information subject is the applicant, plus everyone in the training data. The transmission principle the customer expected was “my file is reviewed by a banker who can be questioned.” What is actually transmitted is a probabilistic score derived from sources the customer never knew about. The contextual norm of commercial lending is human judgment supported by data. Marcus's deployment inverted it: data judgment lightly supervised by a human. The contextual integrity breach is structural. ECOA and FCRA are downstream symptoms.

Scenario two. Priya is a privacy officer at a regional health system that licenses an agentic prior-authorization tool from a vendor used by half the country's payers. The tool reviews documentation, flags claims for denial, and routes the rest. A nurse case manager spends about ninety seconds per record. A patient's rehab stay is cut short, and her condition deteriorates.

Re-run the parameters. The sender the patient assumed was a clinician. The recipient she expected was her treating physician's recommendation flowing to a payer reviewer. The information subject is the patient, and her medical record was processed by a third party she never named in any consent. The transmission principle of medical necessity is clinical judgment. The contextual norm of utilization management has always involved a human reviewer with discretion, not an automated cutoff calibrated to length-of-stay predictions. Priya's exposure is the gap between what the plan documents promise and what the agent actually does. That is the same gap the Lokken plaintiffs are exploiting.

“Contextual integrity is not a compliance theory. It is a diagnostic that tells you which contract or statute is about to be enforced against you.”
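
For teams that want to run the same walkthrough as a repeatable checklist rather than a conversation, here is a minimal sketch. The field names follow the five parameters as used in this newsletter, and the example values paraphrase Marcus's lending scenario; nothing here is a formal implementation of Nissenbaum's framework.

```python
# Minimal sketch of the five-parameter walkthrough as a repeatable checklist.
# Field names follow the parameters as used above; example values paraphrase
# Marcus's lending scenario and are illustrative only.
from dataclasses import dataclass, fields

@dataclass
class InformationFlow:
    sender: str
    recipient: str
    information_subject: str
    transmission_principle: str
    contextual_norm: str

def breach_points(expected: InformationFlow, actual: InformationFlow) -> list[str]:
    """Return every parameter where the deployed flow departs from the flow
    the affected person reasonably expected."""
    return [f.name for f in fields(InformationFlow)
            if getattr(expected, f.name) != getattr(actual, f.name)]

expected = InformationFlow(
    sender="loan officer",
    recipient="underwriter who can be questioned",
    information_subject="the applicant",
    transmission_principle="file reviewed under human judgment",
    contextual_norm="human judgment supported by data",
)
actual = InformationFlow(
    sender="agentic underwriting model",
    recipient="underwriter approving nine of ten drafts unchanged",
    information_subject="the applicant plus everyone in the training data",
    transmission_principle="probabilistic score from undisclosed sources",
    contextual_norm="data judgment lightly supervised by a human",
)

print(breach_points(expected, actual))  # every parameter flags: the breach is structural
```

The value of writing it down is not the code; it is that expected and actual become two columns a lawyer, a privacy officer, and an engineer can all read and argue about before a plaintiff does.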

The European calculus: Article 26 is the deployer's bill

On the other side of the Atlantic, the picture is different and clarifying. The EU AI Act's high-risk obligations under Article 26 become binding on August 2, 2026. Deployers — not just providers — must implement competent human oversight, retain logs for at least six months, manage input data, conduct fundamental rights impact assessments (FRIAs) where required under Article 27, and inform affected workers and the individuals subject to AI-assisted decisions. Penalties for high-risk violations reach 15 million euros or 3 percent of global turnover, whichever is higher.

The cleaner answer Europe was supposed to provide on civil liability did not arrive. The AI Liability Directive was formally withdrawn in October 2025 after the Commission concluded there was no path to agreement. The directive would have eased the burden of proof for AI-harm plaintiffs through a rebuttable presumption of causality and court-ordered access to technical documentation. With it gone, fault-based liability defaults to the patchwork of national tort law. The revised Product Liability Directive picks up some slack — it now treats software and AI systems as products and applies strict liability for defects — but it does not cover service-side conduct or pure economic loss.

The deployer caught between US litigation and EU regulation now faces a curious asymmetry. In the United States, vendors are being pulled into liability through doctrines like agency and consumer reporting. In Europe, the formal civil liability route was closed, but the regulatory obligations on deployers are sharper, more documented, and enforced administratively. A multinational running the same agent on both continents needs two different defense postures.

What a privacy and AI governance lead does this quarter

Start with the contracts. Pull every active vendor agreement that touches a high-risk decision and read the indemnity, audit, and warranty clauses out loud. If the contract caps the vendor's liability at twelve months of fees while your exposure runs to per-violation statutory damages across millions of records, that gap belongs in the renewal, not in your incident report. Ask for FCRA, ECOA, and AI Act compliance representations in writing, audit rights with teeth, and a carve-out from the cap for regulatory penalties and class-action settlements traceable to vendor conduct. Vendors will resist. The Mobley decision is your leverage.

Next, the human-in-the-loop. If your workflow shows a ninety-second average review and a ninety-percent acceptance rate, you do not have human oversight. You have a rubber stamp with a name attached. The fix is not training. It is structural: queue limits, mandatory dissent fields, sample audits of approved cases, and metrics that track when humans actually overturn the model. Without those, the approver becomes evidence, not protection.
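
What those metrics look like in practice: a minimal sketch, assuming a review log that records, per case, the time a human spent, the human's decision, and the model's recommendation. The thresholds are illustrative, not regulatory standards.

```python
# Minimal oversight-metrics sketch. Assumes a review log recording, per case,
# the seconds a human spent, the human's decision, and the model's
# recommendation. Thresholds below are illustrative, not regulatory standards.
from statistics import mean

review_log = [
    # (seconds_spent, human_decision, model_recommendation)
    (85, "approve", "approve"),
    (95, "approve", "approve"),
    (240, "deny", "approve"),     # a genuine override, worth sampling for audit
    (70, "approve", "approve"),
]

avg_seconds = mean(t for t, _, _ in review_log)
override_rate = mean(1.0 if human != model else 0.0 for _, human, model in review_log)

flags = []
if avg_seconds < 120:
    flags.append("average review time too short for meaningful oversight")
if override_rate < 0.05:
    flags.append("humans almost never disagree with the model")

print(f"avg review: {avg_seconds:.0f}s, override rate: {override_rate:.0%}, flags: {flags}")
```

Tracked and ignored, numbers like these become the plaintiff's exhibit; tracked and acted on, they are the record that the human in the seat was empowered to disagree.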

Then the disclosures. Map every place a customer, applicant, employee, or patient is told who or what is making the decision affecting them. Where the answer is “a person,” and the reality is “an agent supervised by a person,” you have a contextual integrity breach masquerading as a technical detail. Fix the disclosure or fix the workflow. Both are acceptable. Doing neither is not.

Finally, document the EU side. If any of your high-risk systems touch the EU market — and the territorial reach of Article 26 is broader than most US deployers think — your conformity record, log retention, and FRIA need to be defensible by August 2, 2026. The Commission has not published an FRIA template. That is not an excuse to wait; it is a reason to write your own and align it with Article 27's express elements.

The takeaway

The unsettled liability question is not a reason to slow agentic deployment. It is a reason to deploy with the assumption that every agent decision will, eventually, be litigated. The organizations that will come out of the next eighteen months without a sentinel case attached to their name are the ones that can show, in writing, who promised what to whom, what the agent actually did, and how the human in the seat was empowered to disagree.

Privacy and AI governance leads are the natural owners of that record. The role is not to slow down the technology. It is to make sure the technology arrives at the decision the same way a competent human would, on a path the lawyers can defend and the customer can recognize. That is what contextual integrity is for. That is what the next round of cases will turn on.

About this newsletter

Remote Work Privacy Insights is a weekly read on workplace privacy, AI governance, and the regulatory ground shifting under both. Written by Dr. Edward Halle, FIP, CIPM, CIPP/US, AIGP, CAIE, LL.M., D.B.A., Privacy & AI Governance Practitioner and author of Rethinking Workplace Privacy, Power, and Productivity in the Age of Remote Work (2025). Each edition applies Helen Nissenbaum's contextual integrity framework to the evolving intersection of workplace privacy, AI governance, and regulatory compliance.

Disclaimer: Remote Work Privacy Insights examines workplace privacy issues through an academic lens. It is intended for educational purposes and is not legal advice. For guidance tailored to your organization, consult a qualified privacy or employment lawyer. The opinions expressed are the author's and not those of any employer.
