A Bot Joins the Call

Nobody announced it. There was no pop-up warning, no audio chime, no bold line in the calendar invite. Marcus, a remote team lead at a mid-sized financial services firm in New Jersey, had enabled his AI meeting assistant on all of his team’s recurring calls months ago. The tool was a game-changer for him — it transcribed every conversation, summarized action items, and saved him at least two hours a week in follow-up notes. He never gave it a second thought.

Then Priya joined the call.

Priya was a senior analyst based in Illinois, recently brought onto the team as part of a cross-functional project. She’d never heard of Fireflies.ai. She certainly hadn’t consented to it. But the moment she unmuted herself and began speaking, a cloud-based AI system captured her voice, analyzed her speech patterns, and — according to allegations now before a federal court — created and stored a biometric identifier derived from her unique voiceprint. All without her knowledge. All without her written consent. All in potential violation of one of the most consequential biometric privacy laws in the country.

This isn’t a hypothetical. This is the factual core of Cruz v. Fireflies.AI Corp., a class action lawsuit filed in December 2025 in the U.S. District Court for the Central District of Illinois. And if you haven’t heard about it yet, that’s exactly why I’m writing this edition.

The Tool Everyone Uses. The Risk Nobody Mapped.

AI meeting assistants — tools like Fireflies.ai, Otter.ai, Microsoft Copilot, and Zoom AI Companion — have become part of the furniture in most remote and hybrid workplaces. They auto-join calls. They transcribe in real time. They produce summaries, highlight action items, and sometimes even score conversation sentiment. In many organizations, they were deployed by one enthusiastic manager or IT administrator and then quietly spread across the enterprise, with no formal vendor assessment, no employee notice, and no privacy impact analysis.

That casual rollout is now colliding with serious legal exposure. Two active class action lawsuits — each targeting a widely used AI transcription platform — are forcing employers, privacy teams, and HR leaders to ask questions they probably should have asked two years ago.

The first case, Cruz v. Fireflies.AI Corp., No. 3:25-cv-03399, was filed in December 2025. The plaintiff, Katelin Cruz, is an Illinois resident who participated in a virtual meeting hosted by a nonprofit organization that had enabled the Fireflies.ai meeting assistant. Cruz alleges that the tool’s speaker recognition functionality created and retained a voiceprint derived from her voice, constituting a biometric identifier under Illinois law, and that it did so without the written notice and consent that the law expressly requires. She never signed up for Fireflies. She never agreed to its terms. She simply spoke in a meeting.

The second case, In re Otter.AI Privacy Litigation, is a consolidated action now before a federal judge in the Northern District of California. It alleges that Otter.ai not only recorded private conversations without the consent of all participants but then used those recordings to train its AI models — again without adequate disclosure. That second allegation matters enormously because it isn’t just about the recording. It’s about the downstream use of data that participants never agreed to provide in the first place.

No substantive rulings have been issued yet in either case. But employment attorneys are already paying close attention. As Bradford Kelley, a shareholder at Littler Mendelson, put it in a February 2026 analysis, AI transcription and recording in the workplace is “a hot issue.” That’s an understatement.

The Law Hasn’t Been Sleeping

To understand why these lawsuits carry real teeth, you need to understand the legal environment they’re operating in. And it starts with BIPA.

The Illinois Biometric Information Privacy Act — BIPA, for short — is the toughest biometric privacy law in the United States and the one with the most litigation history behind it. It requires any private entity collecting a biometric identifier — including voiceprints — to first provide written notice, obtain written consent, publish a publicly available data retention policy, and refrain from selling or profiting from that data. Crucially, BIPA includes a private right of action and statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation. In a class action involving thousands of participants across hundreds of recorded calls, that arithmetic gets uncomfortable very quickly.
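
To make that arithmetic concrete, here is a rough back-of-envelope sketch in Python. The participant counts and the assumption of one violation per person are hypothetical illustrations only; how courts count violations under BIPA (per person, per capture, or per collection event) is itself contested, so treat this as a feel for the order of magnitude, not a damages model.

```python
# Hypothetical BIPA exposure estimate. Assumes one violation per affected
# participant; courts have differed on how violations should be counted.
NEGLIGENT_DAMAGES = 1_000   # statutory damages per negligent violation
RECKLESS_DAMAGES = 5_000    # per intentional or reckless violation

def bipa_exposure(participants: int, per_person_violations: int = 1) -> dict:
    """Return low/high statutory exposure for a hypothetical class."""
    violations = participants * per_person_violations
    return {
        "violations": violations,
        "negligent_floor": violations * NEGLIGENT_DAMAGES,
        "reckless_ceiling": violations * RECKLESS_DAMAGES,
    }

# Illustrative only: 2,000 Illinois participants, one violation each.
print(bipa_exposure(2_000))
# {'violations': 2000, 'negligent_floor': 2000000, 'reckless_ceiling': 10000000}
```

Even under the most conservative counting, a modest class lands in seven figures before anyone argues about recklessness.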

The Cruz complaint specifically alleges that Fireflies.ai lacks both a publicly available retention policy and adequate disclosure practices regarding biometric collection. Those aren’t peripheral claims. They go directly to the core of what BIPA requires.

But BIPA is only part of the picture. Here’s the compliance layer that catches most employers completely off guard: all-party consent laws for recording.

Under the federal Wiretap Act and most state counterparts, one party to a conversation can consent to its recording on behalf of everyone else. That’s the one-party consent standard. But twelve states — California, Connecticut, Delaware, Florida, Illinois, Maryland, Massachusetts, Michigan, Montana, New Hampshire, Pennsylvania, and Washington — require all participants to consent before a conversation can be recorded. Invite an employee from any one of those states into a virtual meeting where an AI notetaker is running, and the consent framework shifts entirely.

Think about what that means in practice for a remote-first team. A single Zoom call with participants in New York (one-party state), California (all-party state), and Illinois (all-party state and BIPA jurisdiction) could trigger three separate and overlapping legal obligations — none of which most organizations have mapped, and none of which their AI meeting tools are designed to navigate automatically.

A single virtual meeting spanning three states can trigger three different consent frameworks — simultaneously.
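
Here is a minimal sketch of what that mapping exercise looks like in practice. The state list mirrors the twelve all-party jurisdictions named above, and treating the strictest rule present on the call as controlling for everyone is a conservative assumption on my part, not a statement of how any particular court would analyze it.

```python
# Sketch: derive the consent posture of a single call from participant
# locations. The state list follows the twelve all-party states cited above;
# the BIPA flag marks Illinois participants, whose voiceprints require
# written notice and consent.
ALL_PARTY_STATES = {
    "CA", "CT", "DE", "FL", "IL", "MD",
    "MA", "MI", "MT", "NH", "PA", "WA",
}

def consent_posture(participant_states: set[str]) -> dict:
    """Apply the strictest rule present on the call (conservative assumption)."""
    return {
        "all_party_consent_required": bool(participant_states & ALL_PARTY_STATES),
        "bipa_written_consent_required": "IL" in participant_states,
    }

# The scenario from the text: New York, California, and Illinois on one call.
print(consent_posture({"NY", "CA", "IL"}))
# {'all_party_consent_required': True, 'bipa_written_consent_required': True}
```

The logic is trivial; the hard part is knowing, before the call starts, where everyone actually is.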

What Contextual Integrity Reveals

Helen Nissenbaum’s theory of contextual integrity gives us a precise and powerful lens for analyzing what’s actually going wrong here. The theory holds that privacy is violated not simply when information is collected without consent, but when information flows in ways that violate the contextual norms under which it was originally shared. Every context — a meeting, a medical appointment, a job interview — carries implicit expectations about who receives information, in what form, and for what purpose. When those expectations are broken, privacy is breached.

Let’s run the five-parameter contextual integrity analysis on our Marcus and Priya scenario, because the framework does something that a simple legal checklist can’t: it shows us exactly where the norm violation occurs, and why it feels wrong even before you look at the statute.

1. Sender

Priya is the sender. She’s speaking in what she reasonably understands to be a professional team meeting — an internal discussion between colleagues. She’s sharing analytical observations, asking questions, perhaps voicing concerns or disagreements. She is sharing her voice, her thoughts, and — unknowingly — her biometric data. Her expectations about who is receiving that information are anchored entirely in the visible participants on the call.

2. Recipient

Priya believes the recipients are her colleagues — Marcus and the rest of the team. But the actual recipients include Fireflies.ai’s cloud infrastructure, its speaker recognition system, and whatever third-party subprocessors or model-training pipelines the data flows through downstream. None of those recipients was visible to Priya, and none was part of her reasonable expectation of the information flow.

3. Information Subject

The information subject is Priya herself — specifically, her voiceprint as a biometric identifier, combined with the substantive content of what she said, the timestamps of her speech, and potentially her communication patterns and sentiment. This isn’t just a transcript. Speaker identification technology produces a biometric profile anchored to an individual, which is why BIPA treats voiceprints the same way it treats fingerprints.

4. Transmission Principles

In a normal professional meeting, information flows under implicit norms of confidentiality and purpose limitation. What’s said in a team call stays within the team. It might be noted in minutes, shared with a manager, or referenced in a follow-up email — but it stays within the relational context of the workplace. The introduction of an AI meeting tool fundamentally disrupts this. The transmission principle shifts from “secure within team” to “transferred to a cloud platform for AI processing, model training, retention, and potential commercial exploitation” — without any disclosure that this shift occurred.

5. Contextual Norms

This is where the analysis lands hardest. The contextual norms of a professional meeting — even a recorded one — do not include biometric data collection, voiceprint creation, or use of one’s speech to train a commercial AI model. When Marcus enabled Fireflies, he altered the information environment of every meeting without disclosing that alteration to the other participants. Those participants, including Priya, operated under false assumptions about what the meeting context actually was. That gap between expectation and reality is precisely what contextual integrity identifies as a privacy violation, regardless of whether any law has been broken.
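
One way to see the gap is to lay the parameters out side by side, expected flow versus actual flow. The sketch below is only an illustration of how Nissenbaum’s framework applies to this scenario; the field names and descriptions are my own shorthand, not a formal analysis tool.

```python
# Illustrative only: the contextual-integrity parameters of the Marcus/Priya
# meeting, as participants expected them vs. as they actually occurred.
from dataclasses import dataclass, fields

@dataclass
class InformationFlow:
    sender: str
    recipients: str
    subject: str
    transmission_principle: str
    context: str

expected = InformationFlow(
    sender="Priya, speaking to her team",
    recipients="Visible meeting participants",
    subject="Her analysis and questions",
    transmission_principle="Confidential within the team",
    context="Internal professional meeting",
)

actual = InformationFlow(
    sender="Priya, speaking to her team",
    recipients="Participants plus vendor cloud, speaker-ID system, subprocessors",
    subject="Her words plus a voiceprint-based biometric profile",
    transmission_principle="Cloud processing, retention, possible model training",
    context="Meeting silently converted into a data-collection event",
)

# Every parameter that diverges marks a point where the contextual norm broke.
for f in fields(InformationFlow):
    if getattr(expected, f.name) != getattr(actual, f.name):
        print(f"{f.name}: expected '{getattr(expected, f.name)}' "
              f"-> actual '{getattr(actual, f.name)}'")
```

Only the sender stays constant. Everything else about the flow changed, and nobody told the people in the meeting.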

And in Illinois, the law has been broken, too.

The August 2026 Deadline Most Employers Haven’t Noticed

Domestic litigation is only part of the pressure building here. Beginning in August 2026, the EU AI Act’s high-risk AI classification framework will introduce a separate layer of obligation that could reach U.S. employers with European employees or operations. Under the Act, AI systems used for worker monitoring, performance assessment, and work management may qualify as high-risk systems — a classification that carries mandatory conformity assessments, transparency obligations, data governance requirements, and human oversight mechanisms.

Here’s the part that’s easy to miss: sentiment analytics and productivity scoring features, which are bundled into several AI meeting tools, are precisely the kind of functionality likely to trigger high-risk classification. If your AI notetaker is scoring how engaged your employees sound on calls, or flagging emotional states in transcripts, you may already be operating a high-risk AI system under the EU Act’s framework — and the clock is running.

For multinational employers, this compounds an already complex picture. Under the GDPR, consent to AI recording in a professional context must be freely given, specific, informed, and unambiguous from each individual whose data is processed. The model that says the meeting host’s consent covers everyone else doesn’t survive GDPR scrutiny. And data transfers from recordings processed by U.S.-based vendors must comply with international transfer mechanisms such as Standard Contractual Clauses — a requirement that many organizations deploying AI meeting tools have simply not addressed.

What Employers and Privacy Teams Need to Do Now

Neither of these lawsuits has produced a binding precedent yet. But waiting for precedent is not a compliance strategy. Here’s what the current legal and regulatory environment actually demands of privacy and HR leaders.

Map Your Consent Exposure Before Your Next Call

Start with a simple question: Do you know which AI meeting tools are running in your organization right now? Not just the ones IT approved. The ones that managers quietly enabled because they were free and convenient. The ones embedded in Microsoft Teams or Zoom as default features that nobody turned off. An enterprise AI inventory isn’t optional in 2026 — it’s the baseline. And for each tool on that list, the next question is: what states are your employees and external participants in when these calls happen?

That geographic mapping exercise is non-trivial but essential. If your employees are distributed across California, Illinois, and Massachusetts, you are operating in three all-party consent jurisdictions simultaneously. Your AI meeting tools need to be configured to announce their presence at the start of every call, and your meeting policies need to specify who can authorize recording and under what conditions.

Turn Off What You Haven’t Audited

Speaker identification and voice recognition features are where the biometric liability lives. If your AI meeting tool has the ability to attribute statements to individual speakers — and most of them do — you should understand exactly how it creates those attributions, where that data is stored, how long it’s retained, and whether your vendor has a publicly available retention and destruction policy. If they don’t, that’s already a BIPA red flag. Consider disabling voice recognition features entirely until you’ve completed a proper vendor assessment and, where required, obtained written consent from all affected employees.

Conduct a Vendor Assessment That Goes Beyond the Security Checklist

The Cruz complaint’s allegation that Fireflies.ai lacks a publicly available data retention policy is instructive. Standard IT security questionnaires typically don’t ask about biometric data governance or BIPA compliance. Your AI meeting tool vendor assessment needs to ask: Does the platform create voiceprints or other biometric identifiers? What is the published retention schedule for biometric data? Does the vendor train its models on customer recordings, and if so, can customers opt out? Are there data processing agreements that address biometric data specifically? What subprocessors have access to recording data?

These are not hypothetical questions. The Otter.ai lawsuit turns in significant part on allegations that the platform used meeting recordings for AI model training without adequate disclosure. Your vendor due diligence should specifically surface that risk.

Update Your Workplace AI Policy

If your organization has a workplace AI policy — and if it doesn’t, that’s a conversation for a different edition — it almost certainly doesn’t address AI meeting assistants with enough specificity. A complete policy should identify which tools are approved for use, define who can authorize their activation in a meeting context, specify what notice must be given to participants before recording begins, address retention and deletion timelines, and clarify whether and under what conditions recordings can be used for purposes beyond notetaking. External participants — clients, candidates, contractors, consultants — deserve particular attention, because they’re the ones least likely to have any awareness that an AI tool is running.

Consider the Discovery Implications

This one doesn’t get enough attention. AI-generated meeting summaries, transcripts, and speaker attributions are discoverable documents in employment litigation. A February 2026 analysis from Fisher Phillips flagged that AI-generated ESI — especially from notetakers, meeting summaries, and auto-drafted communications — is becoming a core discovery battlefield in employment cases. If a manager’s AI meeting tool produced a transcript that characterizes what a terminated employee said in a performance discussion, that transcript may be central evidence in a wrongful termination or discrimination claim. You need to know where that data lives, how long it’s retained, and whether your litigation hold procedures reach it.

Back to Marcus and Priya

Marcus never set out to create a privacy problem. He was trying to be more organized. He found a tool that worked, and he used it. That’s a story playing out in thousands of remote workplaces right now, and in most of them, nobody has asked the follow-up questions.

But here’s what Marcus didn’t know: his employer is the one holding the liability. When an employee deploys an AI meeting tool in the course of their work, it’s the organization — not the tool vendor, not Marcus personally — that bears primary responsibility for ensuring that the deployment complies with applicable consent requirements, biometric data laws, and contextual information norms. Fireflies’ terms of service explicitly place responsibility on the customer to ensure that “suitable safeguards and consents” are in place. Marcus’ company accepted those terms when it signed up. The question is whether anyone on the privacy or legal team ever read them.

Priya, for her part, has a legitimate grievance. She participated in a professional meeting under a reasonable set of assumptions about how her voice and words would be used. Those assumptions were violated — not by anyone’s malice, but by a combination of technological convenience, organizational inattention, and a legal framework that simply hasn’t kept pace with how quickly these tools have proliferated.

The contextual norm of a team meeting is not “my voiceprint will be captured, processed, and retained by a cloud AI platform I’ve never heard of.” Until organizations design their AI meeting tool deployments around that reality, they’re building liability one call at a time.

The Broader Signal

The Fireflies.ai and Otter.ai lawsuits are early signals, not outliers. As AI tools become more deeply embedded in workplace communication, the consent and biometric privacy questions they raise will become harder to ignore — legally, regulatorily, and reputationally. Courts are beginning to define the liability framework. Regulators are watching. And employees, increasingly aware of their rights, are starting to ask questions that most HR leaders aren’t yet prepared to answer.

The organizations that will navigate this well aren’t the ones waiting for binding precedent. They’re the ones conducting vendor assessments now, updating their AI policies now, and designing employee notice mechanisms that treat consent as a genuine practice rather than a legal formality. They understand that privacy in the remote workplace isn’t just a compliance obligation — it’s a condition of trust.

And trust, once broken by a bot that nobody announced, is very hard to rebuild.

Disclaimer: Remote Work Privacy Insights is a newsletter that examines workplace privacy issues through academic frameworks. It is intended to educate and is not legal advice. For guidance tailored to your organization, consult a qualified privacy or employment attorney. The opinions expressed are the author’s own and not those of any employer.

