I want to start somewhere uncomfortable.

Over the past three editions, we've been building a case together — that the real problem with workplace monitoring isn't monitoring itself, but monitoring that workers don't know about, don't understand, and have no power to contest. We've talked about contextual integrity, about trust, about why transparency is the only durable foundation for any system that touches how people work.

And today I want to get specific. Because three things are happening right now in workplaces across the country — things that many leaders have either normalized or don't even know they've deployed — that I'd describe, without hesitation, as genuine privacy crises.

I've spent years in this field, reviewing data protection programs, advising organizations through regulatory examinations, and editing peer-reviewed research. What I'm watching unfold in 2025 and 2026 troubles me in a way that routine compliance questions don't. The scale is different. The permanence of the harms is different. And the gap between what employers are doing and what employees actually know about it is extraordinary.

So let's talk about it plainly. Not with a checklist, but with the kind of honesty this topic deserves.

What We're Actually Dealing With

Before we get into the three crises, a few numbers that I think should stop you cold: according to a comprehensive February 2025 survey of 1,500 employers and 1,500 employees, 74% of U.S. employers are now using online tracking tools — real-time screen monitoring, web browsing logs, application tracking. 75% monitor physical workplaces through video surveillance and biometric access controls. And 61% have deployed AI-powered analytics to measure employee productivity or behavior.

And only 22% of employees believe they know whether any of this is happening to them.

That isn't a communication gap. That is a structural feature of how these systems get deployed. And it's the foundation from which everything else follows.

Crisis One: We've Crossed a Line With Biometric Data, and Most Companies Haven't Noticed

Let me tell you something: the statute's drafters understood this problem before most technology vendors did.

When Illinois passed the Biometric Information Privacy Act (BIPA) back in 2008, the legislature made a finding that I consider foundational to everything we should be thinking about today. The text reads plainly: biometrics are "biologically unique to the individual; therefore, once compromised, the individual has no recourse." You can reset a password. You cannot reset your fingerprint. You cannot reset your face geometry. You cannot reset your retinal pattern.

That single insight is why biometric data sits in a fundamentally different legal and ethical category from everything else organizations collect about workers — and why treating it like any other operational dataset is a serious mistake.

Yet here's what's actually happening. 67% of U.S. employers now collect biometric data, including fingerprints and facial recognition. Behavioral biometrics — AI systems that identify individuals by their gait, their posture, the rhythm of their keystrokes — are becoming standard in warehouses and logistics operations. Camera systems in ambulances analyze facial expressions for signs of fatigue. Hand geometry scanners control medication access in hospitals. These aren't outliers. They're industry standard practice.

The compliance picture is becoming increasingly treacherous. BIPA — which remains the nation's most stringent biometric privacy law — requires written notice of what's being collected and why, written consent before collection, a publicly available retention and destruction policy, and prohibits any commercial profit from biometric data. Violations carry damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation. Given that many workplace systems collect biometric data with every single clock-in or scan event, class action exposure can reach staggering figures. The litigation landscape across states with biometric statutes is active and growing.
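To make that exposure concrete, here's a back-of-the-envelope sketch in Python. The per-violation damages come straight from the statute; the workforce size, scan frequency, and the assumption that every scan event accrues as a separate violation (a question that has itself been heavily litigated) are hypothetical inputs for illustration, not legal analysis.

```python
# Back-of-the-envelope BIPA exposure estimate. The statutory figures are
# real; every operational input below is a hypothetical illustration.

NEGLIGENT_DAMAGES = 1_000   # per negligent violation (740 ILCS 14/20)
RECKLESS_DAMAGES = 5_000    # per intentional or reckless violation

def bipa_exposure(employees: int, scans_per_day: int, work_days: int,
                  per_violation: int = NEGLIGENT_DAMAGES) -> int:
    """Worst-case exposure if every scan event accrues as a separate violation."""
    return employees * scans_per_day * work_days * per_violation

# Hypothetical: a 500-person warehouse, 4 fingerprint scans per shift,
# 250 working days per year, negligent-tier damages only.
exposure = bipa_exposure(employees=500, scans_per_day=4, work_days=250)
print(f"Illustrative annual exposure: ${exposure:,}")  # $500,000,000
```

Even under the most conservative tier, the multiplication is what makes biometric systems categorically different from other compliance risks: every scan is a potential accrual event.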

At the federal level, the Department of Justice's Bulk Data Transfer Rule, which took effect in 2025, now restricts large-scale transfers of sensitive personal data — including biometric identifiers — to certain foreign countries and entities. The FTC's Policy Statement on Biometric Information signals that unfair or deceptive practices around biometric collection and use are firmly within the Commission's enforcement mandate.

And then there's a May 2025 Harvard Business Review analysis of the human dimension that I think belongs in every board-level conversation about these systems: when biometric collection feels coercive, or when it's linked — even informally — to performance evaluation, it produces lower morale, measurable mental health strain, and, in a result that should give pause to anyone who installed these systems to improve operations, reduced productivity.

So here's the question every compliance and HR leader should be able to answer before deploying any biometric system: Can we achieve the same operational goal without collecting permanently irreplaceable biological identifiers?

In most cases, the honest answer is yes. Access control functions perfectly well with badge-plus-PIN systems. Time tracking doesn't require fingerprint scanners. When you can achieve the goal another way, and you choose the biometric route anyway, you've created legal exposure and ethical obligation that didn't need to exist.

Where biometric collection is genuinely necessary, BIPA's framework gives you the compliance roadmap: informed, written consent; a published retention and destruction schedule; strict limits on who can access the data; and an absolute prohibition on using it for anything beyond its stated purpose. Even in states without a BIPA equivalent, following this framework is increasingly the de facto legal standard — and it's what a reasonable employer should be doing regardless of geography.

One more thing: biometric data collected for authentication absolutely cannot flow into the same system being used for productivity monitoring. The moment those two datasets touch, you have fundamentally changed the nature of what you collected and the relationship under which you collected it. That's not just an ethical problem. Under an expanding set of state privacy statutes, it may be an illegal one.
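One way to make that separation real is to enforce purpose limitation at the application layer, so authentication templates simply cannot be read by anything else. Below is a minimal sketch with hypothetical names and an illustrative retention period; it shows the principle, not any particular product or statutory requirement.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass(frozen=True)
class BiometricRecord:
    """A stored biometric template tagged with its sole permitted purpose."""
    employee_id: str
    template: bytes
    purpose: str               # declared at collection, e.g. "access_control"
    collected_on: date
    retention_days: int = 365  # published retention schedule (illustrative)

    def expired(self, today: date) -> bool:
        return today > self.collected_on + timedelta(days=self.retention_days)

def fetch_template(record: BiometricRecord, requested_purpose: str,
                   today: date) -> bytes:
    """Release a template only for its declared purpose, within retention."""
    if requested_purpose != record.purpose:
        # Authentication data must never feed productivity analytics.
        raise PermissionError(
            f"purpose mismatch: collected for {record.purpose!r}, "
            f"requested for {requested_purpose!r}")
    if record.expired(today):
        raise PermissionError("retention elapsed; template must be destroyed")
    return record.template
```

The design choice worth noting: the purpose travels with the record itself, so a later request from an analytics pipeline fails loudly rather than silently repurposing the data.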

Crisis Two: The Productivity Paranoia Trap — And Why Surveillance Makes It Worse

I want to be direct with you about something that I see repeatedly when I review monitoring programs: the surveillance apparatus that organizations have built to manage their anxiety about remote work productivity is, in most cases, making that problem worse. Not marginally. Measurably, demonstrably worse.

Here's the research that should be at the center of every "return to surveillance" decision.

Great Place to Work's longitudinal analysis, which tracked more than 800,000 employees and has been updated through 2025, found that productivity at the Fortune 100 Best Companies — 97 of which support remote or hybrid work — is nearly 42% higher than at typical U.S. workplaces. A 2025 McKinsey analysis found that hybrid workforces are approximately 5% more productive than either fully remote or fully in-office configurations. Stanford economist Nicholas Bloom — whose research has tracked remote work outcomes since 2012 — reported in updated 2024 findings that hybrid schedules produce output equivalent to or greater than full in-office work in roughly 70% of measured job categories. The U.S. Bureau of Labor Statistics found a statistically significant positive association between remote work adoption and total factor productivity across 61 private-sector industries.

The evidence is consistent. Remote and hybrid work, well-managed, does not destroy productivity. It frequently improves it.

And yet: a 2025 Microsoft Work Trend Index found that 85% of business leaders still struggle to feel confident that employees working outside the office are being productive. That anxiety — entirely disconnected from the research — is what's driving organizations toward keystroke logging, screenshot capture, mouse movement tracking, and real-time screen monitoring. Not evidence. Anxiety.

The U.S. Government Accountability Office's recent investigation into workplace digital surveillance found that the secrecy surrounding these tools is one of the most significant sources of employee stress — creating what GAO described as a sense of constant surveillance that erodes the boundary between legitimate management and invasive control. A 2023 American Psychological Association report reached the same conclusion: employees in heavily monitored environments are more likely to feel burned out, emotionally exhausted, and less motivated than those who aren't monitored. Forty-five percent of workers in high-surveillance environments report elevated stress, compared with 28% in less monitored settings.

Put those numbers together, and what you have is a feedback loop: leaders anxious about productivity install surveillance tools; those tools increase stress and reduce motivation; actual performance suffers; the performance data gets misread as evidence that more surveillance is needed. Round and round.

The legal framework is also evolving in ways that should concern any organization that has deployed monitoring software without clearly disclosing it. The Consumer Financial Protection Bureau has warned that invasive monitoring tools may implicate Fair Credit Reporting Act obligations when used to evaluate or score employees. The National Labor Relations Board has signaled that covert monitoring of employee communications may interfere with protected labor organizing activities. And California's expanding privacy regime continues to create disclosure and purpose-limitation obligations that apply to employee monitoring data.

The compliance path here is actually straightforward, even if implementing it requires a management culture change. Disclose what you monitor, specifically and in plain language. Tie your monitoring to documented business purposes that a reasonable employee would recognize as legitimate. Give employees access to the data collected about them and a meaningful process to contest inaccuracies. And most importantly: measure outcomes, not activity proxies. Project completion, quality of deliverables, and team collaboration — these are things you can justify monitoring because they reflect actual work. Screenshots every ten minutes and keystroke counts don't reflect actual work. They reflect whether someone is sitting at a computer with their hands on the keyboard, which is not the same thing.
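To make the contrast concrete, here is a minimal sketch of what outcome-based measurement looks like in code: aggregate delivery and quality metrics, with no per-person activity telemetry anywhere in the data model. The structure and field names are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Deliverable:
    """One unit of actual work output; note the absence of activity telemetry."""
    team: str
    due: date
    delivered: Optional[date]   # None = still open
    defects_found: int

def outcome_metrics(items: list[Deliverable]) -> dict[str, float]:
    """Outcome measures for a team's deliverables: on-time rate, defect rate."""
    done = [d for d in items if d.delivered is not None]
    if not done:
        return {"on_time_rate": 0.0, "avg_defects": 0.0}
    on_time = sum(1 for d in done if d.delivered <= d.due)
    return {
        "on_time_rate": on_time / len(done),
        "avg_defects": sum(d.defects_found for d in done) / len(done),
    }
```

Nothing in that data model can tell you whether someone's hands were on a keyboard at 2 p.m. — which is exactly the point.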

Crisis Three: AI Emotion Tracking — The Surveillance That Shouldn't Exist

This is the one that I find most alarming, and I want to be careful to explain precisely why, because the legal and ethical dimensions here are more subtle — and more serious — than most organizations appreciate.

There is a meaningful distinction between what an employee communicates and what an AI system infers about their psychological state from that communication. When someone emails a colleague that a software system has crashed again and she's worried the team won't meet the client deadline, she has shared a piece of work-relevant operational information. She has not consented to share her stress levels, her emotional resilience score, or her burnout risk factor. But that is exactly what an AI sentiment analysis or emotion-tracking system extracts from that message — a psychological inference that she never intended to share and that no legitimate employment relationship has ever required her to disclose.

This is the contextual integrity violation at the heart of workplace emotion AI, and it's severe.
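To see that gap concretely, here is a purely illustrative sketch of the two data objects in play: what the employee actually communicated, and what an emotion-analysis pipeline might claim to derive from it. The inferred fields are hypothetical, not any real vendor's output schema.

```python
# Purely illustrative: the content of a message versus the psychological
# inferences an emotion-analysis pipeline might attach to it.

message = ("The build system crashed again and I'm worried we won't "
           "make the client deadline.")

communicated = {                       # what she chose to share
    "topic": "system failure",
    "operational_risk": "client deadline at risk",
}

inferred = {                           # what she never consented to share
    "stress_level": 0.82,              # opaque score with no ground truth
    "burnout_risk": "elevated",
    "emotional_resilience": 0.41,
}
```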

Despite significant scientific uncertainty about whether emotion recognition systems actually work as advertised, their market size is projected to reach $446.6 billion by 2032. The GAO found that these systems can misread the emotional tone of workers of certain racial or national backgrounds, penalize accents, and reinforce gender stereotypes. Workers with disabilities may be flagged as underperformers when the system simply isn't designed to accommodate diverse communication and work patterns. The discrimination risk is not theoretical. It is a documented, predictable consequence of applying systems trained on narrow behavioral datasets to a diverse workforce.

The European Union has been unambiguous about how it views these technologies. Article 5(1)(f) of the EU AI Act — Regulation (EU) 2024/1689 — explicitly prohibits "the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions," with exceptions only for medical or safety purposes. This prohibition became enforceable on February 2, 2025. Organizations that violate it face fines of up to €35 million or 7% of global annual turnover — whichever is higher. That's not a guidance document. That's the law.

The EU's reasoning, stated in Recital 44 of the Act, is worth sitting with: "expression of emotions varies considerably across cultures and situations, and even within a single individual." The Act prohibits these systems not because regulators wanted to be difficult, but because the scientific consensus supporting their reliability simply does not exist. We are talking about technology that makes high-stakes employment decisions — about who is engaged, who is at risk, who should be promoted or managed out — based on inferences that peer-reviewed science cannot validate.

In the United States, federal regulation of workplace emotion AI has not yet caught up with what Europe has done. But the direction of travel is clear. State-level action is accelerating. The FTC's expansive unfair and deceptive practices authority reaches AI systems that produce discriminatory or misleading outputs. The EEOC has affirmed that employment decisions made or influenced by biased algorithmic tools are actionable under existing anti-discrimination law. And any organization using sentiment analysis or emotion-tracking tools on U.S. employees should, at a minimum, be conducting a thorough impact assessment and documenting the business justification for each use case — because the enforcement environment is tightening.

The path forward here is cleaner than it might seem. The real problems — burnout, disengagement, poor communication — have real solutions that don't require surveillance. Regular anonymous pulse surveys give you aggregate organizational data without individual surveillance. Structured one-on-one conversations give managers actual information about what obstacles their people are facing. Addressing the structural causes of workplace stress — unrealistic workloads, unclear expectations, inadequate resources — changes the conditions that create the problem, rather than monitoring people's emotional responses to those conditions.
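As one sketch of how "aggregate data without individual surveillance" can be enforced in the reporting layer: suppress any breakdown smaller than a minimum cohort, so no result can be traced back to a person. The threshold of five is a rule of thumb I'm assuming here, not a legal standard.

```python
MIN_COHORT = 5  # suppress any slice small enough to identify individuals

def aggregate_pulse(responses: dict[str, list[int]]) -> dict[str, float | None]:
    """Average 1-5 pulse scores per team, suppressing small cohorts.

    Teams with fewer than MIN_COHORT responses return None instead of a
    number, so the report can never expose a near-individual result.
    """
    return {
        team: sum(scores) / len(scores) if len(scores) >= MIN_COHORT else None
        for team, scores in responses.items()
    }

# The two-person team is suppressed; the larger one is reported in aggregate.
print(aggregate_pulse({"platform": [4, 3, 5, 4, 4, 3], "design": [2, 5]}))
# {'platform': 3.8333333333333335, 'design': None}
```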

The technology is not neutral. Every organization deploying emotion AI on its workforce is choosing to assume liability, create discriminatory risk, and conduct surveillance that the EU has determined is incompatible with fundamental human rights in an employment context. That is a significant set of consequences to accept in exchange for insights that the science suggests are unreliable.

How to Evaluate Any Monitoring Tool — Five Questions That Matter

I want to leave you with something practical, because the "what do we do" question is where most organizations get stuck.

Before deploying — or continuing to operate — any monitoring system, these five questions should have documented answers. If they don't, that's your signal.

Is it necessary? What specific, documented business problem does this system address? Has anyone seriously evaluated less invasive alternatives? The burden of justifying surveillance belongs with the organization, not with the employees who will be subject to it.

Is it proportionate? Confirming that a project was completed doesn't require keystroke logging. Verifying facility access doesn't require retinal scanning. The scope of your monitoring should match the scope of your actual business need, not your maximum technological capability.

Is it transparent? Not in the sense that you've published a privacy notice somewhere in the onboarding paperwork, but genuinely transparent: do your employees know, in language they can understand, what is collected about them, how it's used, who sees it, how long it's kept, and whether it connects to any employment decision?

Is it symmetric? Can employees access the information collected about them? Can they contest it, add context, or request correction? Surveillance that moves in only one direction — from employer to employee, without recourse — is not a management tool. It's a power imbalance.

Does it measure outcomes or behavior? Work is about what gets done, not whether someone's hands are on a keyboard. Systems designed around outcomes and collaborative effectiveness can sometimes be justified. Systems designed around behavioral proxies almost always generate more problems than they solve.
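If it helps to operationalize these questions, here is a minimal sketch of them as a pre-deployment review record that must be fully answered before a tool goes live. The field names are mine, not drawn from any statute or framework.

```python
from dataclasses import dataclass

@dataclass
class MonitoringReview:
    """Pre-deployment answers to the five questions.

    Field names are illustrative, not a regulatory schema."""
    tool: str
    business_problem: str             # necessity: the documented problem
    alternatives_considered: str      # necessity: less invasive options
    scope_matches_need: bool          # proportionality
    plain_language_disclosure: bool   # transparency
    employees_can_access_data: bool   # symmetry
    employees_can_contest: bool       # symmetry
    measures_outcomes: bool           # outcomes, not behavioral proxies

    def approved(self) -> bool:
        """Deploy only when every question has a documented, passing answer."""
        return (bool(self.business_problem.strip())
                and bool(self.alternatives_considered.strip())
                and self.scope_matches_need
                and self.plain_language_disclosure
                and self.employees_can_access_data
                and self.employees_can_contest
                and self.measures_outcomes)
```

If any field can't be honestly filled in, that's your signal — the documentation gap is the decision.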

The Bottom Line

There's a version of this conversation where I tell you that getting workplace privacy right is mostly about compliance — staying current with BIPA amendments, tracking state privacy legislation, ensuring your AI vendors are conducting proper impact assessments. All of that matters. I mean it when I say that the legal environment around these three issues is tightening quickly, and organizations that aren't actively managing their monitoring programs are accumulating exposure they may not see coming.

But the bigger story isn't really about compliance. It's about what kind of organization you want to run.

The research on this is remarkably consistent: organizations that build on trust outperform those that build on surveillance, in practically every metric that matters — talent retention, engagement, innovation, long-term productivity. Great Place to Work's 2025 data shows 81% of employees at the Fortune 100 Best Companies describing their workplace as psychologically and emotionally healthy, against 45% at typical U.S. workplaces. That gap has business consequences that compound over time.

Surveillance doesn't build that. Transparency does. Fairness does. Genuine managerial investment in people's ability to do their work — without constantly looking over their shoulders — does.

The three crises I've described aren't inevitable features of the technological landscape. They're choices. And there's still time to choose differently.

If you found this useful, forward it to one colleague who's making decisions about monitoring technology in your organization. Those conversations are worth having before the compliance deadline, not after.

Disclaimer: Remote Work Privacy Insights is a newsletter that examines workplace privacy issues through an academic lens. It is intended for educational purposes and is not legal advice. For guidance tailored to your organization, consult a qualified privacy or employment attorney. The opinions expressed are the author's own and do not reflect those of any employer.
