
Contextual Integrity: The Framework That Explains Why Privacy Isn’t Binary

Last week, I shared something from my doctoral research that tends to stop people mid-sentence when I mention it at conferences. Employees don’t hate being watched. They hate being watched inappropriately. The same worker who filed a grievance over keystroke monitoring had absolutely no problem with her manager tracking project milestones. That apparent contradiction isn’t a contradiction at all. And today I want to give you the framework that explains it.

It’s called contextual integrity. Helen Nissenbaum developed it, and in my view, it’s the single most useful analytical tool available to anyone working in workplace privacy. Not because it’s complicated — it isn’t — but because it maps onto how human beings actually think and feel about their information. Once you understand it, you can’t unsee it. You’ll start recognizing norm violations in every poorly designed monitoring system you encounter. And unfortunately, you’ll encounter a lot of them.

Privacy Isn’t a Secret. It’s a Flow.

Most people still carry a binary idea of privacy in their heads. Something is either private or public. Hidden or disclosed. If you share information with someone, you’ve given up your privacy claim over it. This is the thinking behind that tired old line: “If you have nothing to hide, why do you care?”

Nissenbaum’s insight, laid out in her landmark 2004 paper Privacy as Contextual Integrity in the Washington Law Review, is that this framing is simply wrong. Privacy isn’t about secrecy. It’s about appropriate information flows. And what’s appropriate depends entirely on context.

Think about your last appointment with your doctor. You probably told them things you wouldn’t tell your closest friends. You described symptoms, medications, and lifestyle habits. A nurse may have been in the room. Your file was accessible to other providers in the practice. None of that felt like a privacy violation. Why? Because those information flows fit the norms of a healthcare context. Doctors receive health information to provide care. That’s what the context demands.

Now imagine your doctor mentioned your diagnosis at a dinner party. Same information. Same person sharing it. But now there’s a clear violation — one most of us would feel viscerally before we could even articulate why. The context changed. The norms changed with it. The flow became inappropriate.

“Contextual integrity maintains that privacy violations can invariably be traced to breaches of context-relative informational norms.” — Helen Nissenbaum, Privacy in Context (2010)

That one sentence from her 2010 book carries the whole argument. That uncomfortable feeling when something about how your data is being used seems ‘off’? That’s your internal contextual integrity detector. It works even when you don’t have the vocabulary to explain it.

The Five Parameters: A Diagnostic You Can Actually Use

Nissenbaum identifies five parameters that define any contextual information norm. Together they form a diagnostic framework that’s remarkably practical. When a monitoring practice generates resistance or legal exposure, you can almost always trace it back to a failure in one or more of these parameters.

 1. The Data Subject

This is the person the information is about. In workplace contexts, that’s usually the employee. But context matters here, too. A CEO’s schedule and movements are subject to higher transparency expectations — public accountability is part of the role. A junior analyst’s moment-by-moment computer activity isn’t, as long as they’re delivering results.

I’ve seen companies apply uniform monitoring policies across the entire organization without asking this question. When a VP of Compliance and an entry-level customer service rep are subject to identical activity tracking, someone hasn’t done the contextual analysis.

 2. The Sender

Who — or what — generates the information? This parameter carries more weight than most compliance professionals realize.

Consider two scenarios. In the first, a remote employee uses a project management platform to post daily status updates. They’re the sender. They’re consciously generating information and choosing to share it. In the second, background software logs every application they open, every website they visit, and the precise duration of each activity. Here, the software is the sender, operating without active participation from the employee — and often without their genuine awareness, regardless of what the onboarding paperwork said.

In my doctoral research, this distinction mattered enormously. Employees who actively generated data reported feeling in control. Employees subject to passive, automated data collection reported feeling surveilled. Same outcome — the employer has data — radically different experience.

 3. The Recipient

Who receives this information fundamentally shapes how appropriate it feels. Your direct manager seeing your project completion rates is one thing. A faceless analytics team at corporate headquarters reviewing screenshots of your home office every ten minutes is quite another.

Here’s a real pattern I’ve observed repeatedly. A software company implemented productivity tracking that made developers’ commit rates visible to their immediate team. The team accepted it. They knew each other. The data was relevant to their collaborative work. When the same data was opened to executives three organizational levels up — people who had never met the developers — the team revolted. Same information. Same numbers. Different recipient. Completely different response.

The recipient parameter also extends to third parties. When employees discover that an external vendor’s algorithm is analyzing their activity patterns, the reaction is almost always sharper than if the same analysis were being done internally. The FTC has noted this concern in its ongoing examination of commercial surveillance practices, and it’s increasingly appearing in state-level privacy litigation.

 4. Information Type

What, specifically, is being collected? This is where I see the most obvious failures in workplace monitoring design.

A contact center needs to know whether calls are being answered within service level targets. Whether customer satisfaction scores are trending up or down. Whether representatives are available during their scheduled shifts. These are outcome-based metrics directly connected to job performance. They fit the employment context.

What they don’t need — and what has no contextual justification — is keystroke frequency, mouse movement patterns, idle-versus-active time ratios, or screenshots captured every few minutes. That data doesn’t answer any legitimate business question about job performance. It answers questions about whether employees look busy. Those are not the same thing. And the NLRB’s recent guidance on electronic monitoring has made clear that employers who conflate the two may face unfair labor practice exposure under the National Labor Relations Act.

 5. Transmission Principle

This is the parameter that pulls everything together. Under what terms does the information flow? Is it transparent? Is it consensual in any meaningful sense? Is it proportionate to the business need? What happens to the data once collected?

Two companies I’ll call Meridian Financial and Apex Services both monitored customer service representatives. Both recorded calls. But the transmission principles were completely different.

At Meridian, supervisors randomly selected five calls per month for quality review. Representatives knew the practice existed, understood it was for coaching, could request to hear their own recordings, and received feedback privately. Data was retained for 90 days and used solely for development purposes.

At Apex, every call was recorded and processed by an algorithm that flagged “problematic” keywords. Representatives found out about issues only when confronted with them. There was no access to their own recordings, no defined retention limit, and no clarity about what else the data might be used for.

Meridian built a compliance culture. Apex built a culture of fear. Turnover at Apex in the following year ran nearly three times the industry average. The monitoring didn’t improve performance — it destroyed the trust that makes performance possible.
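To make the five-parameter diagnostic concrete, here is a minimal sketch of how a monitoring practice could be compared against an established contextual norm. The class and function names are my own illustration, not anything from Nissenbaum's framework, and a real audit is a stakeholder conversation, not a string comparison; the point is only that every flow can be described by the same five fields.

```python
from dataclasses import dataclass

# Hypothetical sketch: each information flow described by Nissenbaum's
# five parameters, so a proposed practice can be held against the norm.
@dataclass(frozen=True)
class InformationFlow:
    data_subject: str            # who the information is about
    sender: str                  # who or what generates it
    recipient: str               # who receives it
    information_type: str        # what, specifically, is collected
    transmission_principle: str  # the terms under which it flows

def flag_deviations(practice: InformationFlow, norm: InformationFlow) -> list[str]:
    """Return the parameters where the practice departs from the norm.
    A non-empty result is a signal to stop and re-examine the design,
    not an automatic verdict that the practice is wrong."""
    return [
        param for param in (
            "data_subject", "sender", "recipient",
            "information_type", "transmission_principle",
        )
        if getattr(practice, param) != getattr(norm, param)
    ]

# Example: the developer commit-rate scenario from the text.
norm = InformationFlow(
    data_subject="developer",
    sender="developer (self-logged commits)",
    recipient="immediate team",
    information_type="commit rates",
    transmission_principle="transparent, coaching only, 90-day retention",
)
practice = InformationFlow(
    data_subject="developer",
    sender="developer (self-logged commits)",
    recipient="executives three levels up",
    information_type="commit rates",
    transmission_principle="transparent, coaching only, 90-day retention",
)
print(flag_deviations(practice, norm))  # → ['recipient']
```

Notice that the sketch reproduces the developer story exactly: four parameters identical, one changed, and that single changed parameter is where the revolt came from.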

 Why Remote Work Changed Everything

The shift to remote work didn’t just change where people work. It changed the context of employment. And because it changed the context, the norms governing appropriate information flows shifted with it. Many organizations never noticed.

In a physical office, certain observations are ambient and expected. A manager walking through the floor sees who’s at their desk, who’s in a meeting, who stepped out for lunch. Employees expect this level of incidental visibility. It’s part of the office context.

But a home is not an office. When companies tried to replicate office-level visibility through digital surveillance of home environments, they imported office norms into a completely different context — one where the employee has a fundamentally different expectation of privacy. Software that takes periodic screenshots doesn’t just capture whether someone is at their computer. It can capture family photos on the wall. A child walking through the background. A medical prescription on the desk.

The EEOC’s guidance on workplace privacy and emerging state legislation — including New York Civil Rights Law § 52-c’s notice requirements for electronic monitoring — reflect exactly this recognition. Legislators are catching up to what employees already knew: home-based remote work carries different contextual expectations than office work, and monitoring practices that ignore this will generate resistance, litigation, and turnover.

 A Tale of Two Companies

Case Study

Let me walk you through how this plays out in practice. These are composite scenarios drawn from patterns I’ve encountered across my research and advisory work.

Clearview Analytics had 200 remote engineers. Leadership was frustrated with project delays and wanted better visibility into how time was being spent. Their first instinct was to deploy activity monitoring software — keystroke logging, screenshots every fifteen minutes, and idle time reports.

Before they rolled it out, their privacy lead — credit to her — ran a contextual integrity audit. She asked a simple question: What does leadership actually need to know? The answer was project progress, technical blockers, and capacity planning. None of that required keystroke data. None of it required screenshots.

They redesigned. Engineers logged work through project management tools they were already using. Status updates went to immediate teams, not to a central analytics dashboard. Everything was transparent. Engineers could see the same reports their managers saw.

Six months later: voluntary turnover had dropped by half. Project completion rates improved. And not a single screenshot had been captured.

 Pinnacle Services took the other path. They deployed comprehensive activity monitoring without a contextual analysis, reasoning that disclosed monitoring is legally compliant monitoring. They were right about the disclosure — they sent the required notice — but wrong about the rest.

Within weeks, the monitoring system was generating automated warnings for “too many unaccounted activities,” which turned out to include bathroom breaks. Two employees were terminated after the system flagged their mouse activity as suspicious; they had been using physical mouse movers, which they bought specifically because the monitoring made them anxious about appearing idle during legitimate work.

Pinnacle settled two wrongful termination claims. They lost four senior employees to competitors who didn’t monitor at this level. And they never did answer the original business question about productivity, because the data they were collecting didn’t actually measure productivity.

The lesson isn’t that monitoring is wrong. The lesson is that monitoring without contextual integrity produces data that answers the wrong questions — and destroys the trust that would have made the right answers available.

 Three Questions Before You Deploy Anything

I use a three-question contextual integrity audit in my advisory work. It’s not complicated. It takes about an hour with the right stakeholders in the room. And it has saved more than one organization from an expensive mistake.

First: Does this align with established norms for our employment context? Would a reasonable employee, with full knowledge of their role and your industry, expect this type of monitoring? If you’re implementing something that would genuinely surprise your workforce when disclosed, that surprise is your signal. It means you’re importing a monitoring practice from a different context where it might be appropriate, or inventing one that doesn’t fit any established norm.

Second: Can we articulate a transparent, justified, and proportionate transmission principle? You should be able to explain clearly what you’re collecting, why that specific data serves a legitimate business purpose, who has access to it, how it’s used in decisions, how long it’s retained, and how employees can access their own records. If any of those answers are murky, the transmission principle isn’t ready. Under the NIST Privacy Framework, this kind of documented, proportionate data practice is foundational to responsible privacy governance.

Third: Is the information type necessary and proportionate to your actual business need? Not your surveillance appetite — your business needs. Are you measuring outcomes or activities? Could you answer the same business question with less invasive data? A general rule I offer clients: if you can’t explain the data collection to a new employee on their first day in a way that sounds reasonable, you probably can’t justify it legally or ethically either.
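The three questions above can be sketched as a simple pre-deployment gate. The structure below is my assumption, not a tool from the author's advisory practice: in reality each answer is a judgment reached with the right stakeholders in the room, but treating any unanswered or failed question as a hard stop captures the spirit of the audit.

```python
# Hypothetical sketch of the three-question contextual integrity audit
# as a pre-deployment gate. Question keys and wording follow the text.
AUDIT_QUESTIONS = {
    "context_norms":
        "Does this align with established norms for our employment context?",
    "transmission_principle":
        "Can we articulate a transparent, justified, proportionate transmission principle?",
    "necessity":
        "Is the information type necessary and proportionate to the actual business need?",
}

def audit_gate(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Pass only when every question has a documented 'yes'.
    Returns (passed, open_questions) so the gaps stay explicit."""
    failed = [AUDIT_QUESTIONS[key] for key, ok in answers.items() if not ok]
    unanswered = [q for key, q in AUDIT_QUESTIONS.items() if key not in answers]
    open_questions = failed + unanswered
    return (len(open_questions) == 0, open_questions)

# A deployment with one 'no' and one question never asked does not pass.
passed, gaps = audit_gate({"context_norms": True, "transmission_principle": False})
```

The design choice worth noting: the gate returns the open questions rather than a bare yes/no, because the value of the audit is in surfacing exactly which parameter needs more work before rollout.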

 From Surveillance to Partnership

The organizations that have figured out remote work — really figured it out, not just survived it — share a common orientation. They built information flows that work with their employees rather than around them.

That means transparent dashboards that employees and managers see together. Outcome-focused metrics that measure what actually matters to the business. Clear contextual boundaries that acknowledge the difference between a home and an office. Data practices designed to identify where people need support, not to build case files for discipline.

The evidence is consistent. The American Psychological Association’s research on workplace surveillance shows that monitoring is perceived as controlling rather than supportive, reduces performance, increases turnover intention, and generates exactly the disengagement it’s designed to prevent. Contextual integrity violations don’t just feel wrong. They actively undermine the productivity they’re meant to improve.

Next week, we’ll get into how these principles intersect with the emerging state-level electronic monitoring statutes — there are now more than a dozen, each with different notice requirements, consent standards, and enforcement mechanisms, and the patchwork is getting genuinely complicated for multi-state employers. If you’ve got questions about how any of this applies to your organization, reply to this email or connect with me.

#RemoteWork  #Privacy  #AIGovernance  #DataProtection  #ContextualIntegrity  #EmployeePrivacy  #FutureOfWork  #Compliance

About the Author

Dr. Edward Halle is a privacy and AI governance practitioner and published author. He holds the FIP, CIPM, CIPP/US, AIGP, and CAIE credentials, an LL.M. and D.B.A., and has completed AI Ethics and Governance studies at Oxford University. He is the author of Rethinking Workplace Privacy, Power, and Productivity in the Age of Remote Work (2025) and Intrapreneurship: How to Create a Company of Intrapreneurs.

Remote Work Privacy Insights | hallprivacy.beehiiv.com | © 2025 Edward Halle

Reach Edward at [email protected] or on LinkedIn
