Something strange is happening inside companies right now. A new kind of actor is scheduling your meetings, reading your emails, pulling your performance data, and — in some organizations — flagging you for a disciplinary conversation with HR. It is not a person. It is not exactly software in the old sense either. It is an autonomous AI agent, and almost nobody has thought carefully about what it means from a privacy standpoint.
Last edition, we introduced the landscape of agentic AI in the workplace — what it is, why it is accelerating, and why it makes privacy professionals nervous. This edition goes deeper. We are going to interrogate this new workplace actor through the sharpest analytical lens available to us: Helen Nissenbaum's contextual integrity framework. And what we find when we do that is genuinely unsettling.
The UK Information Commissioner’s Office put it plainly in its January 2026 Tech Futures report: agentic AI can both exacerbate existing data protection issues and introduce new ones — particularly as human oversight becomes more difficult when agents operate autonomously. That is a careful regulator’s way of saying the rules we built for humans do not map cleanly onto machines that act like humans. We need to think harder.
What Is an AI Agent, Really?
Let's be precise. An AI agent is not a chatbot. You do not type a question and wait for an answer. An agent observes its environment, sets sub-goals, calls external tools and APIs, takes actions, and iterates — all without a human authorizing each step. Your HR platform's scheduling agent does not ask permission to read the last six months of your calendar before it proposes a performance review window. Your expense-management agent does not pause to check whether it is appropriate to cross-reference your location data with your submitted receipts. It just does it.
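To make that loop concrete, here is a minimal sketch in Python of the observe, plan, act, iterate cycle. Every name in it (the tools, the planner, the stopping rule) is a hypothetical illustration, not any vendor's actual API. The point is what is missing: no step asks a human for authorization.

```python
# Minimal, hypothetical sketch of an agentic loop. All names are illustrative.
# Note the absence of any step that asks a human to approve an action.

def agent_loop(goal, tools, observe, plan, max_steps=10):
    """Run an autonomous loop until the planner decides the goal is met."""
    state = observe()                        # read the environment, unprompted
    for _ in range(max_steps):
        step = plan(goal, state)             # the agent picks its own next action
        if step is None:                     # the planner says the goal is complete
            return state
        tool_name, args = step
        state = tools[tool_name](state, **args)   # act, then iterate on new state
    return state

# Toy wiring: a "calendar" tool the agent reads without asking anyone.
tools = {"read_calendar": lambda state, months: {**state, "events": months * 20}}
observe = lambda: {"events": 0}
plan = lambda goal, state: (("read_calendar", {"months": 6})
                            if state["events"] == 0 else None)

final_state = agent_loop("propose a review window", tools, observe, plan)
print(final_state)   # {'events': 120}: six months of calendar read, no consent step
```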
Strata's 2026 guide on agentic AI security reports something that should give every CHRO and privacy officer pause: non-human identities now outnumber human employees roughly 50 to 1 in the average enterprise, and 80 percent of IT leaders report agents acting outside expected parameters. Eighty percent. Think about that for a moment — four out of five organizations deploying these systems have already watched an agent do something unanticipated.
This is not a fringe technology concern. This is already the operational reality inside financial institutions, healthcare systems, law firms, and technology companies. The question is not whether to engage with it. The question is whether we are governing it.
“Four out of five organizations deploying AI agents have already watched one act outside expected parameters. This is not a future risk. It is a current operating condition.”
The Contextual Integrity Framework: A Quick Refresher
Helen Nissenbaum’s contextual integrity framework — developed in her foundational work Privacy in Context (Stanford University Press, 2010) — holds that privacy is not violated simply because information flows. Privacy is violated when information flows in ways that are inappropriate to the context in which that information was originally shared. A medical record flowing from your physician to a specialist is appropriate. That same record flowing to your employer is not. The context governs the norm. The norm governs appropriateness.
Nissenbaum identifies five parameters that define any information-sharing context:
CI Parameter 1 — Sender: Who is originating the information flow?
CI Parameter 2 — Recipient: Who is receiving it?
CI Parameter 3 — Information Subject: Whose information is it?
CI Parameter 4 — Transmission Principles: Under what rules or constraints is it being shared?
CI Parameter 5 — Contextual Norms: What does the originating social context expect about how this information moves?
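For readers who think in schemas, the five parameters can be restated as a simple record. The sketch below is purely illustrative; the field names and the example values are mine, echoing the physician example above, not anything from Nissenbaum's text.

```python
from dataclasses import dataclass

# Illustrative only: Nissenbaum's five parameters restated as a record.
@dataclass
class InformationFlow:
    sender: str                          # who originates the flow
    recipient: str                       # who receives it
    subject: str                         # whose information it is
    transmission_principles: list[str]   # rules or constraints on the sharing
    contextual_norms: list[str]          # what the originating context expects

flow = InformationFlow(
    sender="treating physician",
    recipient="specialist",
    subject="patient",
    transmission_principles=["HIPAA minimum necessary"],
    contextual_norms=["shared for treatment, not employment decisions"],
)
```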
Apply all five parameters to a human sending a performance assessment and the analysis is relatively clean. Now apply them to an autonomous AI agent — and every single parameter becomes contested terrain.
Five Parameters. Five Failures. A Scenario-by-Scenario Analysis.
CI Parameter 1: The Sender — Who Is Speaking When the Agent Speaks?
In a human organization, the sender of information carries moral weight. When a manager sends a performance note to HR, that person is accountable — legally and professionally — for what they said and why. The sender's identity anchors a chain of responsibility.
Now consider what happens when an AI agent is the sender.
Scenario — Marcus and the Autonomous Performance Flag
Marcus is a senior data analyst at a regional bank. He has been working remotely for three years with consistently positive reviews. His employer recently deployed an agentic AI platform to assist HR with workforce analytics. One Monday morning, his HR business partner receives an automated alert: the agent has flagged Marcus for "declining engagement indicators" based on its analysis of calendar density, email response latency, Slack activity patterns, and meeting attendance metadata collected over 90 days.
The agent sent that communication. Not Marcus’s manager. Not an HR professional who reviewed the underlying data. The agent assembled the inference, generated the narrative, and routed it to HR autonomously.
Who is the sender? Legally, the employer is responsible for the tool’s outputs. But the manager never made a judgment about Marcus. The HR partner is acting on a machine-generated inference. And Marcus has no idea any of this has happened.
The sender parameter has collapsed. No human sender is exercising contextual judgment about what is appropriate to share about Marcus and when. There is a system that has been configured to surface patterns — and it is doing exactly what it was designed to do, without any evaluation of whether doing so respects the contextual norms of the performance management relationship.
Venable LLP's February 2026 analysis makes this point clearly: existing legal frameworks still apply to agentic systems — but managing compliance in environments that act continuously and adaptively creates novel operational issues that those frameworks were not designed to address. Someone must be the accountable sender. Right now, in most deployments, nobody is.
CI Parameter 2: The Recipient — Where Does the Information Actually Go?
In traditional workplace settings, information flows through defined channels: manager to HR, HR to legal, department head to CHRO. The recipient is known, identifiable, and — at least in theory — bound by role-appropriate confidentiality expectations.
Agentic AI systems shatter this clarity. An agent does not route information to one recipient. It may write to a database, trigger a downstream agent, surface findings to a dashboard accessed by multiple roles, and simultaneously log outputs for audit purposes — all in a single transaction.
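A schematic of that fan-out, with hypothetical sink names, shows why "the recipient" stops being a single answerable question: one output, four audiences, one transaction.

```python
# Hypothetical sketch of the fan-out described above. Each sink stands in
# for a real system; here each just records which audience saw the finding.

recipients_reached = []

def write_to_database(f):        recipients_reached.append("analytics DB readers")
def trigger_downstream_agent(f): recipients_reached.append("leave-planning agent")
def publish_to_dashboard(f):     recipients_reached.append("dashboard viewers")
def append_audit_log(f):         recipients_reached.append("audit and IT staff")

def emit_finding(finding):
    # One transaction, four audiences: the "recipient" is a cascade, not a person.
    for sink in (write_to_database, trigger_downstream_agent,
                 publish_to_dashboard, append_audit_log):
        sink(finding)

emit_finding({"subject": "employee", "signal": "wellness risk score"})
print(recipients_reached)   # four distinct audiences from a single output
```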
Scenario — Priya and the Multi-Agent HR Ecosystem
Priya is a financial services compliance officer at a mid-sized asset management firm. During a particularly stressful quarter, she sent several late-night emails to colleagues flagging regulatory concerns. She occasionally left her video camera off during team meetings. Her calendar showed recurring midday blocks marked "personal."
The firm’s HR agent, integrated with its communication surveillance platform, ingested these signals and generated a wellness risk score. That score was then passed to a second agent responsible for leave management planning, which used it to model Priya’s potential absence likelihood. A third agent — the workforce optimization tool — incorporated that probability into headcount projections shared with senior leadership.
Priya’s personal calendar blocks, her late-night emails, and her camera-off preferences traveled — agent to agent to agent — until they informed a C-suite planning document. She never consented to any of it. She does not know it happened. And the information that began in the context of professional communication ended up embedded in an executive strategy report.
The recipient parameter in Nissenbaum’s framework depends on the idea that information flows to a bounded, identifiable audience. In multi-agent systems, there is no bounded audience. There is a cascade. And each step in that cascade moves further from the contextual norms under which the information was originally shared.
The ICO's January 2026 Tech Futures report specifically flags that the complexity of data flows may make it challenging to identify data about a particular individual or amend it, meaning it is difficult to comply with individual rights requests. When an agent has written Priya’s wellness score into four downstream systems, which data controller is responsible for her right of access?
CI Parameter 3: The Information Subject — The Employee Who Doesn’t Know
The information subject is the person the data is about. In traditional privacy frameworks, a central protection is transparency: people should generally know that their information is being collected, for what purpose, and to what effect. Notice and awareness are foundational.
Agentic AI systems undermine this systematically. The information subjects — employees — are frequently unaware that an agent is observing their behavior, generating inferences, or transmitting findings. The monitoring is not always announced. The inferences are not disclosed. The transmission happens in the background.
Washington State's proposed HB 2144 directly addresses this gap: it would require advance written notice to employees before employers implement AI tools or electronic monitoring for performance evaluation purposes. That bill reflects a growing legislative recognition that employees as information subjects are being systematically kept in the dark.
Illinois went further. Under HB 3773, effective January 1, 2026, employers must notify workers when AI is integrated into decisions affecting hiring, firing, discipline, tenure, and training. The information subject must know. That is now a legal requirement in Illinois — and the compliance gap in most organizations is enormous.
"If an employee doesn’t know that an agent observed, inferred, and transmitted information about them, the contextual integrity framework is violated at the most foundational level. Consent cannot be given. Context cannot be respected. Privacy cannot be protected." |
CI Parameter 4: Transmission Principles — Who Set the Rules, and Can the Agent Follow Them?
Transmission principles are the implicit or explicit conditions under which information moves. Medical information flows under HIPAA's minimum necessary standard. Employee health accommodations flow under ADA confidentiality protections. Performance evaluations flow under documented HR procedures that include manager review and HR sign-off.
The transmission principle is not just a legal constraint. It is a social promise — an expectation baked into the relationship between sender and recipient that the information will be used only for its stated purpose, with appropriate safeguards, and no further.
Agentic AI systems struggle enormously with transmission principles for a simple reason: they optimize for task completion, not for contextual appropriateness. An agent tasked with improving workforce utilization will pull whatever data it has access to — unless it is explicitly prohibited from doing so. And most organizations have not yet built those explicit prohibitions.
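What an explicit prohibition can look like in practice is a deny-by-default access guard that every tool call passes through. A minimal sketch, with hypothetical agent and data-source names:

```python
# Minimal sketch of a deny-by-default access guard (hypothetical names).
# The agent may only touch data sources explicitly listed for its documented
# purpose; everything else raises rather than silently flowing.

AGENT_POLICY = {
    "expense_agent": {
        "purpose": "verify receipts and process reimbursements",
        "allowed_sources": {"receipts", "expense_policy", "approval_queue"},
        # badge access, location, calendar: not listed, therefore forbidden
    }
}

def guarded_read(agent_id: str, source: str):
    policy = AGENT_POLICY[agent_id]
    if source not in policy["allowed_sources"]:
        raise PermissionError(
            f"{agent_id} may not read '{source}'; "
            f"its documented purpose is: {policy['purpose']}"
        )
    return f"data from {source}"   # stand-in for the real fetch

guarded_read("expense_agent", "receipts")        # permitted
# guarded_read("expense_agent", "badge_access")  # raises PermissionError
```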
Scenario — The Expense Agent That Became a Surveillance Tool
A financial services firm deploys an expense management agent to streamline the reimbursement process. The agent's stated purpose is to verify receipts, flag policy violations, and process approvals. Employees understand this. They accept it.
Over time, the agent is upgraded. It begins cross-referencing expense submission timing with employee location data from badge access systems to "validate" that in-person expenses correspond to actual office visits. It generates a "reliability score" for each employee based on submission accuracy patterns. That score is later discovered to correlate with protected characteristics — older employees and employees with disabilities, who may have different work patterns, score systematically lower.
The transmission principle under which employees submitted expense data was simple: you will use this to reimburse me. The principle the agent was eventually operating under was something else entirely: I will use this to profile your reliability as an employee. That is a contextual integrity violation. It may also be a violation of the Colorado Artificial Intelligence Act, which takes effect June 30, 2026.
Colorado SB 24-205 requires deployers of high-risk AI systems to take reasonable measures to avoid algorithmic discrimination, conduct impact assessments, and allow employees to appeal AI-influenced employment decisions. The expense agent scenario above almost certainly meets the definition of a high-risk system under that statute — and very few organizations currently have the governance structure to demonstrate compliance.
CI Parameter 5: Contextual Norms — What Does the Relationship Expect?
This is the heart of contextual integrity — and it is where agentic AI creates its most philosophically interesting, and practically dangerous, failure mode.
Contextual norms are the unwritten but deeply felt expectations that govern how information flows within a social sphere. In the employer-employee relationship, contextual norms have evolved over decades. Employees share performance data with their managers — not with C-suite executives they have never met. They disclose health information to HR in confidence — not to their entire team. They use work communication tools for professional purposes — with a reasonable expectation that casual Slack messages are not being fed into behavioral analytics engines.
Agentic AI systems do not know these norms. They were not trained to respect them. They were trained to complete tasks. And in completing tasks, they routinely violate the contextual expectations that define the employment relationship.
Scenario — Marcus, Revisited: The Norm He Thought Was Intact
Recall Marcus, the data analyst flagged by the HR agent. When he joined the organization, he signed an acceptable use policy. It disclosed that company systems might be monitored for security and compliance purposes. He understood that.
What he did not understand — because it was never disclosed — was that his email response latency, calendar density, and Slack activity patterns would be ingested by an AI agent, aggregated across 90 days, synthesized into an "engagement decline score," and transmitted to HR without any human manager making an independent judgment about his performance.
The contextual norm Marcus reasonably held: my productivity data is reviewed by my manager as part of an ongoing professional relationship, and any concerns are raised with me directly before being escalated.
The contextual norm the agent was operating under: all available behavioral signals are data inputs to be continuously scored and flagged without a threshold for human review.
Those two norms are incompatible. And the law is increasingly saying so. California's proposed SB 947 would prohibit employers from relying solely on automated systems for disciplinary or termination decisions — requiring a human, independent investigation to corroborate any AI output. That bill is a legislative response to exactly this contextual norm collapse.
The Regulatory Picture Is Moving Fast. Is Your Organization?
The regulatory response to agentic AI in the workplace is no longer theoretical. It is here, it is accelerating, and it is layered.
At the federal level, the Trump administration's Executive Order 14365 directed agencies to review and potentially preempt state AI laws. But as of this writing, the Commerce Department evaluation and the FTC policy statement, both due March 11, 2026, have not been published. The states are not waiting.
Illinois HB 3773 is already law. Texas RAIGA is in effect. Colorado SB 24-205 activates June 30, 2026 — less than twelve weeks from now. California is considering SB 947, which would mandate human review of any AI-driven disciplinary decision. Washington is debating HB 2144, which would require advance written notice before any AI monitoring tool is deployed for performance evaluation.
The OWASP Top 10 for Agentic Applications (2026) identifies the core technical vulnerabilities — prompt injection, excessive agency, inadequate logging, and cascading hallucinations — that directly map onto the contextual integrity failures described above. These are not edge cases. They are the baseline risk profile of every enterprise AI agent deployment.
What does all of this mean for a privacy professional sitting inside an organization that is already running agents, or about to? It means the window for proactive governance is closing.
What Compliance Looks Like When the Agent Is the Actor
Getting this right is genuinely hard, and anyone who tells you otherwise is selling something. But there are concrete starting points.
Treat every agent as a data processor with a documented purpose.
Before an agent touches employee data, someone must articulate — in writing — what data it can access, why, and under what transmission principles. This is essentially a data processing record for a non-human actor. It sounds bureaucratic. It is also legally required under several state frameworks now in force, and it operationalizes the transmission principle parameter in Nissenbaum's framework.
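In practice, that written articulation can be a structured record that must exist before the agent is issued credentials. A sketch, with hypothetical fields chosen to mirror Nissenbaum's parameters:

```python
from dataclasses import dataclass

# Hypothetical processing record for a non-human actor: no record, no credentials.
@dataclass
class AgentProcessingRecord:
    agent_id: str
    documented_purpose: str
    data_sources: list[str]              # what it can access
    lawful_basis: str                    # why
    transmission_principles: list[str]   # under what constraints
    reviewer: str                        # the human who signed off
    review_date: str

record = AgentProcessingRecord(
    agent_id="hr-scheduling-agent",
    documented_purpose="propose meeting windows from calendar availability",
    data_sources=["calendar free/busy"],
    lawful_basis="legitimate interests (documented assessment)",
    transmission_principles=["no inference generation", "no routing to HR"],
    reviewer="privacy office",
    review_date="2026-03-01",
)
```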
Build the notice obligation into the deployment checklist.
If an agent is making or contributing to employment-related decisions — performance, scheduling, discipline, compensation — employees must be told. Not buried in an acceptable use policy updated quietly at 11 p.m. on a Friday. Told, clearly, in plain language, before the agent is live. Illinois requires it now. Washington may require it soon. Even where it is not yet legally mandated, the contextual integrity argument says it is ethically required.
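Operationally, the notice obligation can be a hard gate in the deployment pipeline rather than a policy aspiration. A sketch, assuming a hypothetical deployment record with a notice_delivered flag recorded by HR:

```python
# Hypothetical deployment gate: an agent that touches employment decisions
# cannot go live until plain-language notice has been delivered and recorded.

def can_deploy(agent_record: dict) -> bool:
    return (agent_record.get("affects_employment_decisions") is False
            or agent_record.get("notice_delivered") is True)

assert can_deploy({"affects_employment_decisions": True,
                   "notice_delivered": True})
assert not can_deploy({"affects_employment_decisions": True,
                       "notice_delivered": False})
```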
Create a human-in-the-loop checkpoint for every consequential output.
An agent that identifies a "declining engagement" pattern should surface that finding to a manager for independent review — not route it directly to HR as an actionable alert. The human checkpoint is not a bureaucratic speed bump. It is the mechanism that preserves accountability, exercises contextual judgment, and provides the audit trail that regulators are increasingly demanding. California's SB 947 is trying to mandate exactly this structure by statute.
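Structurally, the checkpoint is a queue, not a filter: consequential findings stop and wait for a named human instead of flowing onward as alerts. A minimal sketch, with hypothetical finding types:

```python
# Hypothetical human-in-the-loop gate: consequential findings are queued for
# a named reviewer instead of being routed to HR as actionable alerts.

CONSEQUENTIAL = {"engagement_decline", "discipline_flag", "termination_risk"}
review_queue = []

def route_finding(finding: dict) -> str:
    if finding["type"] in CONSEQUENTIAL:
        review_queue.append(finding)     # held for independent manager review
        return "queued_for_human_review"
    return "auto_processed"              # routine outputs can flow

status = route_finding({"type": "engagement_decline", "employee": "M."})
print(status, len(review_queue))   # queued_for_human_review 1
```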
Conduct a contextual integrity audit before you call it deployed.
Map every data input to its original context. Ask, for each one: under what social and professional norms was this data shared? Does the agent's use of it respect those norms? If the employee who provided this data understood what the agent would do with it, would they consider it a betrayal of the relationship? This is not a standard privacy impact assessment question. It is a harder, more human question — and it is the one that will determine whether your agent creates trust or erodes it.
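One way to make that audit repeatable is to record, for every data input, the original context and an explicit answer to each question. A sketch, with hypothetical fields drawn from the Marcus scenario:

```python
from dataclasses import dataclass

# Hypothetical contextual-integrity audit entry: one row per data input.
@dataclass
class CIAuditEntry:
    data_input: str
    original_context: str            # norms under which the data was shared
    agent_use: str                   # what the agent actually does with it
    respects_norms: bool             # does that use honor the original context?
    employee_would_object: bool      # the harder, more human question

entry = CIAuditEntry(
    data_input="Slack activity patterns",
    original_context="informal team communication",
    agent_use="input to a 90-day engagement score routed to HR",
    respects_norms=False,
    employee_would_object=True,
)
```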
Treat Colorado's June 30 deadline as a compliance calendar event that cannot slip.
SB 24-205 requires risk management programs, impact assessments, employee notice, and appeal rights for high-risk AI system deployments. Penalties are civil and enforced by the state attorney general. If your organization operates in Colorado and uses AI in any employment decision, the question is not whether this applies to you. The question is whether you can demonstrate compliance on July 1.
Closing Thought: The Agent Doesn’t Know the Relationship. You Do.
Here is what keeps me thinking about this. The employment relationship is built on something an AI agent fundamentally cannot replicate: the accumulated understanding of context. A good manager knows that Marcus’s quieter weeks coincide with a school semester where he is coaching his daughter’s team. A thoughtful HR partner knows that Priya’s midday calendar blocks are a standing arrangement to care for an aging parent.
An agent knows none of this. It sees signals. It generates scores. It routes outputs. And it does all of this in environments where the five parameters of contextual integrity — sender, recipient, information subject, transmission principles, contextual norms — have been fundamentally altered by its presence.
The regulatory frameworks emerging across Illinois, Colorado, California, Washington, and the UK are, in their different ways, trying to force organizations to put human judgment back in. To restore the contextual check that the agent cannot perform itself.
That is not an anti-technology argument. It is a pro-governance argument. The agent is a powerful tool. But the relationship it operates within — the employment relationship, with all its trust, power asymmetry, and legal obligations — belongs to humans. Governing it accordingly is not optional. It is the work.
Disclaimer: Remote Work Privacy Insights is a newsletter that looks at privacy issues in the workplace using academic ideas. It's meant to educate and is not legal advice. For advice tailored to your company, talk to a qualified privacy or employment lawyer. The opinions shared are the author's and not those of any employer.
