Let me give you a scenario that I use when I'm teaching privacy compliance to HR leadership cohorts, because it lands differently than a statute citation or a penalty figure. It's the kind of thing that actually keeps compliance officers awake.

Your company operates distribution centers in four states. On a Tuesday morning, three employees in three of those states each raise a privacy concern about the same fingerprint time clock system. The technology is identical. The employer is identical. The business purpose is identical.

In Illinois, your HR team receives a demand letter from a plaintiff's attorney alleging that two years of daily fingerprint scans were collected without proper written consent or a publicly available retention policy — a textbook BIPA violation. The potential exposure, even after the 2024 amendment that capped damages at one violation per worker, is $1,000 per employee for negligent violations and $5,000 for reckless ones. With 200 Illinois employees and daily scan events over two years, the class action math is painful; I'll put numbers on it in a moment.

In California, an employee who is also a job applicant for an internal transfer has submitted a formal CCPA data access request. Under the California Consumer Privacy Act, she wants to know every category of biometric data collected since 2022, every third-party vendor it's been shared with, how long it's retained, and the business purpose for collection. Your privacy team has 45 days to respond. Your HR vendor contract — which everyone assumed was fine — has no CCPA-compliant data processing terms.

In Virginia, the third employee has no enforceable privacy rights at all. Virginia's Consumer Data Protection Act explicitly exempts employee data from its scope. He works the same shift, scans the same machine, and has essentially no recourse under Virginia law.

This is the compliance environment American employers are operating in right now. Same technology. Same company. Three entirely different legal outcomes. And the variance is only growing.
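
Here's that Illinois math as a back-of-the-envelope sketch. The per-violation figures come from BIPA's damages provision; the 250-workday year is an assumption of mine, purely for illustration.

```python
# Back-of-the-envelope BIPA exposure for the Illinois scenario.
# Statutory damages per 740 ILCS 14/20; the 250-workday year is assumed.

ILLINOIS_EMPLOYEES = 200
NEGLIGENT_PER_PERSON = 1_000   # negligent violation
RECKLESS_PER_PERSON = 5_000    # intentional or reckless violation

# Post-2024 amendment: damages accrue once per person per collection method.
post_low = ILLINOIS_EMPLOYEES * NEGLIGENT_PER_PERSON    # $200,000
post_high = ILLINOIS_EMPLOYEES * RECKLESS_PER_PERSON    # $1,000,000

# Pre-amendment accrual theory (Cothron v. White Castle): one violation
# per scan. Two years of daily scans, ~250 workdays per year (assumed).
scans_per_employee = 2 * 250
pre_low = ILLINOIS_EMPLOYEES * scans_per_employee * NEGLIGENT_PER_PERSON

print(f"Post-amendment exposure: ${post_low:,} to ${post_high:,}")
print(f"Pre-amendment theory, negligent only: ${pre_low:,}")  # $100,000,000
```

Even the $200,000 floor assumes every violation is merely negligent, and the pre-amendment per-scan figure shows why the 2024 cap mattered so much.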

How We Got Here — and Why It's Getting More Complex, Not Less

The fragmentary nature of U.S. privacy law isn't new, but what happened between 2024 and 2026 changed the calculus significantly for multi-state employers. Eight new states activated comprehensive privacy frameworks: Delaware, Iowa, Nebraska, New Hampshire, New Jersey, Minnesota, Tennessee, and Maryland. Combined with the ten states already enforcing privacy regulations, we now have 18 state frameworks — each with its own thresholds, exemptions, employee coverage rules, and enforcement mechanisms.

More importantly, what changed wasn't just the volume of laws. It was the enforcement posture.

In April 2025, eight state regulators — the CPPA and the attorneys general of California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon — formally announced the Consortium of Privacy Regulators, a bipartisan enforcement coalition built around a memorandum of understanding committing members to coordinated investigations, shared expertise, and joint enforcement actions. By October 2025, Minnesota and New Hampshire had joined, bringing the consortium to ten regulators across nine states. As California AG Rob Bonta put it in the announcement: "Data knows no borders — state and nationwide coordination is vital for protecting consumers' rights."

What that means practically is that a violation identified in one consortium state can now trigger investigations in others. A single complaint in one jurisdiction — and I mean a single complaint, as we'll get to — can surface compliance failures that regulators in the other consortium states then have the legal basis and the institutional infrastructure to pursue simultaneously. The era of treating each state's privacy law as a discrete, siloed compliance obligation is over.

The One Complaint That Should Rewrite Your Compliance Program

I want to spend a moment on the Tractor Supply case because I think it's the most important data point in the 2025 enforcement landscape for employers, and it hasn't received the attention it deserves.

On September 30, 2025, the California Privacy Protection Agency issued a $1.35 million administrative fine against Tractor Supply Company — the nation's largest rural lifestyle retailer, operating 2,500 stores, including 85 in California. It was the largest fine in CPPA history. It was also the first CPPA enforcement action specifically addressing job applicant and employee privacy rights.

The investigation began with a single consumer complaint from one person in Placerville, California.

Let that sink in for a moment. One complaint triggered an investigation that uncovered four distinct categories of CCPA violations: failure to provide compliant privacy notices to job applicants; failure to honor opt-out requests through a mechanism that technically existed but didn't actually work; failure to process the Global Privacy Control opt-out signal; and failure to include required CCPA provisions in contracts with third-party data vendors. The company had left its 2021 privacy policy untouched for years, updating it only after learning it was under investigation.
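
The GPC failure deserves a closer look, because it's the most mechanical of the four violations and the easiest to fix. Under the Global Privacy Control specification, a participating browser sends the HTTP request header Sec-GPC: 1 (scripts can read the same signal as navigator.globalPrivacyControl), and California's regulations require treating it as a valid opt-out of sale and sharing. Here's a minimal server-side sketch; everything beyond the header check is illustrative scaffolding, not any particular framework's API.

```python
# Minimal sketch of honoring the Global Privacy Control signal server-side.
# Per the GPC specification, participating browsers send "Sec-GPC: 1";
# "1" is the only value the spec defines as meaningful.

opted_out_users: set[str] = set()  # stand-in for a real opt-out store

def gpc_opt_out_requested(headers: dict[str, str]) -> bool:
    """True if the request carries a GPC opt-out signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

def handle_request(headers: dict[str, str], user_id: str) -> None:
    # California's regulations require treating the signal as a valid
    # opt-out of sale/sharing; it cannot just be logged and ignored.
    if gpc_opt_out_requested(headers):
        opted_out_users.add(user_id)

handle_request({"Sec-GPC": "1"}, user_id="applicant-123")
print(opted_out_users)  # {'applicant-123'}
```

The check is not hard to write. The failure mode in these enforcement actions is that nobody verified the deployed mechanism actually performed it.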

Critically, when Tractor Supply initially challenged the CPPA's investigative authority, arguing its enforcement powers only extended back to 2023 when the first regulations were finalized, the CPPA fought back. As part of the settlement, the company acknowledged the agency's authority to investigate violations that occurred before January 2023. That precedent matters enormously. It means that historical non-compliance — the privacy notice you didn't update in 2022, the vendor contract you haven't touched since 2020 — is within the reach of any investigation opened today.

The broader enforcement picture from 2025 reinforces the same message. American Honda paid $632,500 for, among other things, requiring consumers to provide more personal information than necessary to process their privacy requests. Todd Snyder paid $345,178 because its privacy request portal was misconfigured and didn't actually process opt-outs for 40 days — a vendor technology failure that the CPPA held the company responsible for, regardless. California's Attorney General separately settled with Healthline Media for $1.55 million over violations that included deficient third-party vendor contracts. The CPPA's enforcement division currently has hundreds of open investigations, with complaints arriving at roughly 150 per week.

The message from California regulators is now unmistakable: they are actively investigating employer compliance, they are taking job applicant privacy seriously, and a single aggrieved employee or applicant can set the whole process in motion.

Understanding the Fragmentation Through Contextual Integrity

I want to use this edition to do something a bit different — to analyze what's happening in the multi-state compliance environment through Helen Nissenbaum's framework of contextual integrity, which gives us a more precise analytical tool than the usual "patchwork of laws" framing. Because I think contextual integrity explains not just what is legally required in different states, but why employees experience geographic variation in their privacy rights as profoundly unfair — and why that unfairness is itself a trust problem for employers, separate from the legal risk.

Nissenbaum's framework rests on five interconnected principles for evaluating whether an information flow is appropriate:

Principle 1 — Context: Every information flow exists within a social context — healthcare, education, employment, commerce — that carries its own established norms about what information belongs and how it moves.

Principle 2 — Actors: Information flows involve senders (employees sharing their data), recipients (employers and vendors receiving it), and subjects (the employees the data is about). The legitimacy of a flow depends on who these actors are and what relationships exist between them.

Principle 3 — Attributes: The nature of the information itself matters. Biometric identifiers — permanent, biological, irreplaceable — carry fundamentally different privacy expectations than work schedule preferences or email addresses.

Principle 4 — Transmission Principles: Information should flow according to the norms appropriate to its original context. Health information shared in a doctor's office should flow within healthcare contexts. Biometric data collected for facility access should not migrate to performance management systems.

Principle 5 — Information Norms: Each context has developed, over time, a set of expectations about appropriate information flows. Violations of those norms — even when technically legal — erode trust and signal that the relationship has changed in ways the subject didn't consent to.

Now watch what happens when we apply these five principles to the multi-state compliance problems employers actually face.

Three Stories That Illustrate the Compliance Fault Lines

Story One: The Fingerprint Time Clock Across Four States

Consider our opening scenario again, this time through the contextual integrity lens.

The context is identical across all four distribution centers: an employment relationship where the employer has a legitimate interest in tracking work hours and securing facility access. The actors are the same: the employer as recipient, the employees as senders and subjects. The attributes being collected — fingerprints, facial geometry — are permanent biological identifiers that can never be changed if compromised. The transmission principle that employees would reasonably expect: biometric data collected for authentication stays within authentication systems, is retained only as long as the employment relationship requires, and is never shared with third parties for other purposes.

The information norm violation isn't geography-specific. It's universal. When any employer collects permanent biological identifiers without clearly disclosing the purpose, the retention period, and the third-party sharing arrangements, they've violated the reasonable expectations of the employment context, regardless of which state the employee works in.

What the law does is assign consequences to those violations differently by state. Illinois BIPA gives employees a private right of action with statutory damages. California CPRA gives employees data subject rights and triggers a regulatory investigation. Texas CUBI prohibits capture without consent but limits enforcement to the Attorney General. Virginia gives employees nothing enforceable at all.

The trust problem this creates is separate from the legal problem. When employees in the same company, doing the same job, using the same technology, discover that their privacy rights are determined entirely by their zip code, the resulting sense of inequity is real and corrosive. "We only provide these protections where we're legally required to" is a position that signals to your workforce exactly how much weight their privacy interests carry in your organization's decision-making.

What compliance requires: A properly structured biometric data program, regardless of geography, means written notice before any collection begins; explicit, informed consent obtained independently of employment conditions; a publicly available retention and destruction schedule; contractual restrictions on vendor data use; and a strict prohibition on connecting authentication data to performance monitoring systems. The BIPA statutory framework and GDPR's proportionality test together give you the substantive standard that satisfies all jurisdictions simultaneously. Apply them everywhere, not just where litigation risk is highest.
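
To make the documentation burden concrete, here's a minimal sketch of what auditing those consent records could look like. The record fields and the helper are hypothetical; the rules they encode, signed written consent on file before the first scan and a publicly available retention schedule, are the ones just described.

```python
# Illustrative consent-record check for a biometric program. The field
# names are hypothetical; the substantive rules track the BIPA-style
# requirements described above.
from dataclasses import dataclass
from datetime import date

@dataclass
class BiometricConsentRecord:
    employee_id: str
    written_consent_date: date | None   # None = no signed consent on file
    first_scan_date: date
    retention_policy_url: str | None    # publicly available schedule

def consent_gaps(record: BiometricConsentRecord) -> list[str]:
    gaps = []
    if record.written_consent_date is None:
        gaps.append("no written consent on file")
    elif record.written_consent_date > record.first_scan_date:
        gaps.append("consent obtained after first scan")
    if not record.retention_policy_url:
        gaps.append("no public retention/destruction schedule")
    return gaps

rec = BiometricConsentRecord("E-1042", date(2024, 3, 1), date(2023, 11, 6), None)
print(consent_gaps(rec))
# ['consent obtained after first scan', 'no public retention/destruction schedule']
```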

Story Two: The AI Hiring Tool and Maryland's Algorithmic Gauntlet

An HR director at a mid-sized financial services company in Baltimore implements an AI-powered resume screening tool to manage high application volume. The system is marketed as "objective." The vendor's sales deck mentions bias testing. Nobody on the team asks for the documentation.

Let's run the contextual integrity analysis.

Context: Employment hiring — a high-stakes, consequential decision context where applicants share professional qualifications, expecting them to be evaluated against stated job criteria.

Actors: Applicants as senders and subjects, the employer as recipient, the AI vendor as an additional recipient whose role is not disclosed.

Attributes: Resume data, employment history, educational credentials — plus whatever the AI system is inferring from those inputs, which may include proxies for protected characteristics never disclosed to applicants.

Transmission principle: Applicants in a hiring context expect their materials to be evaluated by humans against stated criteria. They do not expect their data to be processed by an opaque algorithm whose methodology they cannot review, contest, or even be informed about.

Information norm violation: Consequential employment decisions made through undisclosed algorithmic processes violate the reasonable expectations of what a job application is and how it works.

Maryland's Online Data Privacy Act (MODPA) — effective October 1, 2025, with enforcement beginning April 1, 2026 — is the statute that most directly addresses this violation, and it does something no other state law had previously done: requiring a documented data protection assessment for each algorithm used in processing activities that present heightened risk. Not for "AI systems" in the abstract — for each specific algorithm. A resume screening tool with three scoring components requires three assessments. A scheduling system that factors in performance history requires its own assessment.

MODPA's data minimization standard is also meaningfully stricter than other state frameworks. General personal data must be "reasonably necessary and proportionate" to deliver the requested service. Sensitive personal data — which includes biometric identifiers, health data, precise geolocation, and information revealing race, ethnicity, or sexual orientation — must be "strictly necessary," a standard that explicitly excludes "useful," "convenient," or "disclosed in your privacy notice" as justifications. Maryland also outright prohibits the sale of sensitive data regardless of consent — a prohibition with no override.

For employers using AI in hiring decisions affecting Maryland residents, the compliance obligations are specific: document the purpose and methodology of every algorithm used; assess each for potential algorithmic discrimination before deployment; provide applicants with notice that AI is being used; offer a right to appeal adverse decisions with human review; and ensure vendor contracts require the documentation necessary to complete your own assessment. The Colorado AI Act effective June 30, 2026 adds parallel requirements for Colorado employees. California's ADMT regulations effective January 1, 2026 add opt-out rights for California applicants.

The contextual integrity principle here is precise: algorithmic employment decisions that employees and applicants cannot see, understand, or contest violate the transmission principle that governs the hiring context. The law is now codifying what the framework predicted.

What compliance requires: Pre-deployment bias audits conducted by someone other than the vendor selling you the tool. Applicant notice at the point of collection that AI will be used in the evaluation. An actual human review pathway for adverse decisions. Vendor contracts requiring algorithmic impact documentation, not just a sales-deck assurance that the system is "fair." And, for the EEOC-related exposure that still exists regardless of what federal guidance has been withdrawn: ongoing adverse impact monitoring using the four-fifths rule of thumb as a baseline.
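
Since the four-fifths rule comes up whenever adverse impact monitoring does, here's what the baseline computation looks like. The applicant and selection counts are hypothetical; the 0.8 threshold is the Uniform Guidelines' rule of thumb, a screening heuristic rather than a legal safe harbor.

```python
# Four-fifths rule of thumb: flag potential adverse impact when any
# group's selection rate falls below 80% of the highest group's rate.
# All counts below are hypothetical, for illustration only.

def selection_rates(applicants: dict[str, int], selected: dict[str, int]) -> dict[str, float]:
    return {g: selected[g] / applicants[g] for g in applicants}

def four_fifths_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Ratio of each group's rate to the highest rate, for ratios below the threshold."""
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items() if r / top < threshold}

applicants = {"group_a": 400, "group_b": 300}
selected   = {"group_a": 120, "group_b": 54}   # 30% vs. 18% selection rates
rates = selection_rates(applicants, selected)
print(four_fifths_flags(rates))  # {'group_b': 0.6} -- 18%/30%, below 0.8
```

Running this monthly against live hiring data is a far stronger position in front of a regulator than a vendor's one-time assurance.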

Story Three: The Wellness Program That Became a Privacy Liability

A hospital system in a mid-Atlantic state rolls out a wellness program offering wearable fitness trackers and meaningful insurance premium discounts — up to 25% — for employees who meet daily step and heart rate targets. A genetic testing component is added the following year, marketed as personalized health optimization. Data flows from the wearables to a third-party wellness platform, which aggregates it and shares summary reports with the insurer.

The contextual integrity analysis maps the problem with precision.

Context: Employment wellness programs exist at the intersection of the employment relationship and personal health — two contexts with very different, and very stringent, information norms. Health information shared with a physician flows within healthcare contexts under strict confidentiality expectations. Biometric and physiological data shared with an employer flows within a power-imbalanced relationship where genuine voluntariness is structurally compromised.

Actors: The employee as sender and subject, the employer as recipient, the wellness platform as an undisclosed secondary recipient, the insurer as a third recipient whose ultimate use of the data may include actuarial decisions about the same employee.

Attributes: Heart rate data, step counts, sleep patterns, and genetic markers — some of the most sensitive personal information that exists.

Transmission principle: Health information shared in an ostensibly voluntary wellness program should not flow to parties who have the ability to make consequential employment or insurance decisions based on it.

The information norm violations here are layered. The financial pressure created by a 25% premium discount undermines the voluntariness that both GINA and ADA wellness rules require. The genetic testing component crosses a bright legal line: GINA prohibits employers from requesting or requiring genetic information from employees and from using it in employment decisions, full stop. The data sharing arrangement with the insurer raises HIPAA questions if the wellness platform is a business associate. And for California employees, the CPRA's sensitive personal information provisions require specific notice and consent for health data collection, with employee rights to limit use.

What makes this scenario particularly common is that wellness programs are often designed by benefits teams without meaningful privacy counsel involvement. The contextual integrity violation — treating health data as an extension of employment data — happens quietly, without anyone in the room flagging it as the kind of information flow that carries its own distinct legal and ethical obligations.

What compliance requires: A genuine voluntariness analysis before any premium incentive is set, benchmarked against the EEOC's 30% of employee-only coverage maximum for ADA compliance. Complete separation of genetic information from any system accessible to HR or management. A data processing agreement with the wellness vendor that specifies exactly what data can flow to the insurer and in what form. For California employees, a specific sensitive personal information notice at enrollment with a meaningful opt-out pathway. And an honest internal question: is the business purpose here employee wellbeing, or cost management through actuarial data collection?
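
To see why the voluntariness analysis has to come first, run the benchmark arithmetic. The 30% cap is the one cited above; the premium figures are invented for illustration.

```python
# Hypothetical incentive math for the wellness scenario. The 30% benchmark
# is the employee-only-coverage cap cited above; both premiums are made up.

employee_only_annual_premium = 7_000
family_annual_premium = 20_000

benchmark_max = 0.30 * employee_only_annual_premium   # $2,100
offered_discount = 0.25 * family_annual_premium       # $5,000 if the 25%
                                                      # discount applies to
                                                      # family coverage

print(f"Benchmark maximum incentive: ${benchmark_max:,.0f}")
print(f"Offered discount on a family premium: ${offered_discount:,.0f}")
# A discount pegged to the full or family premium can far exceed a cap
# measured against employee-only coverage, which is exactly the
# voluntariness problem described above.
```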

The Compliance Strategy That Actually Holds Up

There's a version of multi-state compliance strategy that involves building separate programs for each jurisdiction — custom consent forms for Illinois, separate disclosure documents for California, Colorado-specific biometric policies, Maryland algorithm assessment workflows, and so on. In theory, this is the most legally precise approach. In practice, it is operationally unmanageable, extraordinarily prone to error when employees relocate, and increasingly fragile as the Consortium of Privacy Regulators coordinates cross-state investigations that blur jurisdictional boundaries anyway.

The approach that privacy-mature organizations are converging on is what compliance professionals call the high-watermark standard: apply the strictest applicable requirements from across the relevant jurisdictions to your entire workforce, regardless of which state each employee is in.

What that looks like in practice: California-standard transparency for all employees — not just California residents — meaning detailed privacy notices, disclosed business purposes, disclosed vendor sharing, and active employee data rights. Colorado-standard consent for any biometric data collection — explicit written consent, a documented purpose limited to legitimate authentication or safety uses, and a strict firewall between authentication data and performance systems. Maryland-standard algorithmic assessments for any AI tool used in employment decisions affecting any employee — documented impact assessments for each algorithm, pre-deployment bias testing, and the notice and appeal infrastructure that MODPA requires.
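
One way to make the high-watermark idea concrete is to represent each jurisdiction's obligations as a set of controls and apply the union of all of them everywhere. The control names and state mappings below are deliberately simplified illustrations, not a legal inventory.

```python
# Conceptual sketch of the high-watermark approach: the single program
# applied to every employee is the union of each jurisdiction's controls.
# Control names and state mappings are illustrative simplifications.

state_controls: dict[str, set[str]] = {
    "IL": {"biometric_written_consent", "public_retention_schedule"},
    "CA": {"applicant_privacy_notice", "data_rights_portal",
           "gpc_opt_out", "vendor_ccpa_terms"},
    "CO": {"biometric_written_consent", "ai_impact_assessment"},
    "MD": {"per_algorithm_assessment", "strict_minimization",
           "ai_notice_and_appeal"},
    "VA": set(),  # employee data exempt, but the high watermark still applies
}

def high_watermark(controls_by_state: dict[str, set[str]]) -> set[str]:
    """Union of every jurisdiction's controls."""
    program: set[str] = set()
    for controls in controls_by_state.values():
        program |= controls
    return program

print(sorted(high_watermark(state_controls)))
```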

The contextual integrity logic supports this approach, independent of the legal argument. When your entire workforce operates under the same privacy framework, you're creating consistent information norms that match employee expectations. You're not creating a two-tiered system where the value of someone's privacy rights depends on their assigned distribution center. And you're building the documentation and governance infrastructure that regulators in any jurisdiction will want to see, rather than scrambling to produce jurisdiction-specific records after an investigation begins.

The Vendor Contract Problem No One Is Talking About Enough

I want to flag something that the Tractor Supply, Honda, and Healthline enforcement actions all share as a contributing factor, because it keeps appearing in enforcement outcomes, and I don't think most employers have fully absorbed the implications.

In each of those cases, a significant part of the compliance failure involved third-party vendor contracts that didn't include the privacy provisions required by the applicable law. In Tractor Supply's case, the CPPA found that the company had shared personal information with advertising partners and data vendors without contracts limiting how those parties could use the data or requiring them to process opt-out signals. The CPPA's enforcement head said it plainly in the Todd Snyder action: "Using a consent management platform doesn't get you off the hook for compliance."

This matters for employers in a specific way. Your HR technology stack — your ATS, your performance management platform, your payroll processor, your wellness vendor, your background screening service — all receive employee personal information. Under California CPRA, contracts with service providers and contractors must include specific provisions: a description of the processing activity, the business purpose, prohibitions on using the data for any purpose outside the stated one, and requirements to assist with employee rights requests. Under GDPR, data processing agreements with vendors are a legal requirement that specify controller-processor responsibilities. Under the Colorado AI Act, you need documentation from AI vendors sufficient to complete your own impact assessments.

Most organizations' vendor contracts were not designed with these requirements in mind. Many are years old and predate current legal obligations. A vendor audit — reviewing each contract that involves employee data against current applicable law — is one of the highest-leverage compliance activities available right now, and one of the most commonly deferred.
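
A vendor audit lends itself to the same structured treatment. In the sketch below, the four clause categories track the CPRA contract provisions listed above; the vendor records themselves are invented.

```python
# Sketch of a vendor-contract audit pass: check each employee-data vendor
# contract for the clause categories the CPRA requires of service-provider
# contracts. Vendor records here are invented for illustration.

REQUIRED_CLAUSES = {
    "processing_description",    # what processing the vendor performs
    "business_purpose",          # why the data is shared
    "purpose_limitation",        # no use outside the stated purpose
    "rights_request_assistance", # help honoring employee rights requests
}

vendors = {
    "ats_vendor":      {"processing_description", "business_purpose"},
    "wellness_vendor": {"business_purpose"},
    "payroll_vendor":  set(REQUIRED_CLAUSES),  # fully papered
}

for name, clauses in vendors.items():
    missing = REQUIRED_CLAUSES - clauses
    status = "OK" if not missing else f"missing: {sorted(missing)}"
    print(f"{name}: {status}")
```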

What This Week Should Actually Look Like

Rather than another implementation checklist, I want to offer three specific questions that I think every multi-state employer should be able to answer right now — not in six months after a working group finishes a report, but this week.

Can you locate and produce your CCPA-compliant job applicant privacy notice? Not your general privacy policy. A specific, current notice designed for job applicants that discloses what personal information is collected at application, the business purpose, the categories of third-party vendors it's shared with, and how to exercise CCPA rights. If you receive applications from California residents — for any position, anywhere — this notice is legally required. The Tractor Supply fine was triggered by a complaint from a job applicant. One complaint.

Do you have written consent documentation for every biometric identifier currently being collected from employees in Illinois? This means a signed consent form for each employee — not a checkbox in an onboarding portal, but documented written consent obtained before the first biometric scan — and a publicly available retention and destruction policy. If you can't produce these records, your BIPA exposure is active and ongoing.

For any AI tool used in hiring, performance evaluation, or scheduling decisions, can you produce the vendor's bias testing methodology and any disparate impact analysis conducted before deployment? If the answer is "the vendor told us it was fair," that is not a compliance answer. It is not a defense under EEOC disparate impact doctrine. It is not a defense under Colorado's AI Act. And it is not a defense under MODPA's algorithmic assessment requirement.

If you can answer all three questions confidently and with documentation in hand, you're ahead of most organizations I see. If you can't, those are your three immediate priorities — before the consortium regulators, before the plaintiff's attorney, and before the next annual compliance cycle puts them back on a list.

The Larger Point About What This Compliance Environment Is Teaching Us

The state-by-state divergence in privacy law is frustrating for employers, and I understand that frustration. But I think it's worth stepping back and asking what the regulatory convergence we're seeing actually represents.

The Consortium of Privacy Regulators is bipartisan. It spans states with very different political cultures — California, Indiana, Oregon, New Jersey, Colorado, and Connecticut. What they share is not a political agenda. What they share is a recognition that the collection of employee and consumer data has grown so pervasive, so opaque, and so consequential that existing legal frameworks need structural updates.

The contextual integrity framework predicts exactly the pattern we're seeing in enforcement. The violations drawing the largest fines and the most regulatory attention are all cases where information collected in one context — employment authentication, job application, consumer purchase — was flowing in ways that violated the reasonable expectations of the people whose data it was. Fingerprint data shared with third parties without disclosure. Job applicant data used for tracking purposes without notice. Algorithm-generated scores producing consequential employment decisions without transparency or appeal rights.

These aren't technical compliance failures. They're failures to honor the informational norms of the relationships in which the data was collected. The law is now enforcing those norms through financial consequences significant enough that compliance can no longer be deferred.

The organizations that understand this — that treat the legal requirements not as obstacles to be navigated but as codifications of the trust their workforce relationships require — are the ones building programs that will hold up in the next enforcement cycle and the one after that.

Everyone else is just waiting for their version of the Placerville complaint.

If this edition surfaces questions specific to your compliance situation — multi-state employee population, AI tools you're evaluating, vendor contract review — reply directly. These are exactly the conversations worth having before an investigation opens.

Disclaimer: Remote Work Privacy Insights is a newsletter that looks at privacy issues in the workplace using academic ideas. It's meant to educate and is not legal advice. For advice tailored to your company, talk to a qualified privacy or employment lawyer. The opinions shared are the author's and not those of any employer.
