Data Governance & Management March 17, 2026 · 20 min read

How to Build a Privacy Program in the Age of AI

A practitioner's framework for building a privacy program that treats AI data as a first-class concern. Covers Data Classification for training data, retention schedules for ML pipelines, consent architecture, third-party transparency, cross-border transfers, EU AI Act Article 10, NIST AI RMF, privacy-enhancing technologies, and governance operating models.

By Vikas Pratap Singh
#data-privacy #data-governance #ai-governance #eu-ai-act #gdpr #privacy-engineering #data-classification #machine-learning


After Two Teardowns, a Framework

After tearing down Netflix’s and Apple’s privacy policies, one question kept surfacing: what would a privacy program built from scratch in 2026 actually look like?

The Netflix teardown exposed the gaps. “As long as necessary” retention language. Generic third-party categories that a Dutch DPA investigation found insufficient. A 23.7 readability score that turns a privacy policy into a compliance artifact rather than a communication tool. The Apple teardown showed the opposite: privacy as a product differentiator, on-device processing, and differential privacy techniques that let Apple train AI without collecting personal data.

Both analyses raised a harder question than “who does privacy better?” The harder question is: if you are the person who has to build a privacy program today, what do you actually build?

The answer in 2026 is fundamentally different from the answer five years ago. The EU AI Act takes effect for high-risk AI systems in August 2026. The NIST AI Risk Management Framework has established Data Governance as a cross-cutting function. The EDPB ruled in December 2024 that AI models trained on personal data cannot automatically be considered anonymous. California’s AB 2013 now requires training data transparency disclosures.

A privacy program that does not account for AI data from day one is already outdated. Here is a framework for building one that does.

The framework has eight components across four layers. The diagram below shows how they connect: foundations inform controls, controls feed compliance, and governance ties everything together.

Privacy program framework showing eight components across four layers: foundation, control, compliance, and governance

1. Data Classification: Add AI as a First-Class Category

Most enterprise Data Classification frameworks use four tiers: Public, Internal, Confidential, Restricted. This worked when the primary concern was protecting customer PII and financial records. It breaks down the moment an organization starts building or deploying AI.

Training data does not fit cleanly into any traditional tier. A dataset used to train a recommendation model might contain aggregated behavioral patterns (Internal), personally identifiable interaction logs (Confidential), and biometric signals (Restricted), all in the same pipeline. Classifying at the dataset level misses the point. You need to classify at the data element level, and you need categories that account for AI-specific risks.

What to build. Extend your classification taxonomy with four AI-specific categories:

| Category | Description | Examples | Handling Requirements |
| --- | --- | --- | --- |
| Training Data | Data used to train or fine-tune ML models | User interaction logs, labeled datasets, web-scraped corpora | Provenance documentation, consent verification, retention tied to model lifecycle |
| Model Artifacts | Trained model weights, embeddings, parameters | Neural network weights, vector embeddings, feature stores | May contain encoded personal data; subject to deletion/unlearning obligations |
| Inference Data | Inputs and outputs of deployed models | Prompts, completions, predictions, confidence scores | Logging requirements, automated decision-making protections (GDPR Article 22) |
| Synthetic Data | AI-generated data used for training or testing | Generated tabular data, augmented images, simulated user profiles | Validation against source data bias, documentation of generation method |
This is not theoretical. California’s AB 2013, effective January 1, 2026, requires developers of generative AI systems to disclose whether training datasets include copyrighted material, personal information, or synthetic data. If your classification framework does not distinguish these categories, you cannot comply with the disclosure requirements.

What “good” looks like. Every dataset entering an ML pipeline has a classification label. Every model card documents the classification of its training data. Your Data Catalog treats training data, model artifacts, and inference logs as searchable, governable assets with the same rigor as your customer database.

2. Retention Schedules That Account for ML Pipelines

If your retention policy says “as long as necessary,” you do not have a retention policy. Netflix’s privacy policy uses exactly this language, and the Dutch DPA cited vague retention periods as part of its EUR 4.75M fine.

Traditional retention schedules are built around regulatory requirements: seven years for financial records, three years for tax documents, specific periods defined by sector regulations. ML pipelines introduce three categories that most retention schedules ignore entirely.

Training Data Retention

How long do you keep the data used to train a model? If you delete the source data but the model weights retain learned patterns from that data, have you actually deleted it? The EDPB’s December 2024 opinion addresses this directly: AI models trained with personal data cannot in all cases be considered anonymous, and anonymization must be assessed case by case. This means retaining model weights after deleting training data is not a clean separation; regulators may still treat the model as containing personal data.

Model Version Lifecycle

When you retrain a model, what happens to the previous version? If a user exercises their right to erasure under GDPR Article 17, does that obligation extend to every model version trained on their data? The answer is increasingly yes, and the infrastructure to support this is machine unlearning: selective removal of a data subject’s influence from trained models without full retraining.

Inference Log Management

If your model makes a prediction about a user (credit scoring, content recommendation, ad targeting), how long do you keep the input-output pair? GDPR Article 22 gives users the right not to be subject to decisions based solely on automated processing. To honor that right, you need to keep inference logs long enough to explain decisions. But keeping them indefinitely creates its own privacy risk.

What to build. A retention schedule that covers the full ML lifecycle:

| Data Type | Recommended Retention | Rationale |
| --- | --- | --- |
| Raw training data (personal) | Delete after model training + validation period | Minimize exposure; retain only what is needed for reproducibility |
| Aggregated/anonymized training data | Retain per standard data retention policy | Lower risk, but validate anonymization per EDPB guidance |
| Model weights (current version) | Retain while model is in production | Operational necessity |
| Model weights (previous versions) | 90 days post-replacement, then delete | Rollback window; reduces erasure obligation surface |
| Inference logs (with personal data) | 30-90 days, depending on decision significance | Balances explainability with minimization |
| Synthetic training data | Retain per standard policy | Lower risk if properly validated |

What “good” looks like. Your retention policy is specific enough that an engineer reading it knows exactly when to delete training data, how to handle model version retirement, and how long inference logs persist. No “as long as necessary.” No ambiguity.
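A schedule this specific translates naturally into machine-checkable rules. A minimal sketch; the keys and day counts below are illustrative defaults drawn from the schedule above, not prescriptions:

```python
from datetime import date, timedelta

# Retention periods expressed as machine-checkable rules. The names and
# periods are illustrative; tune them to your own documented policy.
RETENTION = {
    "raw_training_data": timedelta(days=30),       # delete after training + validation
    "model_weights_previous": timedelta(days=90),  # rollback window post-replacement
    "inference_logs_personal": timedelta(days=90), # upper end of the 30-90 day band
}

def deletion_due(data_type: str, created: date, today: date) -> bool:
    """True once the retention period for this data type has elapsed."""
    return today - created >= RETENTION[data_type]

print(deletion_due("inference_logs_personal", date(2026, 1, 1), date(2026, 4, 15)))  # True
```

An engineer can wire a check like this into a scheduled job; "as long as necessary" cannot be wired into anything.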

3. Consent Architecture: Separate Service Consent from AI Training Consent

The core problem: AI severs the link between data collection and data use. The consent model most organizations use today was designed for a world where data collection and data use happened in the same context. You visit a website, it collects cookies, it shows you ads. The consent flow is straightforward because the use case is immediate and visible.

AI breaks this model. Data collected for one purpose (providing a streaming service) gets repurposed for a different purpose (training a recommendation model, targeting ads, selling audience segments to third parties). The gap between collection consent and downstream use is where most privacy violations occur.

The Italian Data Protection Authority fined OpenAI EUR 15 million in December 2024 for processing personal data to train ChatGPT without an adequate legal basis and without sufficient transparency. The fine confirmed what privacy practitioners had been warning about: blanket consent through general terms of service is insufficient for ML training.

What to build. A layered consent architecture with three distinct tiers:

Tier 1: Service consent. Covers data processing necessary to deliver the core product. This is typically the easiest to justify under GDPR’s “contractual necessity” basis. If you run a streaming service, processing viewing data to play content and maintain the account falls here.

Tier 2: Improvement consent. Covers using data to improve the product through analytics, A/B testing, and model training where the output directly benefits the user. This is where most recommendation systems sit. Legitimate interest may apply, but only with a documented balancing test and a genuine opt-out mechanism.

Tier 3: AI training consent. Covers using personal data to train models whose outputs extend beyond the user’s direct service experience. Ad targeting models, audience segmentation, third-party data sharing, and foundation model training all require explicit, informed, specific consent. This is where Netflix’s ad-tier data practices and OpenAI’s training data practices both face regulatory pressure.

The CNIL (France’s data protection authority) has published specific guidance on this: consent for AI training must be freely given, specific, informed, and unambiguous, with an explicit affirmative action from users. Gathering consent is often impossible in practice for web-scraped or open-source data, which means other legal bases (legitimate interest, with appropriate safeguards) become the fallback.

The visual below shows the relationship between the three tiers. The key design principle: withdrawing from a higher tier never breaks a lower one. A user who opts out of AI training still gets the full product.

Three-tier consent architecture showing Service, Improvement, and AI Training consent tiers with escalating requirements

What “good” looks like. A user can see exactly which tiers of consent they have granted. Tier 3 consent is never pre-checked. Withdrawing consent from Tier 3 does not break the core service (Tier 1). Your consent management platform tracks which data was collected under which tier, so when a user withdraws AI training consent, you can identify and remove their data from training pipelines without affecting their service experience.
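The tier-independence rule (withdrawing a higher tier never touches a lower one) is easy to encode. A sketch with hypothetical class names; a real consent management platform would also record timestamps and which data was collected under each tier:

```python
from enum import IntEnum

class Tier(IntEnum):
    SERVICE = 1      # contractual necessity
    IMPROVEMENT = 2  # legitimate interest + genuine opt-out
    AI_TRAINING = 3  # explicit, informed, specific consent; never pre-checked

class ConsentRecord:
    """Tracks which tiers a user granted; withdrawal never cascades downward."""
    def __init__(self) -> None:
        # Tier 1 follows from using the service; higher tiers default to off.
        self.granted = {Tier.SERVICE: True, Tier.IMPROVEMENT: False, Tier.AI_TRAINING: False}

    def grant(self, tier: Tier) -> None:
        self.granted[tier] = True

    def withdraw(self, tier: Tier) -> None:
        if tier == Tier.SERVICE:
            raise ValueError("Withdrawing service consent means account deletion, not a flag.")
        self.granted[tier] = False  # higher-tier withdrawal leaves lower tiers intact

user = ConsentRecord()
user.grant(Tier.AI_TRAINING)
user.withdraw(Tier.AI_TRAINING)
print(user.granted[Tier.SERVICE])  # True: core service unaffected
```

The design choice worth noting: `AI_TRAINING` defaults to `False`, which is what "never pre-checked" means in code.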

4. Third-Party and Sub-Processor Transparency

Netflix’s privacy policy shares data with “advertising companies” and “service providers.” The Dutch DPA found this insufficient under GDPR transparency requirements, contributing to the EUR 4.75M fine. The specific failures: not clearly explaining which data is shared with which third parties, not disclosing the purposes for each sharing relationship, and not explaining safeguards for international transfers.

The problem is worse in AI. An organization deploying AI might use a foundation model provider (OpenAI, Anthropic, Google), a cloud infrastructure provider (AWS, Azure, GCP), a data labeling service, a vector database provider, and a monitoring platform. Each one touches personal data at some point in the pipeline. If your privacy policy describes these as “technology partners,” you have the same gap Netflix has.

What to build. A sub-processor registry that is:

  • Public or easily accessible. Not buried in a legal document. A dedicated page listing every third party that processes personal data, updated within 30 days of any change.
  • Purpose-specific. For each sub-processor, state what data they receive, why, and what legal basis applies.
  • AI-pipeline aware. Include model providers, annotation services, vector databases, and any service that processes prompts, completions, or training data. These are sub-processors under GDPR whether or not your legal team has categorized them that way.

Clearview AI accumulated approximately EUR 100 million in GDPR fines across multiple EU jurisdictions for scraping billions of facial images without consent or transparency. The Dutch DPA alone imposed EUR 30.5 million and considered holding executives personally liable. This is the extreme case, but the principle applies at every scale: if you cannot name the third parties processing personal data in your AI pipeline, you have a transparency gap.

What “good” looks like. A subscriber, customer, or user can visit a single page on your site and see every third party that touches their data, what data they receive, and why. When you add a new AI model provider or switch annotation services, the registry updates within 30 days. Your Data Protection Impact Assessment for any new AI feature includes a sub-processor mapping as a required artifact.
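A sub-processor registry is, at bottom, a small structured dataset. A sketch with a hypothetical provider name and field set; the design point is that every field is mandatory, so a vague "technology partner" entry cannot be constructed:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SubProcessor:
    """One row of the public registry; no field is optional."""
    name: str
    role: str                 # e.g. foundation model provider, annotation service
    data_received: list[str]  # concrete data categories, not "usage data"
    purpose: str
    legal_basis: str

registry = [
    SubProcessor(
        name="ExampleModelCo",  # hypothetical provider
        role="foundation model provider",
        data_received=["prompts", "completions"],
        purpose="inference for customer-facing chat feature",
        legal_basis="DPA + SCCs",
    ),
]

# Publishing the registry page becomes a serialization step, not a legal project.
print(json.dumps([asdict(s) for s in registry], indent=2))
```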

5. Cross-Border Transfer Documentation

Netflix operates in over 190 countries but does not specify in its privacy policy whether it is certified under the EU-US Data Privacy Framework, which countries process subscriber data, or what supplementary measures are in place. The Dutch DPA cited this as a violation.

AI makes cross-border complexity worse because it introduces three distinct dimensions of data residency that most transfer documentation ignores:

Training data residency. Where was the data that trained your model stored and processed? If you use a foundation model from a US-based provider, the training happened on US infrastructure using globally sourced data. If your fine-tuning data contains EU personal data, GDPR’s transfer rules apply to that fine-tuning step.

Inference residency. Where does computation happen when a user sends a query? A prompt sent to an API endpoint may process in Virginia or Amsterdam depending on routing. If the prompt contains personal data from an EU user, the processing location matters under GDPR.

Model artifact residency. Where are the trained weights stored and served? If a model trained on EU data is deployed on infrastructure in a non-adequate country, the weights themselves may constitute a transfer of personal data, per the EDPB’s 2024 opinion.

Only 36% of organizations have full visibility into where their data is processed, according to InCountry’s 2025 survey. For AI workloads, that number is likely lower.

What to build. A transfer mapping document for every AI system that specifies:

| Transfer Dimension | What to Document | Example |
| --- | --- | --- |
| Training data origin | Countries where data was collected | EU (GDPR), US (CCPA), India (DPDP Act) |
| Training infrastructure | Where model training compute runs | US-East (AWS), EU-West (Azure) |
| Fine-tuning data flow | Where custom training data is processed | Sent from EU to US provider via DPF certification |
| Inference routing | Where production queries are processed | EU users routed to EU endpoint; US users to US endpoint |
| Model artifact storage | Where weights are stored and served | Multi-region with EU primary |
| Legal mechanism | Transfer basis per jurisdiction pair | EU-US: Data Privacy Framework. EU-UK: Adequacy decision. EU-India: SCCs |

What “good” looks like. An auditor or regulator can ask “where does personal data from an EU user go when they interact with your AI system?” and you can answer with specifics: which regions, which legal mechanisms, which supplementary measures. No vague references to “contractual agreements and technical protections.”
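The transfer mapping can live as a structured record per AI system rather than prose buried in a legal document. A sketch for a hypothetical system; the field names mirror the six dimensions in the table above:

```python
from dataclasses import dataclass

@dataclass
class TransferMapping:
    """One record per AI system, one field per transfer dimension."""
    system: str
    training_data_origin: list[str]
    training_infrastructure: str
    fine_tuning_flow: str
    inference_routing: dict[str, str]  # user region -> processing region
    model_artifact_storage: str
    legal_mechanism: dict[str, str]    # jurisdiction pair -> transfer basis

chat_assistant = TransferMapping(
    system="support-chat-v2",  # hypothetical system name
    training_data_origin=["EU", "US"],
    training_infrastructure="US-East (AWS)",
    fine_tuning_flow="EU data to US provider under DPF certification",
    inference_routing={"EU": "EU-West", "US": "US-East"},
    model_artifact_storage="Multi-region with EU primary",
    legal_mechanism={"EU-US": "Data Privacy Framework", "EU-UK": "Adequacy decision"},
)

# The regulator's question "where does EU user data go?" becomes a lookup:
print(chat_assistant.inference_routing["EU"])  # EU-West
```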

6. AI-Specific Regulatory Requirements

Three regulatory frameworks are reshaping what privacy programs must cover. If your privacy program was built before 2023, it was not designed to handle any of them.

EU AI Act Article 10: Data Governance for High-Risk AI

Article 10 applies to high-risk AI systems (healthcare, credit scoring, employment, law enforcement, and others listed in Annex III) with full compliance required by August 2, 2026.

The requirements are specific and operationally challenging:

  • Training, validation, and testing datasets must be subject to Data Governance and management practices appropriate for the intended purpose. This includes design choices, data collection processes, data preparation, and bias detection and mitigation.
  • Datasets must be relevant, sufficiently representative, and to the best extent possible free of errors and complete.
  • Datasets must account for the geographical, contextual, behavioral, and functional settings where the AI system will be used. A model trained on US healthcare data and deployed in Germany does not meet this standard unless the training data reflects the German context.
  • Paragraph 5 creates an exception: providers may process special categories of personal data (race, health, biometrics) for bias detection and correction, subject to appropriate safeguards. This is significant because it resolves a tension between the GDPR’s restrictions on sensitive data and the practical need to audit for bias.

For privacy teams, Article 10 means Data Governance for training data is no longer optional or aspirational. It is a legal requirement with penalties reaching 7% of global annual turnover.

Where the EU AI Act focuses on what organizations must do with training data, NIST addresses how to build the risk management infrastructure around it.

NIST AI RMF: GOVERN as the Foundation

The NIST AI Risk Management Framework organizes AI risk management into four functions: GOVERN, MAP, MEASURE, and MANAGE. GOVERN is the cross-cutting function that flows through all the others.

For privacy programs, the critical subcategories are:

  • GOVERN 1: Organization-wide processes for mapping, measuring, and managing AI risks, including standards for Data Quality and model training.
  • GOVERN 6: Policies and procedures for AI risks arising from third-party software, data, and supply chain dependencies.

NIST explicitly calls for aligning AI governance with broader Data Governance policies and practices, particularly the use of sensitive or otherwise risky data. If your privacy program and your AI governance program operate as separate workstreams, you are building duplicate structures with gaps between them.

Both the EU AI Act and NIST focus on the organizational and data governance side. GDPR Article 22 addresses a different surface: what happens when AI makes decisions about individuals.

GDPR Article 22: Automated Decision-Making

Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The threshold for “significant” is high: a content recommendation does not qualify, but automated credit decisions, employment screening, and insurance pricing do.

The practical challenge for ML teams is multi-stage pipelines. Your model might score applicants, a human reviewer might see the score, and the final decision might follow the score 98% of the time. Is that “solely automated”? Regulators are increasingly looking at the practical effect rather than the technical architecture. If downstream discretion exists in theory but not in practice, Article 22 protections still apply.

What to build. An AI regulatory mapping that identifies which AI systems fall under which regulatory requirements:

| AI System | EU AI Act Risk Level | GDPR Art. 22 | NIST AI RMF | Action Required |
| --- | --- | --- | --- | --- |
| Content recommendation | Limited risk | Not applicable (no significant effect) | MAP, MEASURE | Transparency disclosure |
| Credit scoring model | High risk (Annex III) | Applicable | Full framework | Article 10 compliance, human oversight, explainability |
| Ad targeting | Limited risk | Possibly applicable (profiling) | MAP, MEASURE | Opt-out mechanism, transparency |
| HR screening tool | High risk (Annex III) | Applicable | Full framework | Full Article 10, bias audit, human review |
| Customer chatbot | Limited risk | Not applicable | MAP | Transparency that user is interacting with AI |

7. Privacy-Enhancing Technologies as Implementation Tools

Privacy-enhancing technologies (PETs) are moving from research papers to regulatory expectations. 79% of compliance officers believe privacy-preserving computation will become a regulatory standard by 2028. Gartner predicts that 75% of businesses will use generative AI to create synthetic customer data by 2026. The PET market is projected to grow from approximately $3-4 billion in 2024 to $12-28 billion by 2030-2034.

These are not optional tools. They are becoming the expected implementation mechanism for the regulatory requirements described above.

| Technology | What It Does | When to Use It | Maturity Level |
| --- | --- | --- | --- |
| Differential privacy | Adds calibrated noise to data or model outputs so individual records cannot be reverse-engineered | Aggregate analytics, model training on sensitive data, usage reporting | Production-ready. Apple uses it across iOS for keyboard suggestions, health trends, and Siri improvements. NIST SP 800-226 provides implementation guidance. |
| Federated learning | Trains models across distributed datasets without centralizing the data | Multi-institution healthcare models, cross-company collaboration, on-device personalization | Production-ready for specific use cases. Google uses it for Gboard. Apple uses it for on-device model improvement. |
| Synthetic data generation | Creates statistically representative but non-real data | Testing environments, model pre-training, sharing datasets externally | Maturing. Requires validation that synthetic data does not reproduce source data biases. |
| Homomorphic encryption | Enables computation on encrypted data without decrypting it | Third-party model inference on sensitive data, secure analytics | Early production. Performance overhead remains significant for complex operations. |
| Secure multiparty computation | Multiple parties jointly compute a function without revealing their individual inputs | Collaborative model training, benchmark comparisons, joint analytics | Production-ready for structured use cases. Slower than plaintext computation. |
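To make the first of these techniques concrete, here is the classic Laplace mechanism for a count query, the simplest form of differential privacy. A stdlib-only sketch: a count query has sensitivity 1, so Laplace noise with scale 1/epsilon yields epsilon-DP. Production systems should use a vetted library rather than hand-rolled noise:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Inverse-CDF sample from a Laplace(0, scale) distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Differentially private count: sensitivity of a count query is 1,
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    return true_count + laplace_noise(1.0 / epsilon)

# Aggregate usage reporting without exposing any single user's record.
random.seed(0)
noisy = dp_count(true_count=1000, epsilon=0.5)
print(round(noisy))  # close to 1000; individual contributions are masked
```

Smaller epsilon means more noise and stronger privacy; choosing epsilon (and accounting for it across repeated queries) is the real engineering work.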

Apple’s approach provides a concrete reference model. Apple Intelligence does not use personal data to train foundation models. Instead, Apple creates synthetic data representative of aggregate trends, uses on-device differential privacy to learn patterns without accessing raw data, and processes sensitive tasks on-device rather than in the cloud. In April 2025, Apple expanded its use of differential privacy to Image Playground, Memories, Writing Tools, and Visual Intelligence.

This is not charity. Apple’s hardware margins fund the additional engineering investment. But it demonstrates that privacy-preserving AI is technically feasible, commercially viable, and increasingly expected by regulators and users.

What to build. A PET assessment for each AI use case:

  1. Does this use case involve personal data? If yes, identify which PET reduces the privacy surface.
  2. Can the model be trained on synthetic or anonymized data instead of raw personal data? If yes, validate that synthetic data preserves the statistical properties needed for model quality.
  3. Can inference happen on-device or in a trusted execution environment? If yes, reduce the data that flows to your servers.
  4. For third-party model providers, does the provider support confidential computing or encrypted inference? If not, document the privacy risk in your DPIA.

What “good” looks like. Every DPIA for an AI system includes a PET assessment. “We considered differential privacy but chose not to use it because [specific reason]” is acceptable. “We did not consider PETs” is not.

8. Governance Operating Model: Who Owns What

The framework described above is useless without clear ownership. In most organizations, privacy lives in Legal, AI lives in Engineering, Data Governance lives in a central team, and nobody owns the intersection. That gap is where compliance failures happen.

The emerging model, based on IAPP research and industry practice, is a hub-and-spoke structure with three layers:

Central privacy office. Led by the DPO or Chief Privacy Officer. Owns the framework: Data Classification taxonomy, retention schedules, consent architecture, sub-processor registry, cross-border transfer documentation, PET standards. Sets policy. Does not implement.

AI governance function. May be a dedicated team or a sub-function of the central office. Owns the AI-specific overlay: EU AI Act compliance mapping, model risk assessments, Article 22 reviews, training data provenance documentation. Works with Engineering to implement Article 10 requirements.

Embedded privacy champions. Data Stewards or Privacy Engineers embedded in product and engineering teams. Execute the framework within their domain. Classify training data, implement consent flows, maintain inference log retention, run PET assessments. Report into both their business unit and the central privacy office.

A RACI matrix makes ownership explicit:

| Activity | Central Privacy Office | AI Governance | Engineering/ML | Legal | Executive |
| --- | --- | --- | --- | --- | --- |
| Data Classification policy | A (Accountable) | C (Consulted) | R (Responsible) | C | I (Informed) |
| Training data provenance | C | A | R | C | I |
| Consent architecture design | A | C | R | C | I |
| Sub-processor registry | A | C | R | C | I |
| Cross-border transfer mapping | C | A | R | A | I |
| EU AI Act compliance | C | A | R | C | I |
| DPIA for AI systems | A | R | C | C | I |
| PET assessment | C | A | R | I | I |
| Incident response (AI) | C | R | R | A | I |

The titles are evolving. Organizations are creating hybrid roles like “Chief Trust Officer,” “Chief Privacy and AI Governance Officer,” and “AI Governance Lead.” These are not lawyers; they are technologists and program managers who understand how to build scalable systems. The IAPP salary reports confirm the trend: the fastest-growing privacy roles are in privacy engineering and AI governance operations.

What “good” looks like. When a new AI feature ships, there is a defined process for who reviews the Data Classification, who signs off on the consent tier, who verifies the sub-processor registry is current, and who runs the PET assessment. No feature launches without these gates. The process adds days, not months.
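Those launch gates can be enforced mechanically in CI or a release checklist. A sketch with hypothetical gate names, each mapping to a framework component above:

```python
# Hypothetical launch gates; each one maps to a framework component.
LAUNCH_GATES = [
    "data_classification_reviewed",
    "consent_tier_signed_off",
    "subprocessor_registry_current",
    "pet_assessment_complete",
]

def ready_to_launch(feature: dict) -> tuple[bool, list[str]]:
    """Return launch readiness plus the list of gates still open."""
    missing = [g for g in LAUNCH_GATES if not feature.get(g, False)]
    return (not missing, missing)

feature = {
    "name": "smart-replies",  # hypothetical feature
    "data_classification_reviewed": True,
    "consent_tier_signed_off": True,
    "subprocessor_registry_current": False,
    "pet_assessment_complete": True,
}
ok, missing = ready_to_launch(feature)
print(ok, missing)  # False ['subprocessor_registry_current']
```

A gate that blocks in CI is what keeps "no feature launches without these gates" from being an aspiration.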

The Enforcement Reality

If the framework above seems like overhead, the enforcement landscape provides the cost-of-inaction counterargument.

| Organization | Fine | Year | Root Cause |
| --- | --- | --- | --- |
| Clearview AI | ~EUR 100M (cumulative) | 2022-2024 | Scraping biometric data without consent, no legal basis, no transparency |
| OpenAI (ChatGPT) | EUR 15M | 2024 | No adequate legal basis for training data, transparency failures, insufficient age verification |
| Netflix | EUR 4.75M | 2024 | Vague third-party disclosures, unclear retention, insufficient cross-border transfer documentation |
| Meta AI | Ranked most privacy-invasive AI platform | 2025 | Aggressive data collection, no opt-out for model training, complex privacy policies |
| Google (various) | $1.38B (Texas settlement) | 2025 | Location tracking without consent, biometric data harvesting, Incognito mode misrepresentation |

Every fine in this table maps to a specific gap in the framework above. Clearview failed at consent and classification. OpenAI failed at consent and transparency. Netflix failed at third-party disclosure and retention. These are not edge cases. They are the predictable outcomes of building AI systems without a privacy framework that accounts for AI data.

The EDPB selected right to erasure as its 2025 enforcement priority, investigating 764 controllers across 32 European data protection authorities. Their finding: companies rely on anonymization as a substitute for actual deletion, and inconsistent practices around retention periods remain widespread. Machine unlearning and retention schedules are not abstract concerns; they are active enforcement targets.

Do Next

| Priority | Action | Why It Matters |
| --- | --- | --- |
| This week | Audit your Data Classification taxonomy for AI-specific categories (training data, model artifacts, inference logs) | If your classification does not distinguish between a customer database and an ML training dataset, you cannot comply with EU AI Act Article 10 or California AB 2013 |
| This month | Map every third party in your AI pipeline and publish or document a sub-processor registry | Netflix's generic "advertising companies" language cost them EUR 4.75M. OpenAI's lack of transparency cost them EUR 15M. Name your sub-processors. |
| This month | Replace "as long as necessary" retention language with specific periods per data category, including training data and model versions | The EDPB's 2025 enforcement priority is right to erasure. Vague retention is an audit finding waiting to happen. |
| This quarter | Build a consent architecture that separates service consent from AI training consent | The Italian Garante's ChatGPT fine established that blanket ToS consent is insufficient for model training. Layered consent is becoming the standard. |
| This quarter | Conduct a PET assessment for each AI use case and document the decision in your DPIA | 79% of compliance officers expect privacy-preserving computation to be a regulatory standard by 2028. Starting the assessment now gives you a two-year implementation runway. |
| By August 2026 | Complete EU AI Act Article 10 compliance for any high-risk AI systems | Full enforcement begins August 2, 2026. Penalties reach 7% of global annual turnover. |

This framework addresses what a modern privacy program should contain. But the regulatory landscape underpinning it deserves its own analysis. GDPR, the EU AI Act, the emerging state-by-state patchwork in the United States: these regulations interact, conflict, and create prioritization dilemmas that a framework alone cannot resolve. The regulatory landscape analysis maps that terrain.

The common thread across the Netflix teardown, the Apple teardown, and this framework is the same observation. Privacy is not a legal function. It is not a compliance function. It is a data architecture decision that shapes what you can build, how you can build it, and what it costs when you get it wrong. The organizations that treat privacy as an engineering discipline, not a checkbox, will have a structural advantage. Everyone else will keep paying fines and rewriting policies after the fact.

Sources & References

  1. EU AI Act - Article 10: Data and Data Governance (2024)
  2. NIST AI Risk Management Framework (AI RMF 1.0) (2023)
  3. ISO/IEC 42001:2023 - AI Management Systems (2023)
  4. GDPR Article 22 - Automated Decision-Making (2016)
  5. EDPB Opinion 28/2024 on AI Models and Personal Data (2024)
  6. California AB 2013 - AI Training Data Transparency (2024)
  7. Italian Garante Fines OpenAI EUR 15M for GDPR Violations (2024)
  8. Clearview AI - Dutch DPA EUR 30.5M Fine (2024)
  9. noyb WIN: Dutch Authority Fines Netflix EUR 4.75M (2024)
  10. NIST AI RMF Playbook - GOVERN Function (2023)
  11. Apple - Learning with Privacy at Scale (2017)
  12. Apple Intelligence - Differential Privacy for AI Training (2025)
  13. The Right to Be Forgotten Is Dead: Data Lives Forever in AI (2025)
  14. Gartner - 80% of Enterprises Will Use Generative AI by 2026 (2023)
  15. Designing the AI Governance Operating Model and RACI (2025)
  16. AI Data Residency Requirements by Region (2025)
  17. EDPB 2025 Coordinated Enforcement - Right to Erasure (2025)
  18. Gen AI and LLM Data Privacy Ranking 2025 - Incogni (2025)
  19. Consent in AI Applications - GDPR Local (2025)
  20. CNIL Recommendations for AI System Development (2024)
  21. Data Privacy Trends in 2026 (2026)
  22. Crowell & Moring - California AB 2013 Disclosure Requirements (2025)
  23. Data Foundation - Data Provenance in AI (2024)
