Privacy in Practice: From Compliant to Operationally Ready
Meridian Analytics completes its privacy transformation. This walkthrough covers cross-border transfer documentation, EU AI Act compliance mapping, PET assessments, the governance operating model, and what the company looks like six months later when Allianz asks the same DPIA questions.
Where We Left Off
In Part 5, Meridian built four foundation components: AI-specific Data Classification, ML-aware retention schedules, three-tier consent architecture, and a sub-processor registry. Those solved the transparency problem. But the Allianz DPIA asked harder questions: where does EU data flow, what legal mechanisms cover those flows, which AI systems qualify as high-risk, and who is accountable for AI Governance decisions?
This article covers how Sarah Chen built the Compliance and Governance layers to close those gaps.
Building the Compliance Layer
Component 5: Cross-Border Transfer Documentation
The Part 3 framework described cross-border transfer documentation as mapping three dimensions: where training data is stored and processed, where inference computation happens, and where model artifacts reside. Only 36% of organizations have full visibility into where their data is processed, according to InCountry’s 2025 survey. Meridian was about to discover it was in the other 64%.
Sarah pulled Meridian’s engineering team into a two-week mapping exercise. The goal was simple: for every data flow in the Copilot pipeline, document where data originates, where it moves, and what legal mechanism authorizes the movement.
The initial assumption was that EU data stays in the EU. Meridian’s primary infrastructure runs on AWS eu-west-1 (Ireland) for EU clients and us-east-1 (Virginia) for US clients. The engineering team expected the transfer mapping to be straightforward.
It was not.
The Populated Transfer Mapping
| Transfer Dimension | Meridian Copilot | Legal Mechanism |
|---|---|---|
| Training data origin | EU (GDPR), US (CCPA/state laws) | N/A (local processing) |
| Training infrastructure | AWS us-east-1 (fine-tuning compute) | EU data: SCCs + encryption in transit |
| Inference routing | EU users: AWS eu-west-1. US users: AWS us-east-1 | EU users: local processing. No transfer. |
| Anthropic API calls | Anonymized queries routed to Anthropic US endpoint | DPF certification. Anonymization documented. |
| Pinecone vector storage | US-hosted | SCCs via DPA. Embeddings, not raw data. |
| Model artifact storage | Primary: eu-west-1. Replica: us-east-1 | Adequacy for EU primary. SCCs for US replica. |
The mapping looks clean. The discoveries that produced it were not.
Discovery 1: The Anthropic Routing Problem
When Sarah’s team traced the inference path for EU users, they found that Copilot queries from EU clients were routed to Anthropic’s US endpoint. Anthropic’s own privacy documentation states that it may route customer traffic through select countries in the US, Europe, Asia, and Australia, but that data is stored in the US. For Meridian, this meant EU customer queries, potentially containing company names, data patterns, and analytical context, were crossing the Atlantic for every Copilot interaction.
The engineering team had configured Copilot to anonymize queries before sending them to the Anthropic API: stripping customer names, replacing specific data values with type tokens, and removing any directly identifying information. But the anonymization was undocumented. No one had validated whether the anonymization was effective against re-identification. No one had written a Transfer Impact Assessment covering this flow.
Sarah’s fix was two-pronged. First, document the existing anonymization process, including what gets stripped, what gets tokenized, and what residual re-identification risk exists. Second, route EU inference through AWS Bedrock in eu-central-1 (Frankfurt), which provides Claude models with genuine in-region processing. The Bedrock migration took six weeks, but it eliminated the transfer question entirely for EU inference.
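The anonymization step is described here only at the pattern level (strip names, tokenize values, remove identifiers). A minimal sketch of that pattern, using hypothetical rules and placeholder customer names rather than Meridian's actual implementation:

```python
import re

# Hypothetical sketch of the anonymization step described above:
# strip known customer names and replace literal data values with
# type tokens before a query leaves the EU boundary.

KNOWN_CUSTOMER_NAMES = {"Acme GmbH", "Contoso AG"}  # loaded from tenant config in practice

def anonymize_query(query: str, customer_names=KNOWN_CUSTOMER_NAMES) -> str:
    for name in customer_names:
        query = query.replace(name, "<CUSTOMER>")
    # Replace currency amounts, dates, and bare numbers with type tokens
    query = re.sub(r"[€$£]\s?\d[\d,.]*", "<AMOUNT>", query)
    query = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "<DATE>", query)
    query = re.sub(r"\b\d[\d,.]*\b", "<NUMBER>", query)
    # Strip email addresses as directly identifying information
    query = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "<EMAIL>", query)
    return query

print(anonymize_query("Why did Acme GmbH revenue drop €1,200,000 after 2024-06-30?"))
# → Why did <CUSTOMER> revenue drop <AMOUNT> after <DATE>?
```

Note that regex-based scrubbing like this is exactly the kind of control whose residual re-identification risk needs the documented validation Sarah's first fix calls for; token replacement alone does not guarantee effective anonymization.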
Discovery 2: The DataPulse Legacy Problem
The second discovery was worse. When Sarah’s team mapped the DataPulse infrastructure acquired eight months earlier, they found a MongoDB Atlas cluster running in US-East. DataPulse had served EU customers. Their data, including customer analytics configurations, user profiles, and recommendation engine training data, had been sitting on US infrastructure since the acquisition.
MongoDB Atlas lets customers choose their deployment region, and MongoDB provides SCCs through its Data Processing Agreement. But DataPulse had never executed SCCs with MongoDB. EU customer data had been transferring to US infrastructure without a legal basis for eight months.
The remediation was immediate. Sarah’s legal team executed SCCs with MongoDB retroactively while the engineering team began migrating the DataPulse cluster to eu-west-1. They also conducted a notification assessment: did this unlawful transfer require breach notification under GDPR Article 33? After consulting outside counsel, Meridian determined that the data was encrypted at rest, no unauthorized access had occurred, and the transfer was to a company (MongoDB) with robust security controls. The risk to data subjects was low. They documented the finding, the remediation, and the risk assessment in an internal incident report without triggering formal notification.
That report joined the DPIA file. “We found an issue and fixed it” is far better positioning than “we had no idea where our data was.”
Component 6: AI Regulatory Compliance Mapping
With the transfer mapping complete, Sarah turned to the question Meridian had been avoiding: which of its AI systems fall under the EU AI Act’s high-risk classification?
Meridian’s CTO, David Park, had operated under the assumption that Copilot was a chatbot. Chatbots are limited risk under the EU AI Act. Limited risk means transparency obligations (tell the user they are interacting with AI) and not much else. David had already added a disclosure banner to Copilot’s interface. He considered the AI Act handled.
Sarah asked a different question: “What do our clients use Copilot for?”
The product team pulled usage data. Most clients used Copilot for general analytics Q&A: “Show me revenue by region,” “What drove the variance in Q3?” Standard business intelligence queries. Limited risk.
But twelve insurance clients used a specialized module called Copilot for Insurance Analytics. This module processed policyholder data, ran risk scoring models, and generated pricing recommendations that underwriters used to set premiums. Two clients had automated the underwriting workflow to the point where Copilot’s recommendation was the default, with human review happening only on exceptions.
That module was not limited risk. Annex III of the EU AI Act explicitly lists “AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life and health insurance” as high-risk. Meridian’s insurance analytics module fell squarely within this classification.
The Populated AI Regulatory Mapping
| AI System | EU AI Act Risk Level | GDPR Art. 22 | Action Required | Deadline |
|---|---|---|---|---|
| Meridian Copilot (general Q&A) | Limited risk | Not applicable | Transparency disclosure that user is interacting with AI | Already implemented |
| Copilot for Insurance Analytics | High risk (Annex III) | Applicable | Full Article 10 compliance, human oversight, explainability | August 2, 2026 |
| DataPulse recommendation engine | Limited risk | Possibly applicable (profiling) | Opt-out mechanism, transparency | Already required |
| Internal HR screening tool | High risk (Annex III) | Applicable | Full Article 10, bias audit, human review | August 2, 2026 |
The surprise was not that insurance analytics was high-risk. The surprise was what “high-risk” actually required.
Article 10 mandates that training, validation, and testing datasets be “relevant, sufficiently representative, and to the best extent possible free of errors and complete” in view of the intended purpose. It requires documented Data Governance and management practices covering design choices, data collection processes, data preparation, and bias detection and mitigation. It requires that datasets account for the “geographical, contextual, behavioral, and functional settings” where the system will be used. A model trained on US insurance data and deployed for German insurers does not meet this standard unless the training data reflects the German context.
For Meridian, this meant the insurance analytics module needed: documented provenance for every training dataset, bias detection and mitigation processes, validation that training data represented the geographic and regulatory contexts of the EU insurance clients using the module, human oversight mechanisms that went beyond “an underwriter can override,” and a conformity assessment before the August 2, 2026 deadline.
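Article 10 does not prescribe a record format for provenance documentation. A minimal sketch of what a per-dataset provenance record covering the points above might look like, with hypothetical field names and values (not a prescribed EU AI Act schema):

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class DatasetProvenance:
    """Hypothetical provenance record for one training dataset,
    covering the Article 10 documentation points discussed above."""
    dataset_id: str
    source: str                      # where the data came from
    collection_basis: str            # legal basis for collection
    geographic_coverage: list[str]   # markets the data actually represents
    preparation_steps: list[str]     # cleaning, labelling, enrichment
    bias_checks: list[str]           # detection/mitigation performed
    last_reviewed: date

record = DatasetProvenance(
    dataset_id="ins-claims-2024-de",
    source="Client-contributed claims extracts (Tier 3 consent)",
    collection_basis="explicit consent",
    geographic_coverage=["DE"],
    preparation_steps=["deduplication", "outlier removal", "pseudonymization"],
    bias_checks=["demographic parity check across age bands"],
    last_reviewed=date(2026, 3, 1),
)
print(asdict(record)["geographic_coverage"])
```

The `geographic_coverage` field is the one that catches the US-data-for-German-insurers problem: a conformity reviewer can compare it directly against the deployment context.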
David’s reaction was candid. “We have four months. That module generates $2.3 million in ARR from twelve clients. What does compliance actually cost?”
Sarah’s estimate: one dedicated ML engineer for training data provenance (6 months), a bias audit from an external firm ($80,000-$120,000), and an AI Governance hire to own the compliance program ongoing. Total first-year cost: roughly $350,000 to $450,000. The alternative: penalties of up to 3% of global annual turnover or EUR 15 million for high-risk AI non-compliance, and potential loss of all twelve insurance clients when their procurement teams ask for the conformity assessment.
David approved the budget the same week.
The HR screening tool was a secondary discovery. Meridian used an internal AI tool to pre-screen engineering candidates, scoring resumes against job requirements before a recruiter reviewed them. Annex III classifies AI systems for recruitment and selection as high-risk. The tool was smaller in scope and used only internally, but the compliance obligations were identical. Sarah added it to the Article 10 workstream.
Building the Governance Layer
Component 7: PET Assessment
Part 3 of this series established the principle: every DPIA for an AI system should include a privacy-enhancing technology assessment. “We considered differential privacy but chose not to use it because [specific reason]” is acceptable. “We did not consider PETs” is not.
NIST finalized guidelines for evaluating differential privacy guarantees (SP 800-226) in March 2025, signaling that differential privacy is moving from research technique to auditable control. ISACA published its primer on privacy-enhancing technologies in 2024, exploring how PETs fit within the regulatory landscape. The direction is clear: documented PET assessments are becoming a compliance expectation, not a nice-to-have.
Sarah ran a PET assessment for each of Meridian’s AI use cases. The process was not about adopting every technology. It was about documenting the consideration.
| AI Use Case | PET Considered | Decision | Rationale |
|---|---|---|---|
| Copilot query analytics | Differential privacy on query logs | Adopted | Aggregated query analytics (popular topics, feature usage patterns) can use DP noise injection without quality loss. NIST SP 800-226 provides implementation guidance. |
| Copilot training (pre-training patterns) | Synthetic data generation | Adopted for non-client-specific patterns | Reduces personal data in the training pipeline. Gartner predicts widespread enterprise use of generative AI for synthetic data by 2026. Validated that synthetic data preserves statistical properties needed for model quality. |
| Copilot training (client-specific) | Federated learning | Rejected | Meridian’s single-tenant architecture makes federation unnecessary. Client data is already isolated by tenant. Federated learning adds complexity without reducing privacy risk in this architecture. |
| Insurance analytics | Homomorphic encryption for inference | Deferred (12-month reassessment) | Performance overhead is too high for real-time pricing queries. Latency increased 47x in testing. Reassess when hardware acceleration matures. |
| Insurance analytics training data | Bias-aware synthetic augmentation | Adopted | Generates synthetic policyholder profiles to balance underrepresented demographic segments. Addresses Article 10 requirement for representative training data. |
The federated learning rejection is worth noting. Sarah’s team spent two days evaluating it before concluding that it solved a problem Meridian did not have. Each client’s data already runs in an isolated tenant. Federated learning is designed for scenarios where multiple parties need to train a joint model without sharing raw data. Meridian trains models per tenant. The PET was technically interesting and architecturally irrelevant.
The homomorphic encryption deferral followed a similar pattern. The engineering team ran benchmarks on encrypted inference for the insurance analytics module. A query that took 200 milliseconds in plaintext took 9.4 seconds under homomorphic encryption. For a real-time pricing tool, that latency is disqualifying. Sarah documented the assessment, the benchmarks, and the decision to reassess in 12 months as hardware-accelerated HE becomes more accessible.
Each assessment went into the DPIA file with the same structure: what was considered, what was decided, and why. The documentation matters more than the outcome. An auditor who sees “we evaluated four PETs, adopted two, rejected one with rationale, and deferred one with a reassessment timeline” reads a mature privacy program. An auditor who sees no PET assessment reads a gap.
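The differential privacy control adopted for query analytics can be illustrated with the classic Laplace mechanism, one of the mechanisms NIST SP 800-226 analyzes. This sketch assumes query-topic counts are deduplicated per user, so any one user changes a count by at most 1 (sensitivity 1); the topic names and counts are invented:

```python
import math
import random

def dp_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the
    Laplace mechanism. Assumes each user contributes at most
    `sensitivity` to the count (true for per-user-deduplicated
    query-topic counts)."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5
    u = max(u, -0.5 + 1e-12)  # avoid log(0) at the boundary
    # Inverse-CDF sample from Laplace(0, scale)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Noisy topic-popularity figures for the aggregated analytics dashboard
raw = {"revenue by region": 412, "q3 variance": 187, "churn drivers": 95}
noisy = {topic: round(dp_count(n, epsilon=1.0)) for topic, n in raw.items()}
print(noisy)
```

With epsilon = 1 the noise is small relative to counts in the hundreds, which is why aggregated popularity metrics tolerate DP "without quality loss" while per-record releases would not.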
Component 8: Governance Operating Model
Everything Sarah had built so far was infrastructure: classification taxonomies, retention schedules, consent flows, transfer mappings, regulatory classifications, PET assessments. Infrastructure without governance is a set of documents that decay the moment they are published.
The IAPP’s 2025 Organizational Digital Governance Report found that only 17% of organizations have “aligned” governance models where privacy, AI governance, and cybersecurity operate as an integrated framework. 35% still operate in “analog” mode: reactive, siloed, firefighting. Meridian was somewhere in the middle. Sarah’s privacy work was coordinated, but it lived entirely inside her two-person team. No one in Engineering had formal privacy responsibilities. No one owned the AI Governance intersection.
Sarah proposed a hub-and-spoke governance structure with three layers.
Central Privacy Office (Sarah Chen, Head of Privacy + 2 analysts):
- Owns the framework: Data Classification taxonomy, retention policy, consent architecture, sub-processor registry, cross-border transfer documentation
- Sets policy and audits compliance
- Does not implement. Implementation belongs to the teams that own the systems.
AI Governance Function (new hire: Jordan Park, AI Governance Lead, reports to CTO with dotted line to Sarah):
- Owns the AI-specific overlay: EU AI Act compliance mapping, model risk assessments, Article 22 reviews, training data provenance documentation
- Works directly with Engineering to implement Article 10 requirements for the insurance analytics and HR screening modules
- Runs the PET assessment cycle and conformity assessment preparation
Embedded Privacy Champions (3 engineers designated across product teams):
- Execute within their domain: classify training data as it enters pipelines, implement consent flows in the product, maintain inference log retention schedules, flag new sub-processor relationships
- Dual reporting: to their engineering manager for day-to-day work, and to Sarah’s office for privacy governance alignment
- Weekly 30-minute sync with Sarah’s team to surface issues before they become findings
The hub-and-spoke model is not novel. What made it work at Meridian was the RACI matrix that made ownership explicit for every activity. Without a RACI, “hub-and-spoke” is just an org chart. With one, it is an operating model.
The Populated RACI
| Activity | Sarah (Privacy) | Jordan (AI Gov) | Engineering | Legal | CTO |
|---|---|---|---|---|---|
| Data Classification policy | A | C | R | C | I |
| Copilot training data provenance | C | A | R | C | I |
| Consent architecture design | A | C | R | C | I |
| Sub-processor registry | A | C | R | C | I |
| Cross-border transfer mapping | C | A | R | A | I |
| EU AI Act compliance | C | A | R | C | I |
| DPIA for AI systems | A | R | C | C | I |
| PET assessment | C | A | R | I | I |
Two design choices in this RACI deserve attention.
First, cross-border transfer mapping has dual accountability: Jordan for the AI pipeline transfers and Legal for the legal mechanism documentation. Sarah initially tried to make herself accountable for everything. Her outside counsel pushed back. Transfer mechanisms are legal instruments. Getting the SCCs right is a legal responsibility, not a privacy operations one. Sarah owns the data flow mapping. Legal owns the legal basis.
Second, DPIA authorship is split. Sarah is accountable (the DPIA is a privacy artifact), but Jordan is responsible (the content is AI-specific). Jordan writes the DPIA. Sarah reviews and signs off. This prevents the bottleneck where every DPIA sits in Sarah’s queue because she is the only person who can write them.
Jordan Park started in Month 4. His first deliverable was the EU AI Act conformity assessment roadmap for the insurance analytics module. His second was a model risk assessment framework adapted from the NIST AI RMF GOVERN function. By Month 5, the three Privacy Champions were embedded and operational, filing their first Data Classification reviews and flagging a new sub-processor (a logging tool one team had adopted without going through procurement).
The DPIA Response: Before and After
Six months after the Allianz DPIA request that started everything, Sarah tested the program. She pulled the original Allianz questionnaire and assigned it to Jordan and one Privacy Champion as a simulation. Could they complete the 47-question DPIA without escalating to Sarah for every answer?
They completed it in three business days. The first attempt had taken two weeks and ended with “we need more time.”
Here is what changed.
| Allianz DPIA Question | Before (Part 5) | After (6 months later) |
|---|---|---|
| Where does our data go when users interact with Copilot? | “Our infrastructure is hosted on AWS” | “EU queries process in eu-west-1. Anthropic inference routes through AWS Bedrock eu-central-1. Anonymized embeddings sent to Pinecone (US, SCCs). No raw PII leaves the EU.” |
| What third parties process our data? | “We work with industry-leading service providers” | “See our sub-processor registry at meridian.com/privacy/sub-processors. 5 processors listed with purpose, data categories, and legal basis for each.” |
| What is the legal basis for AI processing? | “We process data in accordance with our Terms of Service” | “Tier 1 (service): contractual necessity. Tier 2 (improvement): legitimate interest with documented balancing test. Tier 3 (AI training): explicit consent, separately granted, withdrawable without service impact.” |
| How long is our data retained? | “Data is retained as long as necessary for business purposes” | “Dashboard data: account active + 12 months. Inference logs: 90 days (insurance analytics), 30 days (general). Training data: deleted after training + 30-day validation window. Previous model versions: 90-day rollback period, then deleted.” |
| What safeguards exist for cross-border transfers? | “We implement appropriate safeguards” | “EU-US: SCCs with supplementary measures for AWS us-east-1 model replica. Anthropic inference: EU-region processing via Bedrock. Pinecone: SCCs + encryption. Transfer mapping document available on request.” |
| Have you assessed privacy-enhancing technologies? | (Not asked because previous answers were too vague to reach PET questions) | “Differential privacy adopted for query analytics. Synthetic data used for pre-training. Federated learning assessed and rejected (single-tenant architecture). Homomorphic encryption deferred with 12-month reassessment. Full PET assessment in DPIA Annex C.” |
| Who is accountable for AI Governance decisions? | “Our leadership team oversees all governance matters” | “Sarah Chen, Head of Privacy (DPIA accountability). Jordan Park, AI Governance Lead (EU AI Act compliance, training data provenance). Three embedded Privacy Champions across product engineering. RACI matrix available on request.” |
The difference is not just specificity. The before answers are defensive. They are written to avoid saying the wrong thing by not saying anything at all. The after answers are operational. They name systems, regions, legal mechanisms, and people. An enterprise procurement team reading these answers does not need a follow-up meeting to understand Meridian’s privacy posture.
Allianz’s procurement team approved Meridian for their expanded analytics contract. The DPIA review, which typically takes four to six weeks in their process, closed in twelve business days.
Six-Month Implementation Timeline
Meridian’s journey compressed a full privacy program build into six months. That pace was driven by the Allianz DPIA deadline, not by best practice. Most organizations will take nine to twelve months. The sequence matters more than the speed.
| Month | Milestone | Key Deliverable |
|---|---|---|
| 1 | Diagnostic + Classification | 8-component gap assessment against Part 3 framework. AI Data Classification taxonomy deployed. |
| 2 | Retention + Consent | ML-aware retention schedule published with specific periods per data category. Three-tier consent architecture designed and implemented. |
| 3 | Sub-processors + Transfers | Sub-processor registry published at meridian.com/privacy/sub-processors. Cross-border transfer mapping completed. DataPulse MongoDB migration initiated. Anthropic Bedrock EU migration started. |
| 4 | AI Governance hire + PET assessment | Jordan Park onboards as AI Governance Lead. PET assessments completed for all AI systems. Insurance analytics module flagged for high-risk compliance. |
| 5 | EU AI Act classification + DPIA process | Risk classification completed for all AI systems. DPIA template operationalized. Conformity assessment roadmap drafted for insurance analytics and HR screening. |
| 6 | Governance model operational | Three Privacy Champions embedded in product teams. First AI Governance Council review held. Allianz DPIA answered in three business days. |
Two implementation notes.
First, the DataPulse MongoDB migration (Month 3) and the Anthropic Bedrock migration (Month 3) ran in parallel with the privacy program build. These were engineering projects that required privacy to define the requirements and engineering to execute. Sarah did not manage the migrations. She defined the target state and held engineering accountable for delivery. That division of labor is the hub-and-spoke model in practice.
Second, Jordan Park’s hiring (Month 4) was later than ideal. Sarah spent her first three months doing both privacy and AI governance work. The PET assessments, the EU AI Act classification, and the DPIA process all slowed because Sarah was the only person who could do them. If Meridian had hired the AI Governance Lead in Month 1, the program would have moved faster. Organizations reading this should hire (or designate) the AI Governance function before starting the compliance layer, not after.
Do Next
| Priority | Action | Why It Matters |
|---|---|---|
| This week | Request a sample DPIA questionnaire from your largest EU client or find one online. Try to answer it. | If you struggle, your privacy program has the same gaps Meridian had. The DPIA is a diagnostic tool, not just a compliance artifact. |
| This month | Map every cross-border data transfer in your AI pipeline. Document the legal mechanism for each. | Only 36% of organizations have full visibility into where their data is processed. You need to be in that 36%. |
| This month | Classify your AI systems against the EU AI Act risk tiers. Flag any Annex III high-risk systems. | High-risk obligations take effect August 2, 2026. Conformity assessments take months, not weeks. |
| This quarter | Hire or designate an AI Governance lead. Establish the hub-and-spoke governance model with a published RACI. | Privacy and AI governance run as separate programs in most organizations. Only 17% have integrated governance models. That separation creates gaps regulators will find. |
| This quarter | Run a PET assessment for every AI use case. Document the decision (adopt, reject with rationale, defer with reassessment date). | “We did not consider PETs” is the new “we did not consider encryption.” NIST SP 800-226 signals that differential privacy is moving from research to auditable control. Document the assessment now. |
What Comes Next
Meridian’s transformation is complete. Six months from a DPIA request they could not answer to a privacy program that handles enterprise procurement questionnaires in three business days. Sarah has a team, a governance structure, a compliance roadmap, and documentation that would survive a regulatory audit.
The final article in this series steps back from the implementation details and asks what all of it adds up to. Four teardowns, one framework, one regulatory landscape analysis, and two implementation walkthroughs. Part 10 connects the findings across all nine preceding articles, identifies what surprised me, and provides a consolidated action plan for practitioners building or modernizing their own privacy programs.
The common thread will be familiar by now: privacy is not a legal function or a compliance function. It is a data architecture decision. Meridian’s story shows what happens when an organization treats it that way.
Sources & References
- EU AI Act - Article 10: Data and Data Governance (2024)
- EU AI Act - Annex III: High-Risk AI Systems (2024)
- EU AI Act Implementation Timeline (2025)
- IAPP Organizational Digital Governance Report 2025 (2025)
- IAPP Privacy Program Framework (2025)
- Centralized Privacy Office: The New Model for AI & Risk Governance - TrustArc (2025)
- Data Privacy Trends in 2026 (2026)
- NIST Finalizes Guidelines for Evaluating Differential Privacy Guarantees (2025)
- ISACA Primer on Privacy Enhancing Technologies (2024)
- AI Data Residency Requirements by Region (2025)
- InCountry - Cross-Border Data Regulations and AI (2025)
- Standard Contractual Clauses - European Commission (2021)
- Anthropic Privacy Center - Server Locations (2025)
- MongoDB GDPR Compliance - Atlas Architecture Center (2025)
- Pinecone Data Processing Addendum (2025)
- Designing the AI Governance Operating Model and RACI (2025)
- EDPB Opinion 28/2024 on AI Models and Personal Data (2024)
- Freshfields - Rising Risks for International Data Transfers 2026 (2026)
- Gartner - 75% of Businesses Will Use GenAI for Synthetic Data by 2026 (2023)
- EU-US Data Privacy Framework Adequacy Decision (2023)