Transfer Impact Assessment
This Transfer Impact Assessment (TIA) evaluates the risks associated with transferring personal data from the European Economic Area (EEA) to the United States in connection with the qualcode.ai service.
1. Purpose and Scope
Following the CJEU's Schrems II judgment, organizations transferring personal data to third countries must assess whether the legal framework of the destination country provides adequate protection. This assessment evaluates our transfers to:
- OpenAI, Inc. - AI classification provider (Rater A)
- Anthropic, PBC - AI classification provider (Rater B)
2. Transfer Details
2.1 Data Categories
| Category | Description |
|---|---|
| Survey Responses | Free-text answers that may contain personal information |
| Classification Context | Coding guide categories and training examples |
| Metadata | Request identifiers (no personally identifying information) |
2.2 Recipients
| Recipient | Purpose | Location |
|---|---|---|
| OpenAI, Inc. | AI text classification (GPT-4 series) | USA |
| Anthropic, PBC | AI text classification (Claude series) | USA |
3. US Legal Framework
The following US surveillance laws are relevant to this assessment:
3.1 FISA Section 702
Authorizes the targeting of non-US persons reasonably believed to be located outside the United States in order to acquire foreign intelligence information, with compelled assistance from US electronic communication service providers. Applicability: our AI providers could be compelled to provide data, but survey coding is unlikely to constitute "foreign intelligence information."
3.2 Executive Order 14086
Signed in October 2022, EO 14086 implements new safeguards for US signals intelligence activities, including proportionality requirements and a Data Protection Review Court accessible to EU individuals.
4. Contractual Safeguards
Both AI providers have executed the EU Standard Contractual Clauses (Commission Implementing Decision (EU) 2021/914) with us, supplemented by:
- Data Processing Agreements prohibiting the use of customer API data for model training
- Commitment to notify us of government access requests (where legally permitted)
- Commitment to challenge overbroad requests
Note: While these contractual commitments provide additional protection, they cannot override US surveillance laws such as FISA Section 702. The practical enforceability of commitments to challenge or notify is limited by legal constraints.
5. Technical Safeguards
- Encryption: TLS for all API communications
- Transient processing: Data is processed transiently and not used for model training. OpenAI and Anthropic may retain request logs for abuse detection (up to 30 days) per their API terms.
- Request isolation: No cross-customer data exposure
- Data minimization: Only survey text and necessary context transmitted
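The data minimization and metadata measures above can be sketched in code. The following is a minimal, hypothetical illustration (not qualcode.ai's actual implementation): it redacts common direct identifiers from a survey response before transmission and attaches a random request identifier containing no personal information. The regular expressions and placeholder tokens are illustrative assumptions; real-world PII detection requires more robust tooling.

```python
import re
import uuid

# Illustrative patterns for two common direct identifiers.
# These are simplified for the sketch and will not catch all formats.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(response_text: str) -> dict:
    """Redact direct identifiers and attach a PII-free request ID."""
    redacted = EMAIL_RE.sub("[EMAIL]", response_text)
    redacted = PHONE_RE.sub("[PHONE]", redacted)
    return {
        # Random identifier: metadata only, reveals nothing about the person.
        "request_id": str(uuid.uuid4()),
        "text": redacted,
    }

payload = minimize("Contact me at jane.doe@example.com or +49 170 1234567.")
print(payload["text"])  # identifiers replaced with placeholder tokens
```

In this sketch, only the redacted text and the opaque `request_id` would be sent to the classification API, consistent with the "only survey text and necessary context transmitted" principle.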
6. Risk Assessment
6.1 Likelihood of Government Access
We assess the likelihood as Low to Medium based on:
- Data is processed transiently (though FISA Section 702 applies to communications in transit)
- Survey responses unlikely to constitute foreign intelligence targets for most use cases
- Limited intelligence value of typical academic/market research data
Important: For sensitive research topics (political opinion surveys, health data, research involving specific nationalities), the risk may be higher. Researchers should conduct their own assessments for such use cases.
6.2 Impact Assessment
| Data Type | Impact Level |
|---|---|
| Standard market research | Low |
| Academic opinion surveys | Low-Medium |
| Surveys with identifiable data | Medium |
| Sensitive topic surveys | Medium-High |
6.3 Overall Assessment
Given the low-to-medium likelihood of access and the typically low impact for standard use cases, we assess the overall transfer risk as Acceptable with current safeguards for typical academic and market research applications. Because supplementary measures provide only limited additional protection against the legal authority of US government agencies, this assessment rests primarily on the characteristics of the data (transient processing, academic focus) rather than on contractual enforceability.
7. Conclusion and Recommendations
- Transfers to OpenAI and Anthropic are lawful under EU SCCs with supplementary measures in place
- Risk of problematic government access is low to medium for typical research data, though contractual safeguards have limited enforceability against US surveillance laws
- Supplementary measures provide partial mitigation but cannot fully address Schrems II gaps due to the legal authority of US agencies
- Customers should minimize personal data in uploads and consider anonymization for sensitive surveys
- For sensitive research topics, customers should conduct their own transfer impact assessments
8. Review Schedule
This assessment is reviewed:
- Annually: Next review January 2027
- Upon material changes: US law changes, provider policy changes, EU guidance updates
Document version: 1.0 | Assessment date: January 2026
For questions about this assessment, contact legal@qualcode.ai.