
Fundamental Rights Impact Assessments Under the EU AI Act: The Article 27 Obligation Most Enterprises Are Ignoring

Team KI-Akte
March 9, 2026 · 10 min read

There is a provision in the EU AI Act that receives far less attention than risk classification or conformity assessments, yet applies to virtually every bank, insurance company, large employer, and public-service provider in Europe. It is Article 27 — the obligation to conduct a Fundamental Rights Impact Assessment (FRIA) before deploying certain high-risk AI systems.

Most compliance teams are unaware of this requirement, conflate it with the GDPR's Data Protection Impact Assessment (DPIA), or assume it will be addressed by existing processes. All three positions are wrong. The FRIA is a distinct legal obligation with its own scope, methodology, and notification requirement. And the European Commission has not published a template, leaving enterprises to figure it out themselves.

This article explains what a FRIA is, who must conduct one, what it must contain, and how to implement it in practice.

What Is a FRIA and Why Is It Different from a DPIA?

A Data Protection Impact Assessment under GDPR Article 35 focuses on a specific question: what are the risks to data protection rights arising from a particular processing activity? Its scope is limited to privacy and personal data.

A Fundamental Rights Impact Assessment under EU AI Act Article 27 asks a fundamentally broader question: what are the risks to all fundamental rights arising from the deployment of a high-risk AI system? This includes privacy, but also encompasses non-discrimination, human dignity, freedom of expression, the right to an effective remedy, the right to a fair trial, workers' rights, consumer protection, and the rights of the child.

The differences are structural:

| Dimension | DPIA (GDPR Art. 35) | FRIA (AI Act Art. 27) |
|---|---|---|
| Legal basis | GDPR Art. 35 | EU AI Act Art. 27 |
| Scope | Data protection rights | All fundamental rights |
| Trigger | High-risk processing of personal data | Deployment of specific high-risk AI systems |
| Who conducts | Data controller | AI system deployer |
| Notification | DPA (only if high residual risk) | Market surveillance authority (always) |
| Relationship | Standalone obligation | Complements the DPIA (Art. 27(4)) |

Article 27(4) explicitly states that the FRIA "shall complement" the DPIA — it does not replace it. If your AI system triggers both obligations, you need to conduct both assessments. You may build on the DPIA's findings (especially around data protection risks), but you must separately address the broader fundamental rights that the DPIA does not cover.

Who Must Conduct a FRIA?

The FRIA obligation applies to three categories of deployers:

Public Sector Deployers

Any body governed by public law that deploys a high-risk AI system listed in Annex III must conduct a FRIA. This covers government agencies, municipalities, public universities, public hospitals, and all other entities operating under public law. In the DACH region, this includes Bundesbehörden (federal authorities), Landesbehörden (state authorities), Kommunalverwaltungen (municipal administrations), and their equivalents in Austria and Switzerland.

Private Entities Providing Public Services

Private companies that provide services of public interest — outsourced government functions, public transport operators, privatized utilities, contracted social services — also fall under the FRIA obligation when they deploy Annex III high-risk AI systems.

Employment and Essential Services Deployers (Public or Private)

This is the category that catches the most organizations by surprise. Any deployer — public or private — must conduct a FRIA when deploying high-risk AI for:

  • Employment, workers' management, and access to self-employment (Annex III, point 4): This covers AI-powered recruitment tools, CV screening, interview analysis, performance evaluation, promotion decisions, task allocation, monitoring, and termination support.

  • Access to and enjoyment of essential private services — specifically creditworthiness assessment and credit scoring (Annex III, point 5(b)) and risk assessment and pricing in life and health insurance (Annex III, point 5(c)).

The practical implication for the DACH market is significant. Every bank using AI for credit decisions, every insurance company using AI for risk pricing, and every large employer using AI in their HR processes must conduct a FRIA. This is not limited to companies that consider themselves "AI companies" — it applies to any organization that deploys these specific types of high-risk AI systems, including off-the-shelf vendor tools.

The Six Required Components of a FRIA

Article 27(1) specifies six elements that every FRIA must contain:

(a) Description of the Deployer's Processes

Document the specific business processes where the AI system will be used. This is not a generic system description — it must describe how your organization uses the system, in your specific context, for your specific purposes.

For example, for an AI-assisted recruitment tool: "The system is used in the first stage of our graduate recruitment process to screen approximately 3,000 applications per intake cycle. Applications are uploaded in PDF format. The system extracts key data points and produces a shortlist of approximately 300 candidates, ranked by predicted job-fit score. A human recruiter reviews the shortlist and makes final interview invitations."

(b) Period of Time and Frequency of Use

Specify when and how often the AI system operates. Is it used continuously, periodically, or for one-time batch processing? How long will the deployment last? Is it a pilot or permanent deployment?

This matters because the frequency and duration of use directly affect the number of people impacted and the cumulative risk of harm.

(c) Categories of Affected Natural Persons and Groups

Identify, in the specific context of your deployment, which categories of people will be affected by the AI system's outputs. This goes beyond the obvious direct users to include indirect impacts.

For the recruitment example: job applicants (directly affected by ranking), existing employees (indirectly affected if the tool changes team composition), and people who never apply because the job ad was targeted away from them by an AI-powered advertising tool (affected by upstream AI decisions).

Pay particular attention to vulnerable groups: people with disabilities, ethnic minorities, older workers, non-native speakers, people with non-standard career paths. The FRIA must specifically assess whether the AI system poses disproportionate risks to these groups.

(d) Specific Risks of Harm

For each identified group, assess the specific risks of harm to their fundamental rights. This must be based on the information provided by the AI system's provider under Article 13 (transparency obligations), combined with your own knowledge of the deployment context.

Key rights to assess:

  • Non-discrimination (Art. 21 EU Charter): Does the system produce different outcomes for different demographic groups? Has the provider disclosed bias testing results?
  • Human dignity (Art. 1): Does the system reduce people to scores or categories in ways that affect their dignity?
  • Freedom of expression (Art. 11): Could the system suppress or discourage certain types of communication?
  • Right to an effective remedy (Art. 47): Can affected persons challenge the system's outputs? Is there a meaningful appeal process?
  • Workers' rights (Art. 31): Does the system affect working conditions, surveillance levels, or autonomy?
  • Consumer protection (Art. 38): Does the system affect access to products or services?
  • Rights of the child (Art. 24): Are minors among the affected persons?

(e) Human Oversight Measures

Document the human oversight measures in place, per the AI system provider's instructions for use (Article 14). Specify who oversees the system, what training they have received, what authority they have to override or disregard the system's outputs, and how frequently they exercise meaningful review.

Be honest in this section. If your human oversight is a rubber stamp — if the human reviewer accepts the AI's recommendation 98 percent of the time without independent evaluation — that is not meaningful oversight, and documenting it as such will not satisfy a regulator.

(f) Remediation and Complaint Mechanisms

Describe the measures to be taken if the identified risks materialize. This includes:

  • Internal governance arrangements for detecting and responding to harm
  • Complaint mechanisms accessible to affected persons
  • Escalation procedures and timelines for resolution
  • Processes for suspending or modifying the AI system if harm is identified
  • Communication protocols for informing affected persons

This component often requires establishing new processes. Most organizations do not have a dedicated complaint channel for "the AI system treated me unfairly." Under the FRIA, they need one.
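
To keep the six components together in one place, it can help to capture them in a single structured record that doubles as internal documentation. The sketch below is a minimal illustration in Python; the field names are our own shorthand for points (a) through (f) of Article 27(1), not an official template, which has yet to be published.

```python
from dataclasses import dataclass


@dataclass
class FRIARecord:
    """Illustrative structure mirroring the six components of an Article 27(1) FRIA."""
    # (a) Description of the deployer's processes in which the system is used
    deployer_processes: str
    # (b) Period of time and frequency of use
    period_and_frequency: str
    # (c) Categories of natural persons and groups likely to be affected
    affected_groups: list[str]
    # (d) Specific risks of harm, per group and fundamental right
    risks_of_harm: list[str]
    # (e) Human oversight measures per the provider's instructions for use
    human_oversight: str
    # (f) Measures to be taken if the risks materialise (governance, complaints, escalation)
    remediation_measures: list[str]

    def is_complete(self) -> bool:
        """Every component must be filled in before notifying the authority."""
        return all([
            self.deployer_processes.strip(),
            self.period_and_frequency.strip(),
            self.affected_groups,
            self.risks_of_harm,
            self.human_oversight.strip(),
            self.remediation_measures,
        ])
```

A record like this maps one-to-one onto the eventual notification and makes gaps visible before anything is submitted.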

The Notification Obligation You Cannot Skip

After completing the FRIA, Article 27(3) requires the deployer to notify the relevant market surveillance authority of the results. This is mandatory, not conditional. Unlike a DPIA under GDPR — where you only consult the supervisory authority if you cannot mitigate high residual risks — the FRIA notification happens in all cases, regardless of the risk level identified.

In Germany, the primary market surveillance authority for the AI Act is the Bundesnetzagentur (BNetzA), supported by the KoKIVO coordination center. However, sector-specific authorities may also be involved depending on the use case — BaFin for financial services, the relevant Landesbehörde for employment, or the applicable medical device authority for health-related AI.

The notification is not an approval process. The authority does not "sign off" on your FRIA. But the submission creates a regulatory record, and authorities can and will use FRIA filings to prioritize enforcement activities and identify patterns of non-compliance.

Practical Implementation: Seven Steps

Step 1: Identify which AI systems require a FRIA. Cross-reference your AI inventory with the three categories of deployers above. Any high-risk system deployed for employment decisions, credit scoring, insurance pricing, or by a public-sector entity needs one.
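
A minimal sketch of this screening step, assuming a hypothetical internal AI inventory that records a deployer type and an Annex III category for each system (the category labels and the helper function are illustrative, not terms from the Act):

```python
# Hypothetical labels for the Annex III categories that trigger a FRIA for any
# deployer, public or private (points 4, 5(b) and 5(c) of Annex III).
FRIA_CATEGORIES_ANY_DEPLOYER = {
    "employment_and_workers_management",   # Annex III, point 4
    "creditworthiness_assessment",         # Annex III, point 5(b)
    "life_health_insurance_pricing",       # Annex III, point 5(c)
}

PUBLIC_SERVICE_DEPLOYERS = {"public_body", "private_entity_providing_public_services"}


def fria_required(deployer_type: str, annex_iii_category: str | None) -> bool:
    """Rough screening: does this deployment of a high-risk AI system need a FRIA?"""
    if annex_iii_category is None:
        # Not an Annex III high-risk system: Article 27 does not apply.
        return False
    if deployer_type in PUBLIC_SERVICE_DEPLOYERS:
        # Public bodies and public-service providers: FRIA for any Annex III system.
        return True
    # Private deployers: only the employment, credit, and insurance use cases.
    return annex_iii_category in FRIA_CATEGORIES_ANY_DEPLOYER


# Example: a private bank deploying a vendor credit-scoring tool.
print(fria_required("private_company", "creditworthiness_assessment"))  # True
```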

Step 2: Gather provider information. Request the Article 13 transparency documentation from your AI system provider. This should include intended purpose, capabilities and limitations, performance metrics, known biases, and instructions for human oversight.

Step 3: Map affected persons and groups. Go beyond the obvious. Consider direct users, subjects of decisions, and people indirectly affected. Identify vulnerable subgroups.

Step 4: Assess risks to specific fundamental rights. Use the EU Charter of Fundamental Rights as your framework. For each affected group, evaluate risks to each relevant right. Be specific about severity, probability, and reversibility of harm.
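
One way to keep this step concrete is to record each identified risk as a structured entry per affected group and Charter right, with explicit severity, probability, and reversibility ratings. The scale and prioritisation rule below are our own illustration, not something Article 27 prescribes.

```python
from dataclasses import dataclass


@dataclass
class RightsRiskEntry:
    """One identified risk of harm, assessed per affected group and Charter right."""
    affected_group: str   # e.g. "applicants with non-standard career paths"
    charter_right: str    # e.g. "Art. 21 EU Charter (non-discrimination)"
    description: str      # how the harm could materialise in this deployment
    severity: int         # 1 (minor) to 5 (severe), illustrative scale
    probability: int      # 1 (unlikely) to 5 (very likely), illustrative scale
    reversible: bool      # can the harm be undone after the fact?

    def priority(self) -> int:
        """Illustrative prioritisation: irreversible harms weigh double."""
        return self.severity * self.probability * (1 if self.reversible else 2)
```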

Step 5: Document oversight and remediation. Describe your actual human oversight practices and complaint mechanisms honestly. If they are inadequate, strengthen them before completing the FRIA.

Step 6: Compile and submit the FRIA. Assemble the six required components into a single, structured document. Notify the market surveillance authority.

Step 7: Update when circumstances change. A FRIA is not a one-time exercise. When the AI system is updated, when the deployment context changes, when new risks emerge, or when affected groups change, the assessment must be revised. Article 27(1) requires that the FRIA be performed prior to deployment, and Article 27(2) requires the deployer to update the assessment when any of its elements is no longer up to date.

Building on Your Existing DPIA Work

If you have already conducted a DPIA for the same AI system under GDPR Article 35, you do not need to start from scratch. Article 27(4) explicitly permits building on existing DPIAs. The privacy risk assessment, data flow mapping, and processing descriptions from your DPIA can serve as inputs to the FRIA.

What you need to add:

  • Broader fundamental rights assessment beyond data protection (non-discrimination, dignity, workers' rights, consumer protection)
  • Specific identification of affected groups in your deployment context (not just data subjects in general)
  • Human oversight assessment per the AI provider's instructions
  • Dedicated remediation and complaint mechanisms for AI-related harm
  • Mandatory notification to market surveillance authority (not conditional on residual risk)

The FRIA is not a checkbox. It is a genuine assessment of whether your use of an AI system respects the fundamental rights of the people it affects. Conducted properly, it serves as both a compliance safeguard and a practical tool for identifying and mitigating real harms before they occur. Conducted as a paper exercise, it satisfies neither the regulation's intent nor its letter — and it will not withstand regulatory scrutiny when enforcement begins in August 2026.