Article 6(3) of the EU AI Act: How to Classify Your AI System as Non-High-Risk — And the Profiling Trap


Risk classification is the single most consequential compliance decision you will make under the EU AI Act. Get it wrong in one direction, and you spend millions on unnecessary conformity assessments for a system that never needed them. Get it wrong in the other direction, and you face fines of up to EUR 15 million or 3 percent of global annual turnover (Article 99(4)) and a forced market withdrawal.
Article 6 of the EU AI Act establishes the rules for determining whether an AI system is high-risk. The default is straightforward: if your system falls into one of the eight Annex III categories, it is high-risk. But Article 6(3) creates a narrow exemption that, when properly applied, can relieve a significant compliance burden. The catch is that the exemption conditions are strict, the documentation requirement is mandatory, and one specific clause, the profiling override, catches organizations that thought they were in the clear.
The Default Rule: Annex III Means High-Risk
Before discussing the exception, it helps to understand the baseline. An AI system is classified as high-risk through two pathways:
Annex I pathway: The system is a safety component of a product covered by existing EU harmonisation legislation (medical devices, machinery, toys, aviation, automotive, etc.) or is itself such a product. These systems must undergo third-party conformity assessment.
Annex III pathway: The system falls into one of eight use-case categories regardless of what product it is part of. This is the pathway most relevant to enterprise software.
Here are the Annex III categories, with concrete examples of the kinds of systems DACH enterprises commonly deploy:
| Category | Examples |
|---|---|
| Biometrics | Facial recognition for building access, emotion detection in call centers |
| Critical infrastructure | AI-controlled energy grid balancing, predictive maintenance in water treatment |
| Education | Automated exam grading, university admission ranking algorithms |
| Employment & HR | AI-powered CV screening, interview analysis tools, performance scoring |
| Essential services | Automated credit scoring, insurance risk pricing, loan eligibility engines |
| Law enforcement | Predictive policing tools, evidence analysis software |
| Migration | Visa risk assessment, border surveillance analytics |
| Justice & democracy | Legal research AI, case outcome prediction tools |
If your AI system performs any function within these categories, the default classification is high-risk — with all corresponding obligations under Articles 8 through 15, conformity assessment under Article 43, and registration under Article 49.
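For teams maintaining a machine-readable AI inventory, the eight areas can be pinned down as an enumeration so that every system is tagged consistently. A minimal sketch in Python; the enum name and labels are illustrative shorthand, not terminology prescribed by the Act:

```python
from enum import Enum

class AnnexIIICategory(Enum):
    """The eight Annex III use-case areas (labels are illustrative)."""
    BIOMETRICS = "Biometrics"
    CRITICAL_INFRASTRUCTURE = "Critical infrastructure"
    EDUCATION = "Education"
    EMPLOYMENT_HR = "Employment & HR"
    ESSENTIAL_SERVICES = "Essential services"
    LAW_ENFORCEMENT = "Law enforcement"
    MIGRATION = "Migration"
    JUSTICE_DEMOCRACY = "Justice & democracy"
```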
The Exception: Article 6(3) Explained
Article 6(3) provides that an Annex III system is not considered high-risk if it "does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making."
This exemption is available when the AI system meets at least one of four conditions:
Condition 1: Narrow Procedural Task
The system performs a narrow procedural task — for example, converting unstructured data into structured data, classifying incoming documents into predefined categories, or detecting duplicates. The key word is "narrow." If the system's output feeds directly into a decision about a natural person, it is unlikely to qualify.
Qualifies: An AI tool that automatically extracts key dates from employment contracts and populates a database. The extraction is mechanical and does not evaluate the person.
Does not qualify: An AI tool that extracts key dates from employment contracts and flags contracts nearing probation period end, triggering a performance review. The flagging influences a decision about a person.
Condition 2: Improving Completed Human Activity
The system improves the result of a previously completed human activity. The human decision is already made; the AI merely refines or formats it. Think of spell-checking, grammar correction, or layout optimization after a human author has written a document.
Qualifies: An AI that polishes the language of a human-written job rejection letter after the hiring manager has already made the decision.
Does not qualify: An AI that suggests edits to the hiring criteria themselves, which would change future decisions.
Condition 3: Pattern Detection Without Influence
The system detects decision-making patterns or deviations from prior patterns without replacing or influencing human assessment. This covers analytical dashboards and anomaly detection where a human still makes the actual decision.
Qualifies: A dashboard that shows hiring managers their historical rejection rates by demographic group, without making recommendations.
Does not qualify: A system that flags specific candidates as "high-risk hires" based on pattern analysis, even if a human makes the final call — because the flag influences the assessment.
Condition 4: Preparatory Task
The system performs a preparatory task to an assessment relevant to the Annex III use cases. This is the broadest condition, but "preparatory" means the AI output is one input among many, and a human performs the substantive evaluation.
Qualifies: An AI that summarizes the key points of 200 credit applications into two-page summaries for a human credit officer to review and decide.
Does not qualify: An AI that scores the 200 credit applications on a 1-100 scale and ranks them. The scoring is the substantive assessment, even if a human rubber-stamps it.
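In an inventory tool, the four conditions translate naturally into documented yes/no findings per system. A minimal sketch of that structure; the field names are my own shorthand, not the Act's wording:

```python
from dataclasses import dataclass

# Illustrative sketch of the four Article 6(3) conditions as documented
# yes/no findings. Field names are shorthand, not terminology from the Act.
@dataclass
class ExceptionConditions:
    narrow_procedural_task: bool             # Condition 1
    improves_completed_human_activity: bool  # Condition 2
    pattern_detection_no_influence: bool     # Condition 3
    preparatory_task_only: bool              # Condition 4

    def any_met(self) -> bool:
        """The exemption requires at least one condition to hold."""
        return (self.narrow_procedural_task
                or self.improves_completed_human_activity
                or self.pattern_detection_no_influence
                or self.preparatory_task_only)
```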
The Profiling Trap: The Clause That Overrides Everything
Here is where most organizations get caught. Article 6(3) contains one sentence that overrides all four exception conditions:
"The exemption... shall not apply where the AI system performs profiling of natural persons."
The definition of profiling comes from GDPR Article 4(4): "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements."
This definition is extremely broad. Consider the following systems that many enterprises assume are low-risk:
- HR analytics that scores employee engagement based on email response times and meeting attendance — this evaluates "performance at work"
- Customer segmentation that groups users into "high-value" and "low-value" tiers based on purchase history — this evaluates "economic situation" and "preferences"
- Churn prediction that identifies customers likely to cancel based on behavioral patterns — this evaluates "reliability" and "behaviour"
- Employee wellbeing tools that detect stress indicators from communication patterns — this evaluates "health"
- Location-based recommendations that personalize content based on where a user typically works — this evaluates "location or movements"
Every one of these constitutes profiling. Even if the system meets one of the four exception conditions above, the profiling override applies. The system is high-risk. Period.
The practical implication is significant: if your AI system processes personal data and produces any kind of assessment, score, prediction, or categorization about individual people, you should assume the profiling override applies unless you can demonstrate otherwise with documented legal analysis.
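That working assumption can be operationalized as a conservative screening check over the aspect list in GDPR Article 4(4). A sketch only: a True result signals the need for documented legal analysis, it is not a legal conclusion in itself.

```python
# Illustrative screening helper built around the GDPR Article 4(4) aspect
# list. A True result means "obtain documented legal analysis", nothing more.
PROFILING_ASPECTS = frozenset({
    "performance at work", "economic situation", "health",
    "personal preferences", "interests", "reliability",
    "behaviour", "location or movements",
})

def likely_profiling(processes_personal_data: bool,
                     evaluated_aspects: set[str]) -> bool:
    """Flag likely profiling: automated processing of personal data that
    evaluates any Article 4(4) personal aspect of a natural person."""
    return processes_personal_data and bool(evaluated_aspects & PROFILING_ASPECTS)
```

For the churn-prediction example above, `likely_profiling(True, {"behaviour", "reliability"})` returns True, which is exactly the point: systems that feel like routine analytics trip the screen.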
The Mandatory Documentation Requirement
If you conclude that an Annex III system is not high-risk, the assessment itself becomes a compliance artifact: Article 6(4) requires providers who claim the exemption to document their assessment before placing the system on the market or putting it into service, and such systems must still be registered in the EU database under Article 49(2). The documentation must be provided to national competent authorities and market surveillance authorities on request.
The documentation should include:
- System description — What the AI system does, what inputs it takes, what outputs it produces
- Annex III mapping — Which category the system falls under and why
- Exception condition analysis — Which of the four conditions applies, with specific reasoning
- Profiling assessment — Explicit analysis of whether the system performs profiling as defined in GDPR Article 4(4), with a clear conclusion
- Risk evaluation — Why the system does not pose a significant risk of harm to health, safety, or fundamental rights
- Conclusion and rationale — The final classification decision with supporting evidence
Providing incorrect information in this documentation is itself a violation, carrying a penalty of up to EUR 7.5 million or 1 percent of global annual turnover (Article 99(5)).
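The checklist above maps directly onto a structured record that can be versioned alongside the system. A minimal sketch; the schema is an assumption for internal record-keeping, not a template prescribed by the Act:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative record mirroring the documentation checklist above.
@dataclass
class Article6Assessment:
    system_description: str            # what it does, inputs, outputs
    annex_iii_mapping: str             # which category, and why
    exception_condition_analysis: str  # which condition(s) apply, with reasoning
    profiling_assessment: str          # explicit GDPR Art. 4(4) analysis and conclusion
    risk_evaluation: str               # why no significant risk of harm
    conclusion: str                    # final classification with supporting evidence
    assessed_on: date = field(default_factory=date.today)
```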
A Decision Flowchart
For each AI system in your inventory, work through these questions in order:
Step 1: Is the system a safety component of a product covered by Annex I EU harmonisation legislation, or itself such a product? If yes → high-risk (Annex I pathway). Stop. If no → continue.
Step 2: Does the system fall into any of the eight Annex III categories? If no → not high-risk under Annex III; consider limited-risk transparency obligations (Article 50) if the system generates synthetic content, interacts with humans, or is used for emotion recognition or biometric categorization. Stop. If yes → continue.
Step 3: Does the system perform profiling of natural persons? If yes → high-risk. No exception is possible. Stop.
Step 4: Does the system meet at least one of the four Article 6(3) exception conditions? Analyze each condition carefully with documented reasoning. If no → high-risk. Stop. If yes → the system may be exempt from high-risk obligations.
Step 5: Even with an exception condition met, does the system still pose a significant risk of harm? Consider the severity and probability of harm, the number of affected persons, and whether vulnerable groups are involved. If significant risk remains → high-risk despite meeting an exception condition.
Step 6: Document the assessment comprehensively before deployment. Retain the documentation and make it available to authorities on request.
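The six steps collapse into a short classification routine. A sketch assuming a hypothetical inventory record with boolean attributes mirroring the questions above; it encodes the order of the checks, not legal advice:

```python
from dataclasses import dataclass

@dataclass
class SystemFacts:
    """Hypothetical inventory record; attribute names are my own."""
    is_annex_i_safety_component: bool
    in_annex_iii_category: bool
    performs_profiling: bool
    meets_exception_condition: bool
    poses_significant_risk: bool

def classify(s: SystemFacts) -> str:
    """Apply the six-step flow in order; returns a classification label."""
    if s.is_annex_i_safety_component:       # Step 1
        return "high-risk (Annex I pathway)"
    if not s.in_annex_iii_category:         # Step 2
        return "not high-risk (check Article 50 transparency duties)"
    if s.performs_profiling:                # Step 3
        return "high-risk (profiling override, no exemption possible)"
    if not s.meets_exception_condition:     # Step 4
        return "high-risk (no Article 6(3) condition met)"
    if s.poses_significant_risk:            # Step 5
        return "high-risk (significant residual risk)"
    return "not high-risk (document the assessment, Step 6)"
```

The ordering is deliberate: the profiling check at Step 3 runs before the exception-condition analysis, so a profiling system never reaches the exemption at all.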
Getting the Classification Right
Risk classification is not a box-ticking exercise. It requires genuine engagement with what your AI system does, who it affects, and how its outputs are used in practice. The most common mistake is underestimating the profiling override — assuming that because a human is "in the loop," the system is not high-risk. That assumption is wrong. The profiling clause does not care about human oversight. It cares about what the AI system itself does with personal data.
The second most common mistake is confusing "the system does not make the final decision" with "the system does not materially influence the decision." If your AI produces a risk score, a ranking, a recommendation, or a flag that a human decision-maker sees before making their choice, the system materially influences the outcome — even if the human could theoretically override it.
A structured AI use case register that systematically captures what each system does, what data it processes, whether it profiles, and what classification applies is the only reliable way to manage this at scale. Ad hoc spreadsheets and one-time legal opinions do not survive personnel changes, system updates, or regulatory audits.
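As a sketch of what one register row might capture, again with an assumed schema rather than any regulatory template:

```python
from dataclasses import dataclass

# Illustrative register entry capturing the fields named above.
@dataclass
class UseCaseRegisterEntry:
    system_name: str
    purpose: str                    # what the system does
    personal_data_processed: bool   # what data it touches
    performs_profiling: bool        # result of the profiling screen
    annex_iii_category: str | None  # None if outside Annex III
    classification: str             # e.g. "high-risk" / "not high-risk"
    assessment_ref: str             # pointer to the documented Article 6 analysis
```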