Using AI in your recruitment process without understanding its legal implications is no longer an option under the EU regulatory framework.
The EU has officially classified recruitment and selection as a 'High-Risk' use case under the new AI Act. Simultaneously, the Pay Transparency Directive is shifting the burden of proof. It's no longer enough to ask "Was our process fair?" Regulators are now demanding, "Can you prove it was fair?"
For HR leaders and Talent Acquisition teams, these overlapping regulations create a massive shift in how hiring must be documented and executed.
In this article, we’ll break down exactly what a "high-risk hiring AI" is, why it matters, and the operational changes your TA team needs to make right now to stay compliant.
Defining “High-Risk Hiring AI” Under the EU AI Act
In the eyes of the EU AI Act, "High-Risk" doesn't mean your AI is overly complex or inherently dangerous. The classification is based entirely on its intended purpose and its potential impact on people's fundamental rights and career opportunities.
The golden rule for TA teams lies in Annex III of the Act: the official list of predefined high-risk AI use cases. Any AI systems applied to the areas listed here are considered high-risk by default.
Annex III explicitly flags "Employment, workers management, and access to self-employment," highlighting:
Publishing targeted job advertisements.
Analyzing and filtering applications (CV parsing).
Evaluating and ranking candidates.
This means the vast majority of your automated hiring stack, from ad targeting and screening to shortlisting, likely falls into the high-risk category.
Why is Recruitment AI in the Crosshairs?
The regulation doesn't single out AI out of hostility. Regulators are simply recognizing that hiring decisions have a profound legal and socioeconomic impact on individuals' lives. The high-risk designation enforces strict obligations to mitigate three specific threats:
Inherent Bias: Recruitment historically accumulates bias (e.g., favoring certain schools or exhibiting gender bias via language). The AI Act demands that training data meets strict quality criteria and that bias detection and mitigation are built into your data governance.
Automation Bias: We all want to screen faster, but the EU is worried about "automation bias": the tendency for humans to blindly trust AI outputs. The law mandates human oversight. A human must be able to interpret the AI's results and have the authority to override or ignore them.
Lack of Traceability: High-risk systems must have automatic logging. You need to provide clear instructions, limitations, and intended purposes. In short: You must be able to reconstruct exactly how a hiring decision was made.
Actionable Steps: What Your TA Team Must Change Today
"High-risk" is a signal to upgrade your hiring operations. Because TA teams are the "deployers" of AI, the burden of operational compliance falls on your organization, not just your software vendors.
Here is what you need to initiate right away:
1. Audit your entire hiring stack
Map out exactly where AI intervenes. This isn't just your Applicant Tracking System (ATS). Include ad targeting algorithms, automated shortlisting, programmatic job posting, and internal mobility recommendations.
2. Lock in your criteria early
The Pay Transparency Directive requires that initial salary bands and hiring decisions be based on objective, gender-neutral criteria. You must define your evaluation criteria before the process starts and apply them consistently to every candidate. (This is where utilizing gender-neutral language in your job ads from day one becomes critical).
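To make "define your criteria before the process starts" concrete, here is a minimal sketch of what locking in criteria could look like in practice. All field names, skills, and weights are illustrative assumptions, not a legal schema or any vendor's format; the idea is simply that the criteria are written down and frozen before the first candidate is screened.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical example: objective, role-relevant criteria and a salary
# band defined BEFORE any candidate is evaluated. Field names and
# weights are illustrative only.
criteria = {
    "role": "Backend Engineer",
    "salary_band_eur": [55000, 70000],
    "requirements": [
        {"skill": "Python", "min_years": 3, "weight": 0.4},
        {"skill": "SQL", "min_years": 2, "weight": 0.3},
        {"skill": "API design", "min_years": 2, "weight": 0.3},
    ],
}

# Freeze the criteria: a content hash plus a timestamp lets you later
# demonstrate that every candidate was scored against the same,
# unchanged definition.
frozen = json.dumps(criteria, sort_keys=True)
lock_record = {
    "criteria_hash": hashlib.sha256(frozen.encode()).hexdigest(),
    "locked_at": datetime.now(timezone.utc).isoformat(),
}
```

The hash is the useful part: if the criteria are edited mid-process, the hash changes, and the inconsistency is visible in an audit.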
3. Stop relying on "after-the-fact" logs
Comments scattered across Slack, email, or post-it notes will fail an audit. The EU AI Act requires automated logging. Your system must capture the "Chain of Custody" of a decision as it happens.
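For illustration, a decision-log entry captured "as it happens" might look like the sketch below. This is a hypothetical shape, not an official EU AI Act schema or a specific product's format; the point is that timestamp, stage, AI output, and model version are recorded at the moment of the event, not reconstructed afterwards.

```python
import json
from datetime import datetime, timezone

# Hypothetical automatic log entry for one screening event.
# Field names and the model identifier are illustrative assumptions.
def log_screening_event(candidate_id, stage, ai_score, ai_recommendation):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "stage": stage,                           # e.g. "cv_screening"
        "ai_score": ai_score,
        "ai_recommendation": ai_recommendation,
        "model_version": "screening-model-v2.3",  # assumed identifier
    }
    # In production this would be written to an append-only store;
    # returning the serialized entry stands in for that here.
    return json.dumps(entry)
```

A chain of such entries, one per event, is what lets you reconstruct how a candidate moved through the funnel.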
4. Document your "Human Oversight"
Claiming "a human made the final call" isn't enough. You must structurally record who overrode the AI, at what stage, with what authority, and based on what rationale.
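A structured oversight record could be as simple as the hypothetical sketch below. The fields are illustrative assumptions; what matters is that who overrode the AI, with what authority, at what stage, and why are captured at decision time.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record of a human oversight decision. Field names are
# illustrative, not a mandated schema.
@dataclass
class OversightRecord:
    candidate_id: str
    reviewer: str            # who made the call
    reviewer_role: str       # their authority to override
    stage: str               # where in the funnel it happened
    ai_recommendation: str   # what the AI suggested
    final_decision: str      # what the human decided
    rationale: str           # why, in the reviewer's own words
    decided_at: str

record = OversightRecord(
    candidate_id="cand-1042",
    reviewer="j.doe",
    reviewer_role="Hiring Manager",
    stage="shortlisting",
    ai_recommendation="reject",
    final_decision="advance",
    rationale="Relevant open-source work not captured by CV parser.",
    decided_at=datetime.now(timezone.utc).isoformat(),
)
```

Note that the record explicitly preserves the disagreement between the AI recommendation and the final decision; that divergence, with its rationale, is exactly what an auditor will ask about.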
5. Be careful with customization
If you heavily customize a vendor's AI model, build your own scoring logic on top of an ATS, or rebrand a tool, the EU AI Act may legally reclassify your company from a "deployer" to a "provider." This dramatically increases your legal liability.
The Pay Transparency Directive: Making Hiring "Audit-Ready"
The Pay Transparency Directive boils down to one reality: Fairness is not something you claim; it is something you prove.
When combined with AI regulations, the burden of proof shifts heavily onto the employer. If a candidate challenges a hiring decision, authorities can demand evidence. You will be asked:
Why was this candidate ranked higher?
Were the criteria objective and gender-neutral?
If AI suggested a shortlist, what did the human recruiter review to finalize it?
BlindStairs: The Audit-Ready Intelligence Layer
For organizations that already use an ATS, the challenge is rarely a lack of tools.
The challenge is whether existing hiring decisions can be explained, reconstructed, and defended under new EU regulations.
BlindStairs is designed precisely for this scenario.
It does not replace your ATS or disrupt your workflows.
Instead, it integrates directly into your existing hiring infrastructure to ensure that:
evaluation criteria are defined upfront,
decisions are applied consistently,
and every hiring outcome can be traced and explained if reviewed.
In practice, BlindStairs acts as a compliance and explainability layer on top of the systems teams already rely on. We turn your existing hiring platform into audit-ready evidence without adding manual work.
What if you don’t have an ATS or are just starting to use AI in hiring?
Not all organizations are at the same level of hiring maturity.
Some teams are:
hiring without an ATS,
early in their adoption of AI,
or looking to improve efficiency without introducing compliance risk.
For these teams, Merified AI provides a plug-and-play, standalone solution that enables AI-powered recruitment while remaining fully aligned with EU regulatory requirements.
Merified AI supports organizations by:
helping define objective and role-relevant evaluation criteria before hiring begins,
enabling merit-based screening focused on skills and responsibilities,
and embedding explainability and traceability into the process from the start.
This allows teams to benefit from AI efficiency without creating opaque or non-defensible hiring decisions.
Compliance is not just about the tools. It’s about the process
Whether organizations use an ATS or not, the regulatory question remains the same:
Can we clearly show how a hiring decision was made, using objective and documented criteria?
The EU AI Act and the Pay Transparency Directive do not assess technology in isolation.
They assess whether hiring processes are structured, neutral, traceable, and defensible.
With the right infrastructure in place, compliance becomes a by-product of good process design and not an operational burden.
👉 Ready to future-proof your hiring process?
Book a demo or talk to our team today to see how BlindStairs can help you create a clear, compliant, and high-converting hiring process.