AI with Michal

Adverse impact

A statistically significant gap in selection rates between a protected group and others that results from a hiring procedure, test, or AI tool, even when no bias was intended.

Michal Juhas · Last reviewed May 3, 2026

What is adverse impact?

Adverse impact happens when a selection procedure produces outcomes that are statistically worse for a protected group than for others, even when no discriminatory intent exists. In AI-assisted hiring, that means a resume ranker, assessment platform, or screening algorithm whose pass rates differ across protected categories can create legal and reputational risk, regardless of what the vendor promised in the demo.

Illustration: Adverse impact as funnel outcomes compared across groups with a four-fifths style threshold line and a compliance checklist

In practice

  • After a TA ops team ran a 90-day analysis on their AI screen, they found female applicants passed at 68% versus 85% for male candidates, a 0.80 ratio sitting exactly at the four-fifths threshold, and nothing in the algorithm was labeled "gender."
  • A hiring manager hears "the AI flagged this resume" and approves the rejection without reading it. When fifty such decisions stack up across a quarter, the pattern can constitute adverse impact even if no individual decision was intentional.
  • Legal or DEIB partners may ask "did you run a disparate impact analysis on that tool before we signed the vendor contract?" and the answer reveals whether the team has operational controls or only policy slides.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: If your screen passes one group at 85% and another at 65%, that gap may be illegal even if you never thought about group membership. The rule measures outcomes, not intentions.
  • How you would use it: Count who passes each stage, split the counts by protected group, and check that every group's pass rate is at least 0.80 of the highest group's rate. Flag it before anyone else does (a minimal calculation sketch follows this list).
  • How to get started: Pull your last 90 days of funnel data. Ask IT or the vendor for a group-rate breakdown. If they cannot produce one, that is already a risk signal worth documenting.
  • When it is a good time: Before you go live with any new AI screening tool, then quarterly after that, and any time rejection rates spike on a particular req type.
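The arithmetic is spreadsheet-simple. Here is a minimal Python sketch, with counts invented to match the 85% versus 65% example above:

```python
# Minimal 4/5ths check: does every group pass at >= 80% of the best rate?
# Counts are invented to match the 85% vs 65% example above.
passes = {"group_a": 170, "group_b": 130}   # passed the AI screen
totals = {"group_a": 200, "group_b": 200}   # reached the AI screen

rates = {g: passes[g] / totals[g] for g in passes}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    verdict = "ok" if ratio >= 0.80 else "ADVERSE IMPACT INDICATED"
    print(f"{group}: pass rate {rate:.0%}, ratio {ratio:.2f} -> {verdict}")
```

Anything below 0.80 is worth escalating before a regulator or plaintiff runs the same math on your data.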

When you are running live reqs and tools

  • What it means for you: Every AI ranker, filter, or score that moves candidates in or out of the funnel is a selection procedure under Title VII and the EEOC Uniform Guidelines. Group-rate monitoring is not optional for any tool that touches a protected class directly or through proxies.
  • When it is a good time: Before vendor contract signature, before each new model version reaches production, and after any spike in rejection rate for a specific req type or demographic segment.
  • How to use it: Log candidate IDs, stage decisions, and model version together. Run 4/5ths ratios by protected group quarterly at minimum. Keep results with owner names and audit dates. Cross-link to structured output logging patterns if you are automating the export (a log-record sketch follows this list).
  • How to get started: Ask your current AI vendors for their most recent bias audit results and which protected classes they tested. If no audit exists, that is your first compliance conversation and a useful vendor differentiator.
  • What to watch for: Vendors who use proxy features (zip code, university name, employment gap) that correlate with protected class without disclosing it. Low-sample-size results that look clean but are statistically meaningless. Silent model updates that change scoring without notifying your compliance team.
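For the logging bullet above, here is one workable record shape, sketched as one JSON line per screening decision; the field and file names are illustrative, not a standard:

```python
# Illustrative decision-log record: one JSON line per screening decision,
# so a quarterly audit can join decisions back to the model that made them.
# Field and file names are made up; adapt them to your ATS.
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class ScreenDecision:
    candidate_id: str   # ATS candidate ID, not raw PII
    req_id: str         # requisition the decision belongs to
    stage: str          # e.g. "resume_screen", "assessment"
    decision: str       # "pass" or "reject"
    model_version: str  # exact tool/model version that scored the candidate
    decided_at: str     # ISO-8601 timestamp, UTC

def log_decision(record: ScreenDecision, path: str = "screen_decisions.jsonl") -> None:
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(ScreenDecision(
    candidate_id="cand-00123",
    req_id="req-0042",
    stage="resume_screen",
    decision="reject",
    model_version="ranker-2.3.1",
    decided_at=datetime.now(timezone.utc).isoformat(),
))
```

With model_version on every row, a quarterly audit can isolate which release introduced a rate gap, which is exactly the trail silent model updates erase.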

Where we talk about this

On AI with Michal live sessions, adverse impact comes up in the AI in recruiting track because it is one of the first objections hiring managers and legal partners raise when evaluating AI tools. We walk through the 4/5ths calculation, a vendor audit question checklist, and the memo format that gets legal sign-off. If you want the full room conversation with real funnel data, start at Workshops and bring your vendor contracts.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data.

YouTube

  • Search "adverse impact analysis 4/5ths rule" on YouTube for spreadsheet walkthroughs from HR practitioners and employment law educators. Several law firm channels (Seyfarth, Ogletree) post accessible explainers when new state rules pass.
  • The EEOC official YouTube channel publishes public-sector guidance useful for building shared vocabulary before vendor calls.
  • Search "NYC Local Law 144 AI hiring" for compliance-focused updates from employment lawyers walking through the audit requirements and candidate notification rules.

Reddit

  • r/humanresources threads on "adverse impact" and "AI screening" mix practitioner experience with compliance questions; the comment sections surface the real objections legal teams raise.
  • r/recruiting has recurring threads on AI resume screeners where bias risk comes up organically alongside vendor comparisons.
  • r/legaladvice covers candidate-side adverse impact questions that train recruiter empathy for what applicants actually experience.

Quora

  • Searching "adverse impact hiring AI" on Quora surfaces a mix of HR practitioners, employment lawyers, and researchers explaining the gap between statistical and legal significance (read critically; quality varies).

Adverse impact versus adverse treatment

|                 | Adverse impact                                  | Adverse treatment                               |
| --------------- | ----------------------------------------------- | ----------------------------------------------- |
| Also called     | Disparate impact                                | Disparate treatment                             |
| Intent required | No                                              | Usually yes                                     |
| Legal trigger   | Statistical outcome gap (below 4/5ths)          | Evidence of intentional bias                    |
| AI example      | Resume ranker passes one group at a lower rate  | Recruiter explicitly filters by a protected trait |
| Primary fix     | Audit and validate or replace the tool          | Policy, discipline, training                    |

Frequently asked questions

What is adverse impact in plain language for a recruiter?
Adverse impact means your selection process produces outcomes that are noticeably worse for a protected group (race, gender, age, disability, national origin) compared with another group, regardless of whether you intended any discrimination. The standard US legal trigger is the 4/5ths rule from the EEOC Uniform Guidelines: if the pass rate for the lower-passing group is below 80% of the rate for the highest-passing group, adverse impact is indicated. It applies to any selection tool: written tests, phone screens, AI resume rankers, video assessments, structured scoring. Intent is irrelevant; only the outcome pattern matters. Auditors and plaintiffs look at the same numbers your ATS holds today.
How can AI tools increase adverse impact risk in hiring?
AI models trained on historical hire or promotion data inherit any patterns those decisions contained. A ranker trained on years of approved candidates can score equivalent profiles differently by name, address, school, or language style without flagging anything visible to the recruiter. Resume parsers silently drop formatting common in some regions. Video-assessment algorithms calibrated on small panels can penalize speech cadence or eye-contact norms that differ by culture. The problem compounds at scale: one biased scoring function fires thousands of times before anyone runs a group-rate comparison. Audit your inputs, sample outputs by protected group, and log model version with every decision so you can trace problems back.
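One common heuristic for surfacing proxy features, sketched under the assumption that you can join the screen's input features to self-reported EEO data (every file and column name here is hypothetical, and scikit-learn is just one way to run it): if a simple classifier can predict group membership from the screen's inputs, those inputs carry proxy signal.

```python
# Proxy-signal heuristic: if a simple model can predict protected group
# from the screen's input features, those features encode group membership.
# All file and column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("screen_inputs_with_eeo.csv")                    # hypothetical export
features = pd.get_dummies(df[["zip3", "school", "gap_months"]])   # screen inputs
target = (df["eeo_gender"] == "female").astype(int)               # one group vs rest

auc = cross_val_score(LogisticRegression(max_iter=1000),
                      features, target, cv=5, scoring="roc_auc").mean()
print(f"group-predictability AUC: {auc:.2f}")
# AUC near 0.5 suggests little proxy signal; well above it (say 0.65+)
# means the inputs leak group membership and deserve a closer audit.
```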
What is the 4/5ths rule and how do I apply it to an AI screen?
Take the group with the highest pass rate through your screen, then check whether every other protected group reaches at least 80% of that rate. If female candidates pass an AI phone-screen at 60% and male candidates at 80%, that ratio is 0.75, below the 0.80 threshold, and adverse impact is indicated. Apply the same math to each stage: resume filter, assessment, interview invite, offer. Collect demographic data (self-reported EEO fields are cleanest), set a minimum sample size per group before drawing conclusions (at least 40 total applicants per group is a common practical floor), and rerun quarterly so drift appears before litigation does. See human-in-the-loop for where human review fits.
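A sketch of that stage-by-stage check with the sample-size guard, using invented counts:

```python
# Stage-by-stage 4/5ths check with a minimum-sample guard.
# All counts are invented; swap in your own per-stage, per-group tallies.
MIN_N = 40  # practical floor before a ratio is worth interpreting

funnel = {
    "resume_filter": {"men": (160, 200), "women": (120, 200)},  # (passed, total)
    "phone_screen":  {"men": (80, 100),  "women": (18, 30)},    # women below floor
}

for stage, groups in funnel.items():
    skipped = [g for g, (_, t) in groups.items() if t < MIN_N]
    if skipped:
        print(f"{stage}: skipping {skipped}, fewer than {MIN_N} applicants")
    rates = {g: p / t for g, (p, t) in groups.items() if t >= MIN_N}
    if not rates:
        continue
    best = max(rates.values())
    for g, r in rates.items():
        ratio = r / best
        status = "ok" if ratio >= 0.80 else "FLAG"
        print(f"{stage} / {g}: rate {r:.0%}, ratio {ratio:.2f} [{status}]")
```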
What does a practical adverse impact audit look like?
Pick one funnel stage, export the decision log with candidate IDs, collect or infer protected-class indicators (self-reported EEO fields are cleanest), and compute pass rates by group. Flag ratios below 80%. Then ask two questions: is the selection criterion job-related and consistent with business necessity, and was the least discriminatory alternative considered? Document both answers with owner names and dates. If you cannot answer either question, your defense is weak before any legal challenge even starts. Automate the calculation in a shared dashboard so recruiters see the numbers before a quarterly review, not only when HR or legal asks for them after a complaint lands.
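Here is a minimal pandas sketch of that export-to-flags step, assuming a one-stage CSV decision log with a self-reported EEO column (file and column names are hypothetical):

```python
# One-stage adverse impact audit from an exported decision log.
# File and column names are hypothetical; adapt to your ATS export.
import pandas as pd

log = pd.read_csv("resume_screen_decisions.csv")  # candidate_id, eeo_group, decision

by_group = log.groupby("eeo_group")["decision"].agg(
    total="count",
    passed=lambda s: (s == "pass").sum(),
)
by_group["pass_rate"] = by_group["passed"] / by_group["total"]
by_group["ratio"] = by_group["pass_rate"] / by_group["pass_rate"].max()
by_group["flag"] = by_group["ratio"] < 0.80

print(by_group.round(2))
# Keep the output with an owner name and audit date, and rerun quarterly
# so drift shows up in your dashboard before it shows up in a complaint.
```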
What legal requirements should teams know about AI hiring audits?
New York City Local Law 144 (effective July 2023) requires employers using automated employment decision tools to conduct annual bias audits by an independent auditor, post results publicly, and notify candidates before use. Several US states (Illinois, California, Maryland) have similar or pending rules. EEOC technical guidance (2023) signals that AI tools fall under Title VII and the ADA when they screen by proxy (zip code for race, typing style for disability). In the EU, GDPR Article 22 adds the right to human review of purely automated decisions. Build your audit program before a complaint arrives; retrofitting after is far more expensive than a quarterly spreadsheet run.
How do AI in recruiting workshops cover adverse impact?
Sessions cover the concept early in the legal and ethics module so participants can name the risk before they open any tool. We walk through a simplified 4/5ths calculation on anonymized funnel data, discuss which AI vendor contracts include bias-audit results, and practice writing the one-paragraph risk memo that compliance needs before piloting a new screening tool. The aim is operational literacy, not legal certification: recruiters who can read a pass-rate table ask better vendor questions and push back on "the algorithm said so" in debrief. Pair this with human-in-the-loop practices so review points are designed into your process before contracts are signed.
Where can teams build deeper knowledge on bias and fairness in AI hiring?
Start with the EEOC Uniform Guidelines on Employee Selection Procedures (1978, still authoritative) and the EEOC's 2023 AI technical guidance, both free public documents. SHRM and the National Center for Civil and Human Rights publish plain-language breakdowns for HR practitioners. For technical depth, the AI Now Institute and Algorithmic Justice League track emerging audit methods and publish accessible reports. LinkedIn Legal, IAPP, and HR Dive run useful explainers when city and state rules update. Pair reading with live practice: bring a real funnel export to a workshop session, anonymize it, and run the 4/5ths math with peers. Shared calculation is faster than solo reading for building team norms around risk tolerance.
