AI with Michal

Artificial intelligence in recruitment and selection

The application of AI tools across both hiring phases: using language models and automation to source candidates and write job briefs (recruitment), and using scoring engines, transcript analysis, and decision support to compare and advance or reject applicants (selection).

Michal Juhas · Last reviewed May 10, 2026

What is artificial intelligence in recruitment and selection?

Artificial intelligence in recruitment and selection applies AI tools across two connected hiring phases. The recruitment phase uses language models and automation to source candidates, write job briefs, and fill a pipeline. The selection phase uses scoring engines, transcript analysis, and decision-support tools to compare applicants, score interviews, and inform advance or reject decisions.

The distinction matters because the two phases carry different risk profiles. Recruitment AI speeds up high-volume tasks at the top of the funnel where mistakes are recoverable. Selection AI assists consequential decisions where a rejected candidate can ask why and, in many jurisdictions, has the legal right to a human-reviewed explanation. Human-in-the-loop gates, audit logs, and AI bias audit checks are not optional extras when AI touches selection.

Illustration: artificial intelligence in recruitment and selection showing a recruitment phase spanning sourcing and outreach on the left, a transition node, and a selection phase spanning interview scoring, ranking, and a human review gate before the advance or reject decision on the right

In practice

  • A TA team at a 500-person scale-up uses an AI tool to summarise interview transcripts into scorecard notes after each call. The recruiter reads the summary alongside the raw transcript, corrects any gaps, and approves the entry before it becomes the official ATS record. That is AI in selection used responsibly.
  • A vendor demo claims the platform "automates selection." In the debrief it turns out the tool ranks CVs and flags the top 20 for recruiter review. That is AI-assisted screening with a human advance gate, not fully automated selection, and the distinction matters for GDPR compliance conversations with legal.
  • An HRBP is asked by a rejected candidate why they were not shortlisted. The answer "the model scored you lower" is not compliant under GDPR Article 22. The correct answer requires the human reviewer's documented rationale and the criteria applied.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA leads, and HRBPs who need a shared definition before buying a tool, writing a policy, or running a pilot. Skim the first section for a fast shared picture. Use the second when you are deciding which phase to start with and what review gates to wire in.

Plain-language summary

  • What it means for you: Recruitment AI drafts and finds; selection AI evaluates and ranks. The second phase is where candidates get rejected, so it carries more legal and ethical weight than the first.
  • How you would use it: Start with a recruitment task (sourcing notes, job description first draft) before adding any selection-phase AI (scoring, ranking). The recruitment phase is a safer place to build confidence in AI outputs before you rely on them for consequential decisions.
  • How to get started: Pick one high-volume selection task with clear success criteria: interview transcript summaries added to scorecards, or CV ranking compared to a written job criteria card. Run it alongside your manual process for two weeks before removing the manual step.
  • When it is a good time: After your scorecard is written, agreed, and in use; after a vendor DPA is signed; and after at least one recruiter and one hiring manager have calibrated the AI output together on a real role.

When you are running live reqs and tools

  • What it means for you: Selection-phase AI produces output that directly affects whether a candidate advances. That means model version, input data, and reviewer name need to be logged per decision, not just per tool deployment.
  • When it is a good time: After recruitment-phase AI is stable, after an AI bias audit has been run on selection tools you plan to use at scale, and after legal has reviewed the vendor's conformity documentation under the EU AI Act if you operate in the EU.
  • How to use it: Keep AI outputs as a starting point, not a final answer. A transcript summary is a draft; a fit score is a hypothesis. Route every AI selection output through a named human reviewer before stage advance or reject actions are logged in the ATS. Log the review decision and date alongside the AI output.
  • How to get started: Map your selection process in five steps: CV review, phone screen, interview, debrief, offer. Identify which step has the clearest criteria and the most volume. Add AI there first. Use structured output when writing scores and summaries back to ATS fields so the data is parseable later.
  • What to watch for: Automated ranking models that assign a numeric score without disclosing the features used. Adverse impact patterns that stay invisible until someone samples declined profiles by demographic group. Transcript analysis that misses candidates who pause to think, speak a second language, or use different sentence structures than the training data assumed.
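The structured-output advice above can be sketched as a small validation step before an AI-generated entry is written back to ATS fields. This is a minimal sketch, not any specific ATS API: the field names, the `REQUIRED_FIELDS` schema, and `validate_scorecard` are all illustrative assumptions.

```python
# Minimal sketch: check an AI-generated scorecard entry before writing it
# to ATS fields, so the data stays parseable and every decision carries
# its model version and named reviewer. Field names are illustrative.
REQUIRED_FIELDS = {
    "candidate_id": str,
    "stage": str,           # e.g. "phone_screen"
    "ai_summary": str,      # draft summary from the transcript model
    "ai_score": float,      # a fit score is a hypothesis, not a verdict
    "model_version": str,   # logged per decision, not per tool deployment
    "reviewer": str,        # named human who approved or corrected the entry
    "reviewed_at": str,     # ISO 8601 date of the human review
}

def validate_scorecard(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is complete."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in entry:
            problems.append(f"missing field: {field}")
        elif not isinstance(entry[field], expected_type):
            problems.append(f"wrong type for {field}: {type(entry[field]).__name__}")
    return problems
```

An entry with no reviewer name fails validation, which is the point: the record is rejected before it becomes the official ATS note, not discovered missing during an audit.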

Where we talk about this

On AI with Michal live sessions, the AI in recruiting track covers both phases: recruitment tools for sourcing and outreach, and the selection-phase decisions that arrive when you move from drafting to deciding. The sourcing automation track goes deeper on workflow automation and ATS API patterns. Start at Workshops and bring your current tool stack, a sample role, and your biggest compliance question so the conversation is grounded in your real situation.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data to a new tool.

YouTube

  • Search "AI in recruitment and selection" on YouTube filtered to the past year for practitioner comparisons and vendor demos. Prefer videos that show what happens when the AI output is wrong, not only the happy path.
  • SHRM (Society for Human Resource Management) publishes HR tech sessions and compliance-focused content covering AI in hiring, including selection-phase obligations under US and EU law.
  • Search "AI bias in hiring" for academic and advocacy perspectives on what selection-phase models get wrong and how audits surface group-level gaps that individual reviews miss.

Reddit

  • r/recruiting has practitioner threads on what AI in selection looks like day to day: scoring tools that worked, ones that broke, and how hiring managers react when they see AI-ranked shortlists.
  • r/humanresources surfaces HR leader and HRBP perspectives on selection compliance and what governance obligations arrive the moment a scoring tool enters the process.

Quora

  • Search "AI in recruitment and selection" or "automated candidate screening compliance" on Quora for practitioner and legal perspectives. Vendor-authored answers often skip the GDPR and adverse impact sections, so read critically.

Recruitment phase versus selection phase: where AI changes the picture

| Aspect | Recruitment phase | Selection phase |
| --- | --- | --- |
| Main AI tasks | Sourcing, outreach drafting, JD writing | Interview scoring, ranking, assessment analysis |
| Consequence of error | Candidate not found or contacted | Candidate wrongly advanced or rejected |
| Legal framework risk | GDPR data processing, outreach consent | GDPR Article 22, EU AI Act high risk, bias law |
| Audit depth needed | Tool version, prompt, human send review | Model version, input data, output, reviewer name, date |
| Bias check timing | Before outreach at scale | Before deployment and quarterly after |

Frequently asked questions

What does "selection" mean here, and how is it different from recruitment?
Recruitment and selection are distinct hiring phases. Recruitment covers the front end: writing job descriptions, sourcing candidates, generating outreach, and building a pipeline. Selection covers the evaluation end: comparing applicants against criteria, assessing skills, conducting structured interviews, scoring responses, and deciding who advances or is rejected. AI in recruitment speeds up high-volume, repetitive tasks. AI in selection assists consequential decisions where rejected candidates can ask for an explanation or lodge a legal challenge. Scoring engines, interview transcript analysis, automated scorecard entries, and ranking models all sit in the selection phase. Audit trails and human-in-the-loop gates matter more here because each output links directly to an individual employment decision.
What AI tools assist the selection phase specifically?
Selection-phase AI tools fall into four groups. First, assessment scoring: cognitive tests, situational judgment exercises, and work sample evaluations scored against a benchmark automatically. Second, interview assistance: real-time transcription, AI-generated scorecard summaries from transcripts, and debrief synthesis. Third, ranking engines: models that assign a fit score to each candidate against a job criteria card. Fourth, structured comparison tools that surface side-by-side candidate signals for the hiring team. Ranking engines and automated assessment scoring carry the highest legal risk under GDPR Article 22 and the EU AI Act because they directly influence who advances or is rejected. Read explainable AI in hiring for what disclosure obligations look like in practice.
What legal risks concentrate in the selection phase?
Three frameworks apply most often. GDPR Article 22 restricts solely automated decisions with significant employment effects unless an exception applies, such as explicit candidate consent or contractual necessity. Any selection step where AI output advances or rejects a candidate without a human reviewing the evidence is likely in scope. The EU AI Act classes hiring and worker management AI as high risk, requiring conformity assessments, transparency to candidates, and human oversight before deployment. NYC Local Law 144 mandates annual bias audits for automated employment decision tools used in New York City. Log the model version, input data, output, and reviewer name for every AI-assisted selection decision so you can answer a candidate or regulator within the statutory window.
How does AI affect fairness and adverse impact in selection?
Selection decisions accumulate bias faster than sourcing steps because each advance or reject moves a real candidate closer to or further from a hire. An AI scoring engine that produces systematically different scores for one demographic group multiplies that gap across every selection gate. Run an AI bias audit against pass-rate data by gender, race, and age before scaling any selection-phase AI tool. Bias in selection AI can come from training data that reflected past discriminatory decisions, benchmark roles that were historically homogeneous, or features that correlate with protected characteristics. Testing the overall accuracy of a model is not enough: you need pass rates compared across subgroups before the tool touches a live candidate population.
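The subgroup pass-rate comparison described above can be sketched in a few lines. Assumptions to note: the four-fifths (80%) ratio used here is a common screening heuristic, not a legal threshold in every jurisdiction, and the group labels and counts in the usage example are invented.

```python
def pass_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (passed, total); returns the pass rate per group."""
    return {group: passed / total for group, (passed, total) in outcomes.items()}

def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's pass rate to the highest group's pass rate.

    A ratio below 0.8 (the four-fifths rule) is a common flag that the
    selection step needs adverse impact review before scaling."""
    rates = pass_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Invented numbers: 40/100 of group_a pass the AI screen, 24/100 of group_b.
# group_b's ratio is 0.6, well under 0.8, so the tool should not go live
# against this population without investigation.
ratios = adverse_impact_ratios({"group_a": (40, 100), "group_b": (24, 100)})
```

This check only surfaces group-level gaps; it says nothing about why the gap exists, which is what the fuller bias audit investigates.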
When does a human-in-the-loop gate matter most in AI-assisted selection?
Human review gates matter most at three selection moments: advancing a candidate to a live interview from an automated screening score, using an AI-generated scorecard entry to reject someone, and ranking candidates before an offer decision. At each point a model output can be plausible but wrong: a transcript summary may omit a key answer, and a fit score may weight proxy features that correlate with protected characteristics. The human step is not a rubber stamp. It means reading the raw evidence, not only the AI summary, and asking whether the output matches the actual role criteria and the candidate you are reviewing. Document who reviewed, what they read, and when the decision was made.
How should TA teams document AI-assisted selection decisions?
Document four things for every AI-assisted selection step: the model version and prompt used, the candidate input data the model processed, the output it produced (score, summary, or ranking), and the name and date of the human reviewer who confirmed or overrode it. Store this alongside ATS stage notes, not in a separate tool that gets cleaned up on a vendor schedule. If a candidate asks how a decision was made, produce this record within the statutory window (one month under GDPR). Link the model version to vendor release notes so you can show what changed if a bias audit flags a drift. This log is also your internal defence if a rejection dispute reaches legal.
Where can hiring teams learn to apply AI to selection responsibly?
Selection is where AI mistakes affect real candidates directly, so peer learning beats solo experimentation. AI with Michal workshops cover selection-phase scenarios: positioning AI scoring relative to structured interviews, deciding when to disable automated ranking, and what a compliant audit log looks like when a regulator asks. The Starting with AI: the foundations in recruiting course builds human-in-the-loop habits before you add any scoring layer. Membership office hours work well for jurisdiction-specific questions about GDPR Article 22 or pre-employment assessment bias audits. Bring your current tool stack, a sample role, and the question your legal team asked so feedback is grounded in your actual risk profile.
