AI with Michal

Recruitment and selection analytics

The practice of measuring both pipeline health (sourcing, applications, stage conversion) and selection quality (assessment validity, interview consistency, offer outcomes) so TA teams can trace whether the people they chose actually performed as predicted.

Michal Juhas · Last reviewed May 15, 2026

What is recruitment and selection analytics?

Recruitment and selection analytics covers two distinct but connected measurement practices. The recruitment half tracks the funnel: where candidates come from, how many move through each stage, how long things take, and where the pipeline breaks. The selection half tracks the decision quality: whether the people chosen through interviews and assessments turn out to perform well once hired.

Most TA teams have the first half inside their ATS. The second half requires linking ATS outcomes to HRIS performance data, which is why selection analytics stays underdeveloped even in teams that run mature pipeline dashboards. The gap matters because it is entirely possible to have a fast, efficient recruiting funnel that consistently selects the wrong people.

[Illustration: recruitment and selection analytics as two connected measurement zones, a sourcing and pipeline funnel on the left and a selection quality panel on the right, joined by a data link showing how ATS scores and HRIS performance ratings feed a cohort review dashboard]

In practice

  • A TA ops lead runs a quarterly cohort review: she pulls interview scores from the ATS for everyone hired in Q1, joins them to 90-day manager ratings from the HRIS, and finds that candidates scored highest by one specific interviewer have consistently lower manager satisfaction scores. The panel calibration conversation that follows changes how that interviewer scores technical competency. (A code sketch of this check follows this list.)
  • A recruiter notices that offer acceptance rate for a specific hiring manager is 54 percent compared to a team average of 78 percent. That is a selection analytics finding, not a sourcing one. The conversation that follows is about closing skills and compensation framing, not pipeline volume.
  • "We run analytics" often means the team has a time-to-fill dashboard and a source-of-hire chart. Selection analytics starts when someone asks whether the people at the top of the interview scoreboard are the ones who succeeded six months later.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA leaders, and HR partners who need the same vocabulary in debrief conversations, tool evaluations, and reporting to People leadership. Skim the first section for a fast shared picture. Use the second when you are deciding what data to capture, what to join, and what to bring to a leadership review.

Plain-language summary

  • What it means for you: Recruitment analytics measures whether your funnel is working. Selection analytics measures whether your decisions are working. You need both to know if your hiring process is actually good or just fast.
  • How you would use it: Pick one role family where you have structured interview scores stored in the ATS. Pull 90-day manager ratings from the HRIS for everyone hired in that role over the last six months. Ask whether the interview scores predicted the ratings.
  • How to get started: Before you build a dashboard, check two things: are interview scores actually stored in your ATS (not just text notes), and does your HRIS have a 90-day review field that is populated consistently? If either is missing, fix the data source before building the analysis. (A quick fill-rate check is sketched after this list.)
  • When it is a good time: When leadership asks whether your hiring process is improving quality of hire, and you want an answer grounded in data rather than anecdote.
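
The two data-readiness checks above take only a few lines to automate. A minimal sketch, assuming hypothetical export files and field names (interview_score in the ATS export, review_90day in the HRIS export):

```python
# Data-readiness check: how often are the two fields the analysis depends
# on actually populated? File and column names are hypothetical placeholders.
import pandas as pd

ats = pd.read_csv("ats_export.csv")
hris = pd.read_csv("hris_export.csv")

score_fill = ats["interview_score"].notna().mean()
review_fill = hris["review_90day"].notna().mean()

print(f"ATS interview_score populated: {score_fill:.0%}")
print(f"HRIS review_90day populated:   {review_fill:.0%}")
# Fill rates well below 100% mean fixing the source comes before the dashboard.
```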

When you are running live reqs and tools

  • What it means for you: Selection analytics requires a data join your current tooling may not support out of the box. Plan which system holds interview scores, which holds post-hire performance ratings, and how candidate IDs map between them before you commit to a reporting cadence you cannot deliver.
  • When it is a good time: After you have structured scoring in place for at least one role family and a quarterly cohort large enough to show patterns (typically at least 20 hires).
  • How to use it: Start with inter-rater reliability as the first selection metric: pull the last three months of structured panel scores for the same competency and check whether two panelists ever scored the same candidate more than one full level apart. That single check tells you whether your selection data is worth analysing further.
  • How to get started: Document the data join you need: ATS candidate ID, interview score by competency, offer accepted or declined, HRIS employee ID, 90-day manager rating. Then find out who owns access to both systems and whether a join is technically possible without violating data segmentation rules; a join sketch follows this list. See recruiting webhooks for how real-time data triggers between ATS and HRIS can keep the join current without manual exports.
  • What to watch for: Small cohort sizes producing misleading correlations, ATS score fields that are filled in for some roles and blank for others, and HRIS performance data that uses a different employee identifier than the ATS candidate ID. Each of these silently breaks the analysis while the charts still render. See funnel drop-off analysis for how to diagnose what is breaking upstream.
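
Once the five fields are documented, the join itself is mechanical; the hard part is the ID mapping between systems. A minimal sketch, assuming a hypothetical hand-built mapping table and placeholder column names:

```python
# ATS-to-HRIS join with explicit ID mapping. The mapping table is often
# built once by hand because the two systems rarely share an identifier.
# All file and column names are hypothetical placeholders.
import pandas as pd

ats = pd.read_csv("ats_outcomes.csv")    # candidate_id, competency, score, offer_status
hris = pd.read_csv("hris_ratings.csv")   # employee_id, rating_90day
id_map = pd.read_csv("id_mapping.csv")   # candidate_id, employee_id

joined = (
    ats.merge(id_map, on="candidate_id", how="left")
       .merge(hris, on="employee_id", how="left")
)

# Surface the silent failure mode: rows that never matched an HRIS record
# still render on a chart, so count them before analysing anything.
unmatched = joined["rating_90day"].isna().sum()
print(f"{unmatched} of {len(joined)} rows have no 90-day rating after the join")
```

The left joins keep every ATS row on purpose, so the unmatched count stays visible instead of being silently dropped.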

Where we talk about this

AI with Michal Workshops cover recruitment and selection analytics in the AI in recruiting track: which ATS fields to structure for downstream analysis, how to use AI-assisted interview summaries to produce scoreable data from unstructured notes, and what the cohort review conversation with a People leader actually sounds like. The sourcing automation track covers the data infrastructure that makes analytics trustworthy rather than decorative.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data into a new workflow.

YouTube

  • Talent Analytics and Predictive Hiring (YouTube search) surfaces AIHR and LinkedIn Talent Solutions content on linking selection outcomes to post-hire performance, useful for TA teams starting the quality-of-hire conversation with leadership.
  • How to Measure Quality of Hire covers practical approaches to the ATS-to-HRIS data join and which metrics leaders actually track in the first year.
  • Structured Interviewing and Selection Science walks through the industrial-organizational psychology side of selection validity in plain language, useful context before presenting inter-rater reliability findings to a skeptical hiring manager.

Recruitment analytics versus selection analytics

Dimension | Recruitment analytics | Selection analytics
Primary question | Is the funnel working? | Are the decisions working?
Data source | ATS pipeline and stage data | ATS scores joined to HRIS performance
Key metrics | Time-to-fill, stage conversion, source-of-hire | Inter-rater reliability, assessment-to-hire correlation, quality of hire
Review cadence | Weekly or monthly | Quarterly or semi-annual cohort review
Who owns it | TA ops or recruiter | TA leader plus People analytics or HRBP
Common blocker | Blank source fields in ATS | No structured interview scores or no HRIS join

Frequently asked questions

What is the difference between recruitment analytics and selection analytics?
Recruitment analytics covers the funnel from sourcing to offer: how many candidates entered, where they dropped off, how long each stage took, which channel produced the most interviews. Selection analytics covers the decision layer: did the interview scores predict 90-day retention, did the assessment results correlate with hiring manager satisfaction six months later, were candidates scored consistently across panels? Most teams have decent recruitment analytics inside their ATS. Selection analytics requires linking hiring data to performance data in the HRIS, which is the gap that makes the full practice harder. See talent acquisition metrics for the framework underneath both.
What metrics belong in selection analytics specifically?
Selection analytics tracks five things that recruitment dashboards rarely surface: inter-rater reliability (do two interviewers score the same candidate similarly?), assessment-to-hire correlation (do high assessment scores predict manager ratings at 90 days?), offer acceptance rate by hiring manager (a proxy for how compelling the closing conversation was), interview-to-offer ratio segmented by panel type (structured versus unstructured), and offer decline reason codes. Most of these require a data join between ATS outcomes and HRIS performance fields, which means someone has to own both systems or extract from both into a shared analytics layer. See interview-to-offer ratio for the funnel conversion piece.
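Of those five, offer acceptance rate by hiring manager is the easiest to compute first. A minimal sketch, assuming a hypothetical offers export with one row per offer extended:
```python
# Offer acceptance rate by hiring manager versus the team average.
# File and column names are hypothetical placeholders.
import pandas as pd

offers = pd.read_csv("offers.csv")   # hiring_manager, accepted (boolean)

rates = (
    offers.groupby("hiring_manager")["accepted"]
          .agg(acceptance_rate="mean", offers_extended="count")
)
print(rates.sort_values("acceptance_rate"))
print(f"Team average: {offers['accepted'].mean():.0%}")
```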
How do teams measure whether their selection process predicts job performance?
The practical version of predictive validity is a quarterly cohort review: take everyone hired in a given quarter, pull their interview scores from the ATS, pull their 90-day manager ratings from the HRIS, and run a simple rank comparison in a spreadsheet. Did the candidates scored in the top third by interviewers end up with stronger manager ratings six months later? If not, your panel is not selecting on the right signals. This analysis requires that interview scores are structured, stored in the ATS, and accessible alongside HRIS performance data. Most teams discover they have one but not the other. See scorecard for how to build the structured scoring layer that makes this possible.
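A minimal sketch of that rank comparison, assuming the joined cohort already exists as a single table with hypothetical column names (interview_score, manager_rating); note that pd.qcut can fail when scores are heavily tied, in which case a spreadsheet rank works just as well for a first pass.
```python
# Quarterly cohort review: do interview-score terciles line up with
# 90-day manager ratings? Column names are hypothetical placeholders.
import pandas as pd

cohort = pd.read_csv("q1_cohort_joined.csv")  # interview_score, manager_rating

cohort["score_tercile"] = pd.qcut(
    cohort["interview_score"], q=3, labels=["bottom", "middle", "top"]
)
summary = cohort.groupby("score_tercile", observed=True)["manager_rating"].agg(["mean", "count"])
print(summary)
# If the top tercile's mean rating is not clearly higher, the panel is not
# selecting on signals that predict performance.
```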
What does inter-rater reliability mean and why does it matter for selection quality?
Inter-rater reliability measures whether two interviewers assessing the same competency in the same candidate reach similar scores. When it is low, hiring decisions reflect interviewer personality more than candidate skill. The practical test: pull the last 20 cases where two panelists scored the same competency, calculate the average gap, and flag anything above one full rating level as a calibration problem. The fix is not more interviews. It is a debrief where panelists discuss scoring criteria before they compare scores, ahead of the next round. Teams that run this audit in AI with Michal workshops typically find that unstructured interviews produce almost no inter-rater agreement on technical competencies, which changes how they design the selection process.
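That audit is a group-by over structured panel scores. A minimal sketch, assuming a hypothetical export with one row per (candidate, competency, panelist) score:
```python
# Inter-rater gap audit: for every candidate-competency pair scored by two
# or more panelists, how far apart were the scores?
# File and column names are hypothetical placeholders.
import pandas as pd

panel = pd.read_csv("panel_scores.csv")   # candidate_id, competency, panelist, score

gaps = (
    panel.groupby(["candidate_id", "competency"])["score"]
         .agg(n_raters="count", score_gap=lambda s: s.max() - s.min())
         .reset_index()
)
gaps = gaps[gaps["n_raters"] >= 2]        # keep only multi-rater pairs

flagged = gaps[gaps["score_gap"] > 1.0]   # more than one full rating level apart
print(f"Average gap: {gaps['score_gap'].mean():.2f}")
print(f"{len(flagged)} of {len(gaps)} candidate-competency pairs exceed one full level")
```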
How do GDPR and privacy rules affect recruitment and selection analytics?
Recruitment and selection analytics sits in a legally sensitive zone because it involves processing candidate data for purposes beyond the original application. Under GDPR, you need a lawful basis for retention, a documented purpose for the analytics use, and limits on how long candidate scores are kept after a decision. Linking assessment scores to post-hire performance data requires additional care: you need documented consent or legitimate interest that is reviewable. The practical risk is not a regulator audit on your first cohort review, but normalising loose data practices that compound when the dataset grows. Ask your DPA advisor specifically about retention periods for structured interview notes, because those are the records most teams keep indefinitely without a policy.
What failure modes show up when teams try to close the recruitment-to-performance loop?
Four breakdowns appear consistently. First, ATS interview scores are blank because interviewers never completed structured ratings and only wrote text notes. Second, HRIS performance data uses different employee IDs than the ATS, making the join fail or require manual matching. Third, the cohort is too small for any pattern to be statistically meaningful, so the analysis produces confident-looking charts from twelve data points. Fourth, the team presents findings to leadership and gets asked about causation when they only have correlation. Fix: start with one role family, one quarter, and structured scoring already in place before you attempt the cohort review. The analysis is only as good as the inputs, and most teams overestimate how clean their interview data actually is.
Where can TA teams learn to build a recruitment and selection analytics practice?
Join a workshop where TA teams work through the data join problem with real ATS and HRIS field names on the table, debate which selection metrics leadership will fund the work to track, and audit whether their current interview scoring produces data worth analysing. The Starting with AI: the foundations in recruiting course covers how to structure ATS data for downstream analytics and how AI-assisted summaries of interview notes can produce structured fields when scorecards were not completed. Come with a specific hiring cohort in mind, your ATS name, and your HRIS field for 90-day performance rating. Those three inputs make the session immediately practical. Membership office hours help pressure-test an analytics configuration before you present to a People leader.

← Back to AI glossary in practice