AI with Michal

Time in stage reporting

Time in stage reporting measures the average number of days candidates spend in each hiring pipeline stage, giving TA teams a stage-by-stage bottleneck map instead of a single time-to-fill number.

Michal Juhas · Last reviewed May 8, 2026

What is time in stage reporting?

Time in stage reporting measures how long candidates spend in each distinct step of the hiring pipeline, from recruiter screen through debrief to offer. Where time-to-fill gives you one number for the whole process, time in stage gives you a row for each decision point so you can name the bottleneck instead of debating whether hiring is slow.

The data lives inside your ATS. Every time a recruiter moves a candidate from one stage to the next, the system logs a timestamp. Most ATS platforms can export or display the difference between those timestamps as average days per stage. You do not need a data team or a custom BI tool; a CSV export and a spreadsheet calculation are enough to start on your most important reqs.
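
The spreadsheet step above can be sketched in a few lines of Python. This is a minimal illustration, not any particular ATS's export format: the column names (candidate_id, stage, entered_at) and the inline sample are hypothetical, and a real export will use its own schema.

```python
import csv
import io
from collections import defaultdict
from datetime import datetime

def average_days_per_stage(rows):
    """Average calendar days per stage from ordered stage-entry events."""
    # Group each candidate's stage entries so they can be sorted in time.
    by_candidate = defaultdict(list)
    for r in rows:
        by_candidate[r["candidate_id"]].append(
            (datetime.fromisoformat(r["entered_at"]), r["stage"])
        )
    durations = defaultdict(list)
    for events in by_candidate.values():
        events.sort()
        # Time in a stage = gap between entering it and entering the next one.
        for (t0, stage), (t1, _) in zip(events, events[1:]):
            durations[stage].append((t1 - t0).days)
    return {s: sum(d) / len(d) for s, d in durations.items()}

# Tiny inline CSV standing in for a real ATS export (hypothetical data).
sample = """candidate_id,stage,entered_at
c1,recruiter_screen,2026-01-05
c1,hm_review,2026-01-07
c1,offer,2026-01-19
c2,recruiter_screen,2026-01-06
c2,hm_review,2026-01-10
"""
rows = list(csv.DictReader(io.StringIO(sample)))
print(average_days_per_stage(rows))
# → {'recruiter_screen': 3.0, 'hm_review': 12.0}
```

The same logic works in a spreadsheet: sort by candidate and timestamp, subtract adjacent dates, and average per stage.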

Teams that track stage time stop having the wrong conversation. Instead of asking why time-to-fill is 45 days, you get a specific answer: hiring manager review is averaging 12 days, which is where most of the delay lives. That is a conversation with a named owner, not a team-wide complaint.

Illustration: time in stage reporting showing a hiring pipeline with stage lane cards each carrying an elapsed-time indicator, one bottleneck stage highlighted in amber with an alert flag, and a digest card on the right surfacing the delay for TA action

In practice

  • A TA leader's time-to-fill report shows 38 days average but hides the fact that candidates are spending 11 of those days parked in "hiring manager review." Stage-level data surfaces this in seconds; a single time-to-fill number does not.
  • A sourcer at a 200-person scale-up reviews their ATS pipeline velocity report every Monday. Any candidate stuck in the same stage for more than three business days triggers a short Slack message to the recruiter with the candidate name and stage. Within a month, silent drops fall by roughly half.
  • Vendors and ATS providers use different names for the same idea: "stage aging," "pipeline velocity," "time in funnel." The underlying metric is always the same: average days per decision bucket, not total days across the whole process.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA leads, and HR business partners who need the same vocabulary in pipeline reviews, vendor conversations, and hiring manager syncs. Skim the first section for a shared picture. Use the second when you are setting up alerts, pulling reports, or building a stage dashboard.

Plain-language summary

  • What it means for you: Instead of one big "days to hire" number, you get a row for each stage showing how long candidates are sitting in each bucket. That tells you exactly where to push.
  • How you would use it: Pull a stage duration report from your ATS once a week. Flag any stage averaging over your target, for example two business days for recruiter screen or five for hiring manager review. Talk to the owner of that stage.
  • How to get started: Check whether your ATS has a "pipeline velocity" or "stage duration" report. Run it on your three most active reqs. If the report does not exist, export the candidate log as a CSV and calculate the timestamp differences manually.
  • When it is a good time: When you have a time-to-fill problem but cannot name where the process breaks. Stage data converts a vague complaint into a specific conversation.

For teams running live reqs and tooling

  • What it means for you: Stage data is only as reliable as your ATS hygiene. If recruiters move candidates in batches on Fridays or delay logging rejections by a week, the averages become noise before they become signal. Audit stage movement frequency before you trust the numbers.
  • When it is a good time: A weekly cadence is enough for most teams. Daily stage reports are only useful when you have the ops capacity to act on them immediately.
  • How to use it: Connect ATS stage timestamps to a shared dashboard or a weekly AI digest prompt. Set a simple alert rule: if any candidate has been in the same stage for more than a defined threshold, surface a flag with the candidate name and the stage owner. Cross-link with sourcing funnel metrics to see whether slow stages correlate with specific sources or role types.
  • How to get started: Map your ATS stages to the decision points that actually matter; not every status update is a meaningful stage. Run a 90-day lookback on three recently closed reqs and calculate average days per decision stage manually. That baseline tells you what normal looks like before you wire an alert.
  • What to watch for: Candidates parked in "screen scheduled" because the recruiter forgot to advance them after the call, candidate withdrawals inflating average stage time for completed pipelines, and weekend or holiday effects that distort business-day calculations if your ATS logs calendar days.
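The 90-day baseline step, together with the withdrawal pitfall above, can be sketched as a small aggregation over closed-req stage records. A minimal sketch under assumed field names (stage, entered, exited, outcome); your ATS export will differ.

```python
from datetime import date

def stage_baseline(transitions, exclude_outcomes=("withdrawn",)):
    """Average days per stage, skipping candidates whose withdrawal
    would inflate the averages (one of the pitfalls above)."""
    totals, counts = {}, {}
    for t in transitions:
        if t["outcome"] in exclude_outcomes:
            continue
        stage = t["stage"]
        days = (t["exited"] - t["entered"]).days
        totals[stage] = totals.get(stage, 0) + days
        counts[stage] = counts.get(stage, 0) + 1
    return {s: totals[s] / counts[s] for s in totals}

transitions = [  # hypothetical records from recently closed reqs
    {"stage": "hm_review", "entered": date(2026, 2, 2),
     "exited": date(2026, 2, 10), "outcome": "advanced"},
    {"stage": "hm_review", "entered": date(2026, 2, 3),
     "exited": date(2026, 2, 25), "outcome": "withdrawn"},
    {"stage": "recruiter_screen", "entered": date(2026, 2, 1),
     "exited": date(2026, 2, 3), "outcome": "advanced"},
]
print(stage_baseline(transitions))
# → {'hm_review': 8.0, 'recruiter_screen': 2.0}
```

Run this over three closed reqs and the resulting per-stage averages become the thresholds for any alert rule you wire afterwards.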

Where we talk about this

On AI with Michal live sessions, time in stage analysis comes up in both the AI in recruiting and sourcing automation tracks. Sourcing automation sessions show how to wire ATS exports into a weekly digest prompt; AI in recruiting sessions connect stage data to hiring manager communication cadence and candidate experience. If you want the full room discussion on how to set stage-level SLAs that hiring managers will actually respect, start at Workshops and bring your current ATS reporting setup.


Time-to-fill versus time in stage

Metric            What it measures                        What it misses
Time-to-fill      Days from open req to accepted offer    Where the delay actually sits
Time in stage     Days per pipeline decision point        How stages interact and compound
Source of hire    Where accepted hires originated         How long each source takes per stage


Frequently asked questions

What is time in stage reporting and why does it matter?
Time in stage reporting measures the average number of calendar or business days a candidate spends in each hiring pipeline stage before moving forward, being declined, or withdrawing. Unlike time-to-fill, which collapses the entire process into one number, stage-level data tells you exactly where the bottleneck lives: whether it is the recruiter screen queue, the hiring manager's calendar, or the debrief meeting that never gets scheduled. Teams running five or more open reqs simultaneously use it to prioritise firefighting before a candidate accepts a competing offer. Most modern ATS platforms expose stage timestamps in their reporting module or API.
How do TA teams pull time in stage data without a custom BI tool?
Most ATS platforms store a stage-entered and stage-exited timestamp for every candidate movement. If your ATS has a native reporting module, look for pipeline velocity, stage duration, or funnel analysis reports. When the built-in reports are too coarse, export the candidate stage log as a CSV and calculate timestamp differences in a spreadsheet. For recurring analysis, a lightweight workflow automation tool can pull the data on a schedule and write it to a shared dashboard. AI with Michal workshops cover how to wire these exports into readable weekly hiring funnel reports using tools already in your stack, without a dedicated BI team.
Which stages should be tracked and what benchmarks are realistic?
Track every stage where a human makes a meaningful decision: sourced, applied, recruiter screen, hiring manager review, first interview, technical or assessment step, final panel, and offer extended. Skip purely administrative status flags that do not reflect a real waiting period. Recruiter screen queues running longer than two business days tend to lose candidates in competitive markets; debrief-to-offer gaps over three days reduce acceptance rates. Industry benchmarks from LinkedIn Talent Insights or iCIMS can anchor your numbers, but start by comparing within your own reqs across role types. Your own 90-day baseline is more actionable than generic market data.
How does AI help analyse time in stage data?
AI can flag stages running significantly above their rolling average, identify which requisitions or recruiters are consistent bottlenecks, and draft a weekly digest summarising where action is needed. You still need clean ATS stage timestamps as the input; AI does not fix sparse or inconsistent movement logs. Where AI adds clear value is aggregating across dozens of open reqs and surfacing the two or three patterns worth human attention, rather than asking a TA leader to scan forty rows of a spreadsheet. Pair AI analysis with the weekly hiring funnel report cadence to turn a raw export into a decision brief rather than a status update.
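The flagging logic in this answer does not require AI tooling to prototype: a plain threshold over a trailing average catches the same outliers. A minimal illustration with hypothetical weekly data; the 1.5x factor is an arbitrary starting point, not a benchmark.

```python
def flag_outlier_stages(history, factor=1.5):
    """Flag stages whose latest weekly average runs well above the
    trailing mean of the earlier weeks."""
    flagged = []
    for stage, weekly in history.items():
        *earlier, latest = weekly
        baseline = sum(earlier) / len(earlier)
        if latest > factor * baseline:
            flagged.append((stage, latest, round(baseline, 1)))
    return flagged

history = {  # average days in stage, one value per week (made-up data)
    "recruiter_screen": [2.1, 1.8, 2.0, 2.2],
    "hm_review": [4.0, 4.5, 5.0, 9.5],
}
print(flag_outlier_stages(history))
# → [('hm_review', 9.5, 4.5)]
```

What AI adds on top of a rule like this is aggregation across dozens of reqs and a readable digest, not the arithmetic itself.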
What GDPR and privacy obligations apply to stage timing data?
Stage timing data is operational HR data and carries the same GDPR obligations as other candidate records. Processing timestamps to manage your hiring pipeline is generally covered under Article 6(1)(b) or legitimate interest, but map it in your Record of Processing Activities alongside other ATS fields. Aggregated, anonymised stage metrics carry minimal individual risk. If you feed stage data into an AI tool outside your existing data processing agreement, check your DPA before connecting the export. Delete individual-level stage records when you delete the candidate's application under your standard retention policy, regardless of whether the timing data ever left your ATS.
How do recruiters and hiring managers use stage data differently?
Recruiters use stage data to manage their own queue: which reqs have candidates sitting idle, which hiring manager has not returned feedback in four days, and which stage is consuming the most calendar time across open roles. Hiring managers rarely want granular pipeline tables; they respond better to a one-line note saying three candidates are waiting on their debrief feedback from last Tuesday. TA leaders use aggregated stage data across teams to set service-level targets and catch systemic delays before they affect sourcing funnel metrics or candidate experience. The most effective use is an alert wired to ping the stage owner when a threshold is crossed, not a weekly report.
Where can I build a time in stage reporting workflow with peers?
Time in stage analysis comes up in the AI in recruiting and sourcing automation tracks at AI with Michal workshops, where participants build ATS export pipelines and wire them into dashboards or weekly digest prompts. The talent acquisition metrics term covers the broader TA measurement landscape, and weekly hiring funnel report shows how to turn raw stage counts into a recruiter-readable summary. For hands-on practice with real ATS constraints, membership office hours are the right venue. The Starting with AI: the foundations in recruiting course covers measurement cadences before you wire automations that depend on clean stage data.
