AI with Michal

Time to fill

The elapsed calendar time from an approved requisition opening until a candidate accepts an offer for that role, used to diagnose funnel speed, capacity, and hiring manager responsiveness.

Michal Juhas · Last reviewed May 3, 2026

What is time to fill?

Time to fill counts how long an open role stays open from the moment leadership approves hiring until someone accepts the offer. Teams use it to spot bottlenecks in approvals, interviews, and paperwork, not only sourcing speed.

Illustration: a requisition card flowing through abstract hiring stages to an accepted offer, with a clock suggesting elapsed days without readable numbers

In practice

  • TA reviews "median time to fill by department" slides every Monday even when recruiters feel the real story is interview load, not sourcing.
  • Hiring managers say "your time to fill is too high" when they mean offers sat in comp approval, not when the first screen was slow.
  • Vendors promise "reduce time to fill with AI"; practitioners ask which stage timestamp changed after the pilot.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: It is the calendar stopwatch from "req approved" to "offer accepted" for one role.
  • How you would use it: You compare sites, quarters, and role families using the same start and stop rules.
  • How to get started: Export last quarter with stage timestamps. Write your definitions on one page everyone signs.
  • When it is a good time: Before QBRs, after process changes, or when a new executive asks why hiring feels slow.
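The stopwatch rule above is simple enough to sanity-check in a few lines. This is a minimal sketch with hypothetical field values, assuming your ATS export carries ISO-formatted stage timestamps; your column names will differ.

```python
from datetime import datetime

def time_to_fill_days(req_approved: str, offer_accepted: str) -> int:
    """Calendar days from requisition approval to offer acceptance."""
    start = datetime.fromisoformat(req_approved)
    stop = datetime.fromisoformat(offer_accepted)
    return (stop - start).days

# One hypothetical req from a quarterly export.
print(time_to_fill_days("2026-01-12", "2026-03-02"))  # 49 calendar days
```

Running this over last quarter's export is a quick way to test whether everyone's one-page definition actually matches the timestamps in the system.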

When you are running live reqs and tools

  • What it means for you: Your ATS configuration decides what counts as open, paused, or filled. Garbage in, leadership panic out.
  • When it is a good time: When you change approval chains, add AI stages, or merge ATS instances after acquisition.
  • How to use it: Pair headline numbers with stage medians and a few anonymized stories so the team trusts the chart.
  • How to get started: Align with finance on whether internal transfers stop the clock. Document the answer beside the metric.
  • What to watch for: Averages hiding long tails, inconsistent tags, and vendors who redefine metrics each quarter.

Where we talk about this

AI in recruiting workshops use time to fill when discussing capacity models and realistic AI wins. Bring your stage dictionary to a workshop so the room can sanity-check it.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data.

YouTube

  • Search "time to fill vs time to hire" for TA ops explainers that walk through ATS timestamps and executive slides.
  • Search "recruiting funnel metrics dashboard" for examples of pairing speed with quality indicators.

Reddit

  • r/recruiting and r/humanresources regularly debate metric definitions when vendors publish benchmarks; read dates and sample sizes critically.

Quora

  • Search "time to fill recruiting definition" for mixed practitioner answers; verify any claim against your own ATS export.

Frequently asked questions

How is time to fill different from time to hire?
Time to fill usually starts when the req is approved and opened for sourcing, while time to hire often starts at first human touch or first interview. The gap between those clocks explains fights between TA and finance when dashboards disagree. Pick one definition per report, document it in the metric dictionary, and freeze stage names in your ATS so exports stay comparable quarter to quarter. When you add AI screening, log whether the clock pauses during vendor outages or keeps running, because models inherit whichever rule your data pipeline encodes. Review monthly with hiring managers so nobody optimizes the wrong endpoint.
What makes time to fill spike even when sourcing is healthy?
Slow approvals, interview panel availability, and offer bureaucracy inflate calendars even when top-of-funnel conversion looks fine. Recruiters see this as "we had great candidates but calendar Tetris killed us." Instrument each stage with median hours and 90th percentile waits, not only averages, because a few extreme delays dominate executive perception. Pair metrics with qualitative tags ("hiring manager travel", "comp revision loop") so retrospectives name owners, not only tools. If workflow automation nudges exist, verify they fire on business days in each region so you do not accidentally train teams to ignore alerts.
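Median and 90th percentile waits per stage can come straight from the standard library; no dashboard tool required to start. A sketch using made-up wait times for one hypothetical stage.

```python
import statistics

# Hypothetical per-candidate waits (hours) in one stage, from an ATS export.
offer_approval_waits = [6, 8, 9, 10, 12, 14, 15, 18, 40, 120]

median_h = statistics.median(offer_approval_waits)
# quantiles(n=10) returns the 10th..90th percentiles; index 8 is the p90.
p90_h = statistics.quantiles(offer_approval_waits, n=10)[8]

print(f"median {median_h}h, p90 {p90_h}h")  # median 13.0h, p90 112.0h
```

Note how two outliers drag the p90 to roughly nine times the median: that long tail is what executives remember, and it is invisible if you report only an average.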
How should AI-assisted steps appear in the timeline?
Treat model scoring like any other stage: timestamp when the review queue received work and when a human released it. If you skip that, AI looks instant while humans look slow, which wrecks trust. Log model version IDs beside decisions so drift investigations have anchors. When candidates appeal, you need a defensible story about when automation ran versus when a person acted. Cross-link logging guidance from structured output if you export machine rationales. Publish internal SLAs that separate service-level goals for automation from human review so nobody hides behind vague "AI did it" language.
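One way to keep the "timestamp in, timestamp out" discipline honest is a single audit record per AI-touched candidate. This is a sketch under assumed names (`log_ai_stage`, the model version string, and the field layout are all hypothetical), not a prescribed schema.

```python
from datetime import datetime, timezone

def log_ai_stage(candidate_id: str, model_version: str,
                 queued_at: datetime, human_released_at: datetime) -> dict:
    """One audit record: when scoring work was queued vs. when a human released it."""
    wait_hours = (human_released_at - queued_at).total_seconds() / 3600
    return {
        "candidate_id": candidate_id,
        "model_version": model_version,          # anchor for drift investigations
        "queued_at": queued_at.isoformat(),
        "human_released_at": human_released_at.isoformat(),
        "human_wait_hours": round(wait_hours, 1),
    }

rec = log_ai_stage(
    "cand-042", "screen-model-v3.1",
    datetime(2026, 4, 1, 9, 0, tzinfo=timezone.utc),
    datetime(2026, 4, 2, 15, 30, tzinfo=timezone.utc),
)
print(rec["human_wait_hours"])  # 30.5
```

Separating the queue timestamp from the human release timestamp is what stops the model from looking instant while the reviewer absorbs all the apparent delay.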
Which cohort definitions matter for global teams?
Split metrics by region, role family, and employment type before you compare sites, because holiday calendars and contractor rules differ. Normalizing everything to UTC without labeling local business days misleads APAC leaders every quarter. If you run evergreen reqs, decide whether partial hires reset clocks or roll into a separate stream so product and TA ops align. Document how transfers and internal mobility count, because they can either deflate or inflate time to fill depending on ATS configuration. Finance usually wants a mapping table they can audit without opening your ATS admin console.
What is a sensible executive narrative when time to fill rises?
Lead with demand shifts and approval latency, not recruiter headcount alone. Show stage-level deltas versus last quarter, highlight new reqs opened without matched capacity, and name two remediation bets with owners and dates. If comp bands lagged market, say so plainly instead of hiding behind generic "market conditions" language. Pair the story with quality metrics from your scorecard culture so leaders do not assume speed fixes always help candidate experience. Close with what you will stop doing to fund the fix, because executives respect trade-offs more than laundry lists.
Where can we rehearse these metrics with peers?
Bring anonymized stage data to an AI in recruiting workshop and compare notes on how others tag approvals. Use Talent acquisition metrics as the companion glossary when you build the leadership slide. The foundations course (Starting with AI: the foundations in recruiting) helps recruiters speak credibly about measurement limits before tense QBRs. Office hours through membership are useful when your ATS vendor suggests a schema change that would break historical comparisons.

← Back to AI glossary in practice