AI with Michal

Requisition funnel reporting

Per-requisition breakdown of how candidates move from first source or application through each hiring stage to offer, showing stage-by-stage conversion rates so TA leads can pinpoint where individual reqs are stalling.

Michal Juhas · Last reviewed May 8, 2026

What is requisition funnel reporting?

Requisition funnel reporting is the practice of tracking stage-by-stage candidate conversion rates and volumes for each individual open role. It shifts attention from blended pipeline health metrics to the specific conversion story inside one req: how many candidates entered at each stage, how many advanced, and where the flow stalled.

Unlike pipeline coverage reporting, which asks whether a req has enough active candidates to close by the target date, req funnel reporting explains why candidates stopped progressing in the first place. A req with consistent drop-off at the hiring manager review stage tells a different story than one where candidates stall between offer and acceptance.

Illustration: requisition funnel reporting showing multiple open reqs each with a stage-by-stage conversion strip, an anomaly flag on one stalled req, and a TA lead digest card routing alert-level reqs to action

In practice

  • A TA lead reviewing the weekly req report sees that one engineering role has a 22% screened-to-HM-review conversion versus a 58% baseline for the same role family, which tells her the hiring manager is declining most submissions and an intake recalibration is overdue.
  • A recruiter pulls a one-page req funnel summary before a hiring manager sync to show exactly where the last 14 candidates dropped out, replacing a vague "the pipeline is slow" conversation with a specific ask.
  • An automated Friday digest uses ATS stage-count exports processed through a structured output prompt to generate per-req conversion tables, flagging any req where a stage conversion is more than 20 points below the role-family baseline.
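The digest's flagging rule described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the baseline values, field names, and sample req are all invented for the example, and the 20-point threshold comes from the digest description.

```python
# Sketch of the Friday digest's flagging rule: a req stage is flagged when
# its conversion trails the role-family baseline by more than 20 percentage
# points. Baselines, field names, and the sample req are illustrative.

BASELINES = {  # role family -> stage -> expected conversion (as a fraction)
    "engineering": {"screened_to_hm_review": 0.58},
}

def flag_stalled_stages(req, threshold_points=20):
    """Return (stage, actual, baseline) for each stage breaching the threshold."""
    baseline = BASELINES.get(req["role_family"], {})
    flags = []
    for stage, actual in req["conversions"].items():
        expected = baseline.get(stage)
        if expected is not None and (expected - actual) * 100 > threshold_points:
            flags.append((stage, actual, expected))
    return flags

req = {"id": "ENG-104", "role_family": "engineering",
       "conversions": {"screened_to_hm_review": 0.22}}
print(flag_stalled_stages(req))  # the 22% vs 58% case above gets flagged
```

In practice the `BASELINES` table would be computed from your own historical hires per role family rather than hard-coded.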

Quick read, then how hiring teams use it

This is for recruiters, TA ops leads, and HR partners who need the same vocabulary in standup reviews, vendor evaluations, and hiring manager calibrations. Skim the first section when you need shared language fast. Use the second when configuring ATS reports or automating weekly digests.

Plain-language summary

  • What it means for you: Req funnel reporting tells you the conversion story inside each individual open role, not just whether the overall pipeline looks busy. It shows where candidates are stopping and helps you pinpoint the fix.
  • How you would use it: Pull stage counts from your ATS weekly, calculate the conversion rate at each stage for each active req, and compare to your historical baseline for that role type.
  • How to get started: Start with your two hardest-to-fill open reqs. Export stage counts, map them to a simple six-row table, and share it at your next hiring manager sync. Refine the stage definitions over the next three hires.
  • When it is a good time: Any time a req is aging beyond your typical time-to-fill baseline and you need a data-backed explanation for why.
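The weekly calculation in the summary above is small enough to sketch directly: take ordered stage counts for one req and compute the conversion rate into each following stage. Stage names and volumes here are illustrative.

```python
# A minimal sketch of the weekly calculation: per-stage conversion from raw
# ATS stage counts for a single req. Stage names and volumes are illustrative.

def stage_conversions(counts):
    """counts: ordered (stage, volume) pairs; returns conversion to the next stage."""
    rows = []
    for (stage, vol), (_, next_vol) in zip(counts, counts[1:]):
        rate = next_vol / vol if vol else 0.0
        rows.append((stage, vol, round(rate, 2)))
    return rows

counts = [("applied", 120), ("phone_screen", 40), ("first_interview", 18),
          ("final_round", 8), ("offer_extended", 3), ("offer_accepted", 2)]
for stage, vol, rate in stage_conversions(counts):
    print(f"{stage}: {vol} entered, {rate:.0%} advanced")
```

Comparing each row against your historical baseline for the role type is then a simple lookup, as in the flagging example earlier on this page.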

When you are running live reqs and tools

  • What it means for you: At scale, req funnel reports replace gut-feel standup updates with a structured per-req conversion view. When integrated with ATS API exports and an automation layer, stall patterns surface before they become urgent.
  • When it is a good time: Weekly for all active reqs. Daily for roles with hard deadlines or conversion rates more than 15 points below your role-family baseline. Pair with time-in-stage reporting to see both where volume is dropping and how long candidates sit at each stage before a decision.
  • How to use it: Pull ATS stage counts, apply your conversion baselines by role family, and route amber and red reqs to a Slack alert or TA ops digest. Use structured output from an LLM to parse messy stage-count exports into clean Markdown tables if your ATS does not expose clean API data.
  • How to get started: Build the simplest version first: one spreadsheet with stage counts per req, one column with conversion rate, one column flagging anything below threshold. Add automation once the stage definitions are stable and hiring managers understand what the numbers mean.
  • What to watch for: Inconsistent ATS stage labels across recruiters making the funnel meaningless; AI summaries that invent pattern narratives from sparse data; and sharing candidate-level funnel data with stakeholders without checking your GDPR disclosure scope.
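The amber/red routing described above reduces to one function: status depends on how far a stage conversion sits below the role-family baseline. The 15- and 25-point cutoffs below are illustrative; tune them to your own data before wiring any alerts.

```python
# Sketch of amber/red routing: status depends on the gap between a stage
# conversion and the role-family baseline. Thresholds are illustrative.

def rag_status(actual, baseline, amber_points=15, red_points=25):
    gap_points = (baseline - actual) * 100
    if gap_points > red_points:
        return "red"
    if gap_points > amber_points:
        return "amber"
    return "green"

print(rag_status(0.22, 0.58))  # 36 points under baseline
print(rag_status(0.40, 0.58))  # 18 points under baseline
print(rag_status(0.55, 0.58))  # 3 points under baseline
```

Reqs returning "amber" or "red" are the ones to route into the Slack alert or TA ops digest; "green" reqs only need the weekly one-line status.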

Where we talk about this

On AI with Michal live sessions, requisition funnel reporting comes up in both the AI in recruiting and sourcing automation tracks, because stall patterns at different funnel stages require different interventions: sourcing at the top, process or calibration in the middle, comp at the offer stage. If you want to build a live req funnel report from a real ATS export and hear which thresholds teams actually act on, start at Workshops and bring your most challenging open req with its stage history.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data.


Req funnel report vs sourcing funnel metrics

| | Requisition funnel reporting | Sourcing funnel metrics |
| --- | --- | --- |
| Unit of analysis | Per open requisition | Per sourcing motion or channel |
| Key question | Where are candidates stalling inside this req? | Is outreach landing and converting? |
| Who acts on it | Recruiter, TA lead, hiring manager | Sourcer, TA ops |
| Primary data source | ATS stage counts per req | Outreach tool, ATS source field |
| Direction | Retrospective (why did conversion drop?) | Forward-looking (is the input working?) |

Frequently asked questions

What is requisition funnel reporting and how does it differ from pipeline coverage reporting?
Requisition funnel reporting shows the conversion story inside one open role: how many candidates entered each stage, how many advanced, and where volume dropped. It answers "is this specific req converting well?" rather than "does this req have enough candidates to close on time?" Pipeline coverage reporting asks whether each req has enough active candidates to expect a hire; req funnel reporting explains why candidates stopped advancing when they did. A low screened-to-HM-review rate points to a sourcing quality or alignment issue. A low offer acceptance rate points to a compensation gap or a candidate experience problem. Both views are needed: coverage tells you the forward-looking risk; funnel tells you the historical pattern causing it.
What stages should every req funnel report include?
At minimum: applied or sourced, phone screen, first interview, final round, offer extended, and offer accepted. Stage labels should mirror your ATS exactly so the report is reproducible without manual reclassification. Some teams add a hiring manager review stage between sourcing and first interview depending on their workflow. Each stage needs two numbers: volume entering and conversion rate to the next stage. Track where volume is lost, not just where it lands. Teams that only report "submitted to hiring manager" miss the most common leak: profiles that enter the ATS but never reach a phone screen because no one owns the initial disposition. Start simple and add stages only when you can maintain consistent tagging across all recruiters.
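The reproducibility point above (stage labels must mirror the ATS exactly) lends itself to a simple guard: check an export's labels against the minimum stage set before building the report. The label strings and the sample export below are illustrative; yours should be whatever your ATS actually emits.

```python
# Sketch: verify an ATS export carries every stage in the minimum set, so
# the funnel is reproducible without manual reclassification. Label strings
# and the sample export are illustrative.

MIN_STAGES = ["applied_or_sourced", "phone_screen", "first_interview",
              "final_round", "offer_extended", "offer_accepted"]

def missing_stages(ats_labels):
    """Stages from the minimum set absent from the export's labels."""
    present = set(ats_labels)
    return [s for s in MIN_STAGES if s not in present]

print(missing_stages(["applied_or_sourced", "phone_screen",
                      "final_round", "offer_extended"]))
```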
How does AI help with requisition funnel reporting?
AI can generate a plain-language summary of each req funnel from raw ATS stage count exports, flag reqs with abnormal conversion rates for their role family, and draft a weekly digest for hiring managers in minutes instead of hours. More sophisticated setups pipe ATS webhook data into a structured output node that formats stage counts into a Markdown table the team posts to Slack or a shared doc. The key limit is data quality: hallucination risk climbs when AI summarizes sparsely tagged ATS data and invents pattern narratives from noise. Validate any AI-generated funnel summary against the raw counts before sharing with a hiring manager. AI speeds the reporting cycle; it does not fix broken stage tagging upstream.
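The formatting half of the pipeline above does not actually need an LLM. As a hedge against hallucination, a deterministic renderer can produce the Markdown table the team posts, leaving AI only the plain-language summary layer. Req ID and rows in this sketch are illustrative.

```python
# Sketch: a deterministic stand-in for the structured output step, rendering
# per-req stage counts as the Markdown table a digest posts to Slack or a
# shared doc. Req ID and rows are illustrative.

def to_markdown(req_id, rows):
    """rows: (stage, entered, advanced) tuples for one req."""
    lines = [f"**{req_id}**",
             "| Stage | Entered | Advanced | Conversion |",
             "| --- | --- | --- | --- |"]
    for stage, entered, advanced in rows:
        rate = f"{advanced / entered:.0%}" if entered else "n/a"
        lines.append(f"| {stage} | {entered} | {advanced} | {rate} |")
    return "\n".join(lines)

print(to_markdown("ENG-104", [("phone_screen", 40, 18),
                              ("first_interview", 18, 8)]))
```

Because the numbers never pass through a model, the table is always consistent with the raw counts; validation effort then concentrates on the AI-written narrative alone.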
What causes conversion drop-offs that req funnel reports surface first?
Three patterns show up most consistently. Sourced-to-screened drop-offs usually signal ICP or targeting problems: candidates being sourced do not match the role criteria well enough for a screen to make sense. Screened-to-HM-review stalls often mean the recruiter and hiring manager are misaligned on what qualified looks like after intake. HM-review-to-final-round delays almost always trace to hiring manager availability or indecision, not candidate quality. Offer acceptance drops point to a compensation gap, role clarity issues, or a competing offer nobody saw coming. Each pattern requires a different fix. Req funnel reports make the pattern visible; the recruiter still has to diagnose the root cause, often through a structured debrief. See scorecard for the calibration tool that prevents HM misalignment from the start.
How do you use req funnel data in a hiring manager briefing?
Show two numbers, not fifteen: the stage where volume is dropping and the conversion rate at that stage versus your baseline for this role type. "We submitted eight candidates two weeks ago; three advanced; your advance rate is 38% versus a 60% baseline for similar roles" is more actionable than a spreadsheet. Pair the observation with a clear ask: "Do the remaining five need a decision, or do criteria need to change?" Hiring managers respond to specifics tied to their open role, not portfolio averages. Use a weekly one-line status per req rather than a monthly deep-dive; small signals caught early are easier to fix than accumulated backlogs. See talent acquisition metrics for the broader KPI picture around these conversations.
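The weekly one-line status suggested above is mechanical enough to template: two numbers plus a clear ask. All figures in this sketch are illustrative.

```python
# Sketch: the weekly one-line req status -- two numbers and a clear ask.
# All figures are illustrative.

def one_liner(req_id, submitted, advanced, baseline):
    rate = advanced / submitted
    return (f"{req_id}: {submitted} submitted, {advanced} advanced "
            f"({rate:.0%} vs {baseline:.0%} baseline). "
            f"Decision needed on the remaining {submitted - advanced}, "
            "or do criteria need to change?")

print(one_liner("ENG-104", 8, 3, 0.60))
```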
What GDPR and data quality limits should TA ops teams know?
Req funnel reports aggregate candidate progression data, which is personal data under GDPR. Ensure any export, dashboard, or AI-generated summary is restricted to roles with a current legitimate interest basis and that retention limits match your data processing agreement. Funnel data shared across business units or geographies may cross data transfer restrictions. On the data quality side, inconsistent ATS stage naming is the most common accuracy problem: if three recruiters label the first screen differently, funnel numbers are meaningless. Run a monthly stage-label audit and standardize definitions in an SOP before building any automated report. Candidate-level funnel data shared with hiring managers should be anonymized unless your disclosure covers it. See GDPR first touch outreach for related data handling patterns.
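One common approach to the anonymization step above is salted hashing of candidate identifiers before funnel rows leave the TA team. Note the legal caveat: salted hashing is pseudonymization, not full anonymization under GDPR, so it reduces exposure rather than removing the data from scope; the salt value here is illustrative and should be rotated and stored securely.

```python
# Sketch: pseudonymize candidate identifiers before sharing candidate-level
# funnel rows with hiring managers. Salted hashing is pseudonymization, NOT
# full anonymization under GDPR; the salt is illustrative.

import hashlib

def pseudonymize(candidate_id, salt="rotate-and-store-securely"):
    digest = hashlib.sha256(f"{salt}:{candidate_id}".encode()).hexdigest()
    return f"cand-{digest[:10]}"

print(pseudonymize("jane.doe@example.com"))
```

The same input always maps to the same token within one salt, so stage-by-stage progression stays traceable in the report without exposing names or emails.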
Where can TA teams learn to build req funnel reports with real ATS data?
Join an AI in recruiting workshop where TA teams build req funnel reports from real ATS exports, map stage labels to consistent definitions, and debate which conversion thresholds actually trigger action versus which get ignored. The Starting with AI: the foundations in recruiting course covers data visibility and structured reporting alongside prompt governance so teams understand how to wire AI to funnel alerts responsibly. Bring an ATS export with stage counts, source fields, and candidate entry dates for your most challenging open reqs; the group surfaces labeling gaps you would not find in slides alone. After the session, assign one TA ops owner to maintain stage definitions so funnel numbers mean the same thing when the TA director, HRBP, and hiring manager read the same weekly digest.

← Back to AI glossary in practice