AI with Michal

Async screening

Collecting structured answers from candidates on their own time (forms, voice prompts, short video tasks, or chat-style bots) before a live recruiter call, so high-volume funnels stay fair and schedulable.

Michal Juhas · Last reviewed May 2, 2026

What is async screening?

Async screening lets candidates finish steps on their own time, like short video answers or tasks in a form, before live interviews. Teams use it to handle more volume while humans still review finalists.

Illustration: Async candidate self-serve steps feeding reviewed results before live interviews

In practice

  • A candidate records two-minute video answers on their phone before anyone books a live call. Invites from HireVue-style tools or newer vendors often say "async interview" or "one-way video."
  • High-volume hourly hiring uses short availability quizzes before a recruiter phones ten people. Dispatchers still say "complete this step async" even when there is no brand name on the form.
  • Recruiters tell each other "we moved first screens async" when calendars were the bottleneck last season and the team needed breathing room in the diary.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: The candidate records answers on their own time instead of booking a live phone call first. Your team watches later, often in a batch.
  • How you would use it: You reserve it for early funnel volume where the same questions matter for every applicant, not for final culture fit.
  • How to get started: Write the three questions you ask on every first screen. Record yourself answering them so tone matches what you expect from candidates.
  • When it is a good time: When calendars are the bottleneck and you still need consistent signal before humans invest live time.

When you are running live reqs and tools

  • What it means for you: Async is a trade-off: scale and consistency versus candidate experience and disability access. Legal and brand teams care about instructions, retention, and opt-out paths.
  • When it is a good time: When hiring managers ask for "more signal" but refuse calendar slots.
  • How to use it: Pair with structured rubrics (scorecard culture), cap length, and audit a sample weekly for bias tells; see the sampling sketch after this list.
  • How to get started: Pilot one role family, compare completion rates to live screens, and publish plain-language expectations to applicants.
  • What to watch for: Ghosting after one-way video, over-trusting "AI scoring" without human review, and skipping accommodations workflows.
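
One way to run that weekly audit, as a minimal sketch in Python: the record shape (screen_id, role_family, ai_score) and the per-role quota are assumptions to adapt to your own ATS export, and the fixed weekly seed just keeps the sample reproducible for the audit trail.

    import random

    def weekly_audit_sample(screens, per_role=5, week_seed=202618):
        """Pick a reproducible per-role-family sample for blind human re-scoring."""
        rng = random.Random(week_seed)  # fixed seed per week = auditable sample
        by_role = {}
        for s in screens:
            by_role.setdefault(s["role_family"], []).append(s)
        sample = []
        for role, items in by_role.items():
            sample.extend(rng.sample(items, min(per_role, len(items))))
        return sample

    # Hypothetical export: 40 completed async screens from one role family.
    screens = [{"screen_id": i, "role_family": "warehouse", "ai_score": 3}
               for i in range(40)]
    for s in weekly_audit_sample(screens, per_role=3):
        print(s["screen_id"], s["ai_score"])  # humans re-score these blind

Comparing the blind human scores against the stored AI scores week over week is what surfaces drift before candidates complain.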

Where we talk about this

Live AI-in-recruiting sessions use async screening as a case study in candidate respect versus throughput. If your policy fights are real, not theoretical, bring them to Workshops.

Async versus live phone screen

Mode  | Strength                         | Risk
Async | Schedules breathe, more coverage | Drop-off if friction is high
Live  | Nuance and rapport               | Harder to scale

Frequently asked questions

Is async screening the same as an AI phone screen?
Async only means the candidate completes a step on their own time; the medium might be typed answers, multiple choice, short audio, or video. AI might summarize, route, or score structured text, but the policy questions are the same: who reviews, what happens on failure, and how you document consent. Live sessions stress separating "scheduling convenience" from "automated decisioning," because regulators and candidates care about the latter. Pick vendors and prompts after you can describe the human path when someone needs an accommodation or opts out mid-funnel.
When is it worth adding?
Add it when reply volume from campaigns or workflow automation would swamp calendars yet you still need consistent early signal. Publish SLAs (when a human reviews, max time in stage) so candidates are not guessing in a black box. Pilot on one role family with hiring manager buy-in on what you will actually read versus auto-archive. Measure completion rate, drop-off reasons, and downstream offer quality, not only recruiter hours saved. Track accommodation requests and resolution time separately so accessibility gaps show up in dashboards, not only in angry social posts.
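
Here is a rough sketch of those measurements, assuming a simple export with invite, submission, and review timestamps; the field names are hypothetical, not tied to any specific ATS.

    from datetime import datetime
    from statistics import median

    # Hypothetical stage events; None means the candidate never reached that step.
    events = [
        {"candidate": "c1",
         "invited_at": datetime(2026, 4, 1, 9, 0),
         "submitted_at": datetime(2026, 4, 1, 21, 30),
         "human_reviewed_at": datetime(2026, 4, 2, 10, 0)},
        {"candidate": "c2",
         "invited_at": datetime(2026, 4, 1, 9, 0),
         "submitted_at": None,            # drop-off: follow up for the reason
         "human_reviewed_at": None},
    ]

    completed = [e for e in events if e["submitted_at"] is not None]
    completion_rate = len(completed) / len(events)

    # Hours a finished submission waits for a human, i.e. the SLA you publish.
    review_waits = [
        (e["human_reviewed_at"] - e["submitted_at"]).total_seconds() / 3600
        for e in completed if e["human_reviewed_at"] is not None
    ]

    print(f"completion rate: {completion_rate:.0%}")
    print(f"median hours to human review: {median(review_waits):.1f}")
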
What should we avoid?
Opaque auto-rejects, long unpaid assignments unrelated to the role, and tasks that collect sensitive data without a clear use. Candidates should see how recordings or transcripts are stored and who can access them. Automated scoring without oversight invites fairness complaints and hallucination-style misreads of short answers. If your vendor cannot explain confidence and override paths, pause. Document accommodations paths (captions, extended timers, alternative formats) before you market efficiency. Run a legal review whenever you add biometric or video analysis, because consent copy that worked for text may not cover new modalities.
How does this relate to scorecards?
Feed the same scorecard traits into the async form so downstream interviewers see consistent signals instead of free-text vibes. When models summarize answers, use structured output so rationales map to rubric rows reviewers can audit. Misalignment between async questions and live interview probes trains candidates to game the funnel. Calibrate weekly with hiring managers until scoring anchors feel obvious, not political. Publish a bridge doc that maps each async prompt to the interview question it unlocks so panels stop asking redundant or contradictory probes.
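
A minimal sketch of what "structured output mapped to rubric rows" can mean in practice; the trait names, the 1-4 scale, and the hand-written JSON standing in for model output are all illustrative assumptions, not a vendor schema.

    import json
    from dataclasses import dataclass

    RUBRIC_TRAITS = {"communication", "availability", "role_motivation"}

    @dataclass
    class RubricRow:
        trait: str      # must match a scorecard trait, not free-text vibes
        score: int      # anchored 1-4, same scale your live interviews use
        rationale: str  # short quote or paraphrase a reviewer can audit

    def parse_summary(raw_json: str) -> list[RubricRow]:
        """Reject model output that drifts off the rubric before anyone reads it."""
        rows = [RubricRow(**r) for r in json.loads(raw_json)]
        for row in rows:
            if row.trait not in RUBRIC_TRAITS:
                raise ValueError(f"unknown trait: {row.trait}")
            if not 1 <= row.score <= 4:
                raise ValueError(f"score out of range: {row.score}")
        return rows

    # Hand-written example standing in for a model's JSON response.
    raw = ('[{"trait": "availability", "score": 3, '
           '"rationale": "Can start weekday evenings from June."}]')
    for row in parse_summary(raw):
        print(row.trait, row.score, row.rationale)
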
What limits should TA communicate?
Publish accessibility commitments (captions, keyboard-friendly flows, language availability), retention windows for recordings, and who can request human review instead of automation. Align with legal on sensitive questions and cross-border storage before you launch in new regions. TA should own a plain-language FAQ candidates can read in two minutes, not only terms of service links. Train recruiters on how to explain the step in phone screens so trust stays high. Add a named escalation alias candidates can reach when the bot errors, because silence after a bug damages brand faster than the bug itself.
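
One way to keep those commitments reviewable is a single policy config that TA and legal sign off on together; every value below is a made-up example, not a recommendation.

    # Hypothetical policy config for one async screening step.
    ASYNC_SCREEN_POLICY = {
        "recording_retention_days": 90,          # then hard-delete recordings
        "human_review_sla_hours": 48,            # the promise made in the invite
        "accommodations": ["captions", "extended_timers", "text_alternative"],
        "human_review_on_request": True,         # candidates can opt out of automation
        "escalation_alias": "screening-help@example.com",  # named, monitored inbox
        "regions_cleared_by_legal": ["US", "UK"],
    }
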
Where can we learn more?
Read AI candidate screening with your policy partners, tighten prompts with few-shot prompting, and rehearse funnel design in a workshop where peers poke holes in edge cases. Bring anonymized completion data so discussion stays grounded in your market, not generic blog advice. Before you change live traffic, capture baseline metrics (completion rate, time-to-human review, complaint themes) so you can show leadership whether the step helped candidates, not only recruiters. Pair that baseline with a thirty-day post-launch review that includes candidate verbatim feedback, not only funnel conversion charts.
