AI with Michal

AI drafting for candidate outreach

Using AI models to generate first drafts of sourcing messages, InMails, and follow-up sequences so recruiters spend time personalizing and reviewing rather than starting from a blank page.

Michal Juhas · Last reviewed May 5, 2026

What is AI drafting for candidate outreach?

AI drafting for candidate outreach means asking a model to generate the first version of a sourcing message so the recruiter edits rather than starts from a blank page. You supply the job brief, the candidate signal (a title, a company, a recent project), and your tone rules. The model returns a draft. You read it, sharpen the hook, cut the filler, check for invented details, then send.

The phrase shows up in TA team debriefs as "AI-assisted outreach," "AI-drafted InMails," or "model-generated first touch." The tool doing the drafting can be Claude, ChatGPT, or a purpose-built sourcing platform. What matters is not the vendor but the human review gate before the message leaves your queue.

Illustration: AI drafting node taking a job brief and candidate profile as inputs, generating a message draft that passes through a human review gate before the outreach message reaches the send channel

In practice

  • When a sourcer pastes a job brief and a LinkedIn title into a chat model and gets back a three-sentence InMail that needs only a name and one specific hook, that is AI drafting for outreach at its simplest.
  • Recruiting teams often call this "the first pass" internally; the operating norm is that no message leaves the queue without a human read, even if the draft took five seconds to generate.
  • A TA lead might say "we draft with AI now but reply rates went up because people actually edit instead of copy-pasting the same template" after switching from a shared template bank.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how AI drafting fits your stack, your ATS workflow, or your GDPR posture.

Plain-language summary

  • What it means for you: Instead of staring at a blank reply box for five minutes, you paste the role context, get a draft back, and spend your time making it sound like you wrote it.
  • How you would use it: Write one short prompt (role, candidate signal, tone note), read the output, edit the first sentence to add a real hook, remove filler, then send.
  • How to get started: Take your best-performing past outreach message. Use it as a few-shot example in your prompt. Tell the model what tone and length you want. Compare reply rates after 20 sends.
  • When it is a good time: After your prompts are reviewed and stable. Before then, drafting from scratch is often faster than correcting hallucinated role details.
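The "one short prompt" above can be sketched in code. This is a minimal illustration, not a tool from the site: the role, candidate signal, tone note, and example message are all hypothetical placeholders you would swap for your own.

```python
# Minimal sketch of a one-shot outreach prompt assembled from three inputs
# plus one few-shot example. All example values are hypothetical.

def build_outreach_prompt(role: str, candidate_signal: str, tone_note: str,
                          example_message: str) -> str:
    """Combine role context, one candidate signal, a tone note, and a
    best-performing past message (few-shot example) into a single prompt."""
    return (
        f"Role: {role}\n"
        f"Candidate signal: {candidate_signal}\n"
        f"Tone: {tone_note}\n\n"
        "Here is a past message that got replies:\n"
        f"---\n{example_message}\n---\n\n"
        "Draft a three-sentence first-touch message. "
        "Open with the candidate signal, no filler, end with one clear ask."
    )

prompt = build_outreach_prompt(
    role="Senior Backend Engineer, payments team",
    candidate_signal="maintains an open-source Kafka client",
    tone_note="direct, no preamble",
    example_message="Saw your talk on idempotent consumers -- we hit exactly "
                    "that problem at scale. Open to a 15-minute chat?",
)
print(prompt)
```

The point of the structure is that every draft starts from the same three slots, so the only thing that changes per candidate is the signal line you researched.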

When you are running live reqs and tools

  • What it means for you: At sourcing scale, even a two-minute edit saves hours across a week of sends. AI drafting is leverage on the edit step, not a replacement for it.
  • When it is a good time: When you have a stable role brief, a clear ideal candidate profile, and a send-gate habit in place. Not when the job description still changes every Monday.
  • How to use it: Use system instructions to lock voice and ban filler phrases. Add a few-shot example of your top-performing past message. Keep a human read before any message sends. Log model version and prompt variant so you can A/B reply rates over time.
  • How to get started: Run a one-week pilot on a single req. Draft 20 messages with AI, edit manually, send, and compare reply rate to your baseline. Tune the prompt structure, not the send volume.
  • What to watch for: Hallucinated role or company details, generic hooks that could go to anyone, tone drift toward corporate filler, and unreviewed batches going out under volume pressure.
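The "log model version and prompt variant" habit above can be as simple as one CSV row per send. A rough sketch, with illustrative field names (not a real ATS schema); the in-memory buffer stands in for wherever your team actually stores send logs.

```python
# Sketch of per-send logging so reply rates can later be compared by
# prompt variant and model version. Field names are illustrative.
import csv
import datetime
import io

def log_send(writer, candidate_id: str, model_version: str,
             prompt_variant: str, reviewed_by: str) -> None:
    """Append one row per outgoing message: who it went to, which model and
    prompt variant produced the draft, and who reviewed it before send."""
    writer.writerow({
        "sent_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "model_version": model_version,
        "prompt_variant": prompt_variant,
        "reviewed_by": reviewed_by,  # empty would mean no human gate -- a red flag
    })

buffer = io.StringIO()
fields = ["sent_at", "candidate_id", "model_version",
          "prompt_variant", "reviewed_by"]
writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()
log_send(writer, "cand-0042", "model-2026-04", "variant-b", "recruiter-ml")
print(buffer.getvalue())
```

With this in place, the one-week pilot comparison is a query over the log rather than a memory exercise.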

Where we talk about this

On AI with Michal live sessions the sourcing automation block walks AI drafting end to end: prompt structure, system instructions, send-gate design, and what reply-rate data actually shows across teams. The AI in recruiting block connects the same moves to hiring manager trust and GDPR first-touch rules. Both tracks run with real job briefs, not sandbox examples. Start at Workshops and bring your current outreach templates so the room can pressure-test whether AI drafting fits your workflow or whether your prompts need calibrating first.

Around the web (opinions and rabbit holes)

Third-party creators move fast on this topic. Treat these as starting points, not endorsements, and verify anything before you wire candidate data through an automation you found in a tutorial.


AI drafting versus template libraries

Approach | Personalization ceiling | Failure mode | Best use
Template library | Low (manual variable fill) | Staleness, wrong variable | High-volume commodity roles
AI draft, human edit | Medium to high | Hallucination if unedited | Most sourcing at scale
Fully manual | Highest | Inconsistency, slow | Executive or niche roles
AI draft, no edit | Unpredictable | Hallucination, AI slop, brand damage | Avoid

Frequently asked questions

What is AI drafting for candidate outreach?
AI drafting for candidate outreach means asking a model to generate the first version of a sourcing message, InMail, or follow-up sequence, so the recruiter edits rather than starts from scratch. You supply the role brief, the candidate signal (title, company, a recent project), and your tone guidelines. The model returns a draft. The recruiter personalizes a line or two, removes filler, checks for hallucinated role details, then sends. Done well, it compresses the blank-page problem from minutes to seconds and frees attention for the parts that require judgment. Done carelessly, it multiplies AI slop across your outreach queue and tanks reply rates. The human edit is the work, not the prompting.
How do I personalize AI-drafted messages without sounding generic?
Pull one specific signal from the profile before you prompt: a project the candidate shipped, a recent company shift, or a stack detail that matches the role. Include it as a short instruction in your prompt (for example: "mention their open-source library"). The model uses it; most template drafts skip it entirely. Read the output aloud before sending. If the first sentence could go to a hundred people, rewrite it. Keep system instructions that lock your company voice and ban filler phrases. A two-line human edit on a solid AI draft consistently beats a fully bespoke message written under time pressure without scaffolding.
What prompts or system instructions work best for outreach drafting?
Short, opinionated prompts outperform long vague ones. Include the role level, the candidate hook ("why this person, why now"), a character limit, and a tone note such as "direct, no preamble, end with one clear ask." Save that framing as a system instruction so every draft starts from the same baseline. Pair it with few-shot examples of your best-performing past messages so the model matches the rhythm that actually gets replies. Log which prompt variants earn responses and retire the ones that fall below your baseline reply rate. Prompts are not static: calibrate them monthly at the team level, not only per-send.
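"Retire the ones that fall below your baseline reply rate" is a small computation once sends are logged per variant. A sketch under the assumption that each send is recorded as a (variant, replied) pair; the data below is invented for illustration.

```python
# Sketch: compute reply rate per prompt variant and flag variants to retire.
from collections import defaultdict

def reply_rates(sends):
    """sends: list of (variant, replied) tuples -> {variant: reply_rate}."""
    totals = defaultdict(int)
    replies = defaultdict(int)
    for variant, replied in sends:
        totals[variant] += 1
        replies[variant] += int(replied)
    return {v: replies[v] / totals[v] for v in totals}

def variants_to_retire(sends, baseline: float):
    """Variants whose reply rate sits below the team baseline."""
    return sorted(v for v, rate in reply_rates(sends).items() if rate < baseline)

# Invented sample data: variant "a" got 1 reply in 4 sends, "b" got 2 in 4.
sends = [("a", True), ("a", False), ("a", False), ("a", False),
         ("b", True), ("b", True), ("b", False), ("b", False)]
print(reply_rates(sends))                       # a: 0.25, b: 0.5
print(variants_to_retire(sends, baseline=0.30))  # -> ['a']
```

Twenty sends per variant is a thin sample, which is why the text suggests monthly team-level calibration rather than per-send reactions.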
What are the risks of sending AI drafts without editing?
Hallucinations top the list: the model may invent a role detail, misread a profile, or write a hook about a project the candidate did not ship. Tone drift is the second failure: the draft sounds like every other InMail once the model defaults to polite filler. At scale, unedited drafts damage your employer brand faster than they fill pipelines. Add a send-gate habit where a human reads each message before it leaves the queue. If you are automating sequences, log which model version ran and keep a review step even when volumes are high. Speed without editing is not scale; it is noise.
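The send-gate habit described above can be enforced in code rather than by discipline alone. A minimal sketch, assuming a simple in-memory queue; the field names and reviewer ID are hypothetical, not a real tool's schema.

```python
# Sketch of a send gate: a draft cannot leave the queue until a named
# human reviewer is recorded on it. Names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    candidate_id: str
    body: str
    model_version: str
    reviewed_by: str = ""  # set only after a human reads and edits the draft

def send(draft: Draft, outbox: list) -> None:
    """Refuse to send any draft without a recorded human reviewer."""
    if not draft.reviewed_by:
        raise ValueError(f"blocked: draft for {draft.candidate_id} is unreviewed")
    outbox.append(draft)

outbox = []
d = Draft("cand-0042", "Saw your Kafka client...", "model-2026-04")
try:
    send(d, outbox)  # blocked: no reviewer recorded yet
except ValueError as err:
    print(err)
d.reviewed_by = "recruiter-ml"
send(d, outbox)      # passes the gate after review
print(len(outbox))   # -> 1
```

Under volume pressure this is exactly the batch that would otherwise slip out unreviewed, so making the gate a hard error rather than a checklist item is the point.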
How does AI drafting differ from a template library?
Templates are static and require the human to fill in variables manually. AI drafting is dynamic: the model responds to the specific profile and role signal you provide each time, so a well-prompted draft already contains a personalized hook rather than a placeholder. Templates do not learn from feedback; prompts can be refined as reply data accumulates. The risk of templates is staleness; the risk of AI drafting is hallucination and AI slop if prompts are not reviewed. A mature team uses both: system instructions that set voice and limits, plus AI generation that varies per candidate. Neither replaces the human edit.
What GDPR or compliance issues apply to AI outreach drafting?
Sending unsolicited messages to candidates is a first-touch data processing event. GDPR requires a lawful basis (often legitimate interest for professional sourcing) and a mechanism for candidates to opt out. Read GDPR first-touch outreach before you wire AI drafting into an automated send sequence. Do not let the model invent or elaborate on personal details it was not given; that can look like you hold data you are not entitled to. Log which profiles received which draft version so your DPA can answer questions about what data was used. Confirm your AI vendor's data-processing agreement covers candidate profile content passed as input.
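The "log which profiles received which draft version" advice above amounts to a small audit record per candidate. A rough sketch with invented field names, assuming you record only the profile data points actually passed to the model, so a DPA query can be answered from the log.

```python
# Sketch of an audit record keyed by candidate, so a question like
# "what data was used for this person's first touch?" can be answered.
# Field names and values are illustrative.
audit_log = {}

def record_first_touch(candidate_id: str, data_points: list,
                       draft_version: str) -> None:
    """Store exactly which profile fields were passed to the model and
    which draft version was sent. Nothing the model generated is stored
    as if it were held candidate data."""
    audit_log[candidate_id] = {
        "data_points": sorted(data_points),
        "draft_version": draft_version,
    }

record_first_touch("cand-0042",
                   ["public job title", "open-source project name"],
                   "variant-b / model-2026-04")
print(audit_log["cand-0042"])
```

Keeping the list of data points explicit also makes the legitimate-interest assessment easier to document, since you can show the processing was limited to public professional signals.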
Where can we practice AI outreach drafting safely?
Join an AI in recruiting workshop or the sourcing automation track where participants draft live against real job briefs, compare reply-rate data, and hear which prompt structures hold up under production volume. The Starting with AI: the foundations in recruiting course covers prompt design, tone calibration, and send-gate habits before you add automation. Membership office hours let you bring a live draft for peer critique without risking your actual outreach queue. Practice against a sandbox role first: draft 20 messages, edit them manually, track reply rates, and tune the prompt structure before scaling to a live requisition.
