AI in Recruiting

Private cohort workshop for recruiting teams. These pages spell out the format, sample exercises, tooling, logistics, optional post-workshop engineering, and how to book.

Example agenda and sample exercises

Timings shift with cohort size and depth. Most of the time is spent rebuilding real artifacts (anonymized) and writing guardrails your team can reuse next week.

Modules (typical half-day / full-day)

  1. Operating model: draft vs decide (~20 to 30 min)

     Outcome: Same vocabulary for assisted steps vs steps that require a named human owner.

  2. Intake, spec, screen guide (~35 to 50 min)

     Outcome: HM-ready questions, falsifiable must-haves, and screen probes mapped to outcomes.

  3. Sourcing coverage without chaos (~35 to 50 min)

     Outcome: Parallel boolean and AI research plans merged into a weekly execution checklist.

  4. Screening memos hiring managers trust (~35 to 50 min)

     Outcome: Evidence-linked summaries with unknowns, risks, and recommended next steps.

  5. Outreach that is specific, short, policy-aware (~30 to 45 min)

     Outcome: Three variants plus red-team flags; fewer edits before send.

  6. Reporting leadership can act on (~25 to 35 min)

     Outcome: Exec bullets, bottlenecks with owners, explicit missing-data questions.

  7. Team playbook handoff (~15 to 25 min)

     Outcome: What to standardize Monday, what to measure lightly, what to revisit in 30 days.

Sample moments (marketing-safe)

Prompt frames are teaching scaffolding. Your team adapts them to your policies and templates. Do not paste real candidate data into a live classroom unless your security team signs off.

Intake: screener questions tied to outcomes

Context

Teams paste a messy hiring-manager email and a draft JD. The point is not polishing prose. The point is a spec you can defend in intake: clarifying questions, must-haves you can test in screen, contradictions flagged for the HM.

Prompt frame (template)

You are a senior recruiter. Given [HM notes] and [JD draft], produce: (1) five clarifying questions for the HM, (2) a must-have vs nice-to-have table, (3) five phone-screen questions tied to outcomes, not buzzwords. Flag contradictions.

Quality checks

  • Questions reference outcomes and constraints, not generic culture trivia.
  • Must-haves are falsifiable in an interview or work sample.
  • Contradictions are called out explicitly for HM resolution.

Before

A bullet list of adjectives (“humble, hungry, smart”) and ten generic screening questions copied from the internet.

After

A question set the HM can answer in a quick reply, a prioritized requirement table, and screen questions mapped to on-the-job scenarios for this specific role.
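
If your team wants to run this frame from a script instead of a chat window, a minimal sketch looks like the following. It assumes the OpenAI Python SDK and an API key in the environment; the model name, the build_intake_spec function, and its arguments are placeholders to adapt, not a prescribed setup.

    # A sketch only: fills the intake frame and sends it to a model.
    # Assumes the OpenAI Python SDK (pip install openai) and an
    # OPENAI_API_KEY in the environment; swap in your approved provider.
    from openai import OpenAI

    INTAKE_FRAME = """You are a senior recruiter. Given the HM notes and JD draft below,
    produce: (1) five clarifying questions for the HM, (2) a must-have vs
    nice-to-have table, (3) five phone-screen questions tied to outcomes,
    not buzzwords. Flag contradictions.

    HM notes:
    {hm_notes}

    JD draft:
    {jd_draft}"""

    def build_intake_spec(hm_notes: str, jd_draft: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use the model your org approves
            messages=[{"role": "user",
                       "content": INTAKE_FRAME.format(hm_notes=hm_notes,
                                                      jd_draft=jd_draft)}],
        )
        return response.choices[0].message.content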

Boolean + AI: two plans, one weekly checklist

Context

Recruiters build a boolean-first plan and an AI-assisted discovery plan, then merge them into one weekly execution list with owners and quality checks.

Prompt frame (template)

Role: [title], location: [region], constraints: [visa/salary/tools]. Produce (A) three boolean strings with rationale, (B) ten non-obvious talent pools or communities to research, (C) five outreach angles that are specific and respectful. Avoid stereotypes.

Quality checks

  • Boolean strings are testable and not absurdly long.
  • Talent pools are specific enough to search tomorrow.
  • Outreach angles avoid protected-class proxies and sensational claims.

Before

One giant OR-string and a generic ‘personalized’ outreach template.

After

A compact boolean set, a research shortlist, and angle options a sourcer can adapt per candidate with minimal editing.
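
The "testable and not absurdly long" check automates well. Below is an illustrative lint pass; lint_boolean and its thresholds are assumptions to tune against your sourcing tools, not documented platform limits.

    # Illustrative lint pass for generated boolean strings. Thresholds
    # are assumptions to tune, not documented limits.
    def lint_boolean(query: str, max_chars: int = 400,
                     max_or_terms: int = 12) -> list[str]:
        issues = []
        if len(query) > max_chars:
            issues.append(f"too long: {len(query)} chars (limit {max_chars})")
        if query.count("(") != query.count(")"):
            issues.append("unbalanced parentheses")
        if query.count('"') % 2 != 0:
            issues.append("unbalanced quotes")
        or_terms = query.upper().count(" OR ") + 1
        if or_terms > max_or_terms:
            issues.append(f"{or_terms} OR-branches; split into multiple strings")
        return issues

    # A compact string passes; one giant OR-string comes back with issues.
    print(lint_boolean('("staff engineer" OR "principal engineer") AND (Python OR Go)'))
    # -> []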

Screening memo hiring managers will read

Context

Replace long paragraphs with a fixed memo layout: evidence from notes, unknowns, recommended next step, and confidence level.

Prompt frame (template)

Summarize [anonymized notes] into a hiring-manager memo: strengths, risks, unknowns, 3 targeted follow-ups, and a recommended decision (advance/hold/reject) with confidence (low/med). Do not invent facts.

Quality checks

  • Every claim ties to a cited note or explicitly says ‘not evidenced’.
  • Risks are job-relevant, not subjective vibes.
  • Recommended decision includes what would change the call.

Before

A long narrative recap that forces the HM to re-read everything or trust vibes.

After

A one-page memo with explicit gaps and next probes: faster HM decisions and a clearer record of what was known at decision time.
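
One way to hold the memo to its fixed layout is to treat it as a typed structure and validate it before it reaches the HM. The sketch below is illustrative; ScreeningMemo and its fields mirror the prompt frame above, not a prescribed schema.

    # Illustrative only: the memo as a typed structure with a validation
    # pass, so a memo cannot reach the HM with a section silently missing.
    from dataclasses import dataclass

    @dataclass
    class ScreeningMemo:
        strengths: list[str]     # each tied to a cited note or "not evidenced"
        risks: list[str]         # job-relevant only, no vibes
        unknowns: list[str]      # gaps to probe, not guesses
        follow_ups: list[str]    # three targeted questions
        decision: str            # "advance" | "hold" | "reject"
        confidence: str          # "low" | "med"
        would_change_call: str   # what evidence would flip the decision

        def validate(self) -> list[str]:
            problems = []
            if self.decision not in {"advance", "hold", "reject"}:
                problems.append(f"unknown decision: {self.decision!r}")
            if self.confidence not in {"low", "med"}:
                problems.append(f"unknown confidence: {self.confidence!r}")
            if len(self.follow_ups) != 3:
                problems.append(f"expected 3 follow-ups, got {len(self.follow_ups)}")
            if not self.would_change_call:
                problems.append("missing: what would change the call")
            return problems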

Candidate outreach: short, specific, policy-safe

Context

Rewrite a weak outreach draft into three variants (direct, warmer, referral-led) with a red-team pass for overclaims.

Prompt frame (template)

Rewrite this outreach draft in three variants, max 90 words each. Constraints: no salary promises, no ‘guaranteed’ language, include one verifiable company fact, include a single clear CTA. Flag anything that could violate a typical enterprise candidate-comms policy.

Quality checks

  • No unverifiable superlatives (“world-class”, “best AI team on Earth”).
  • CTA is one ask (book time / reply with X), not five links.
  • Policy flags are listed even if the draft looks ‘fine’.

Before

A bloated template with six links and vague ‘amazing opportunity’ language.

After

Three tight variants with one proof point, one ask, and an explicit policy flag list for the sender to fix.
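
The red-team pass can also run as a script before send. This sketch flags guarantee language, unverifiable superlatives, link pile-ups, and over-long drafts; red_team_outreach and its pattern list are a starter set to extend from your own comms policy, not a complete one.

    # Illustrative red-team pass for outreach drafts.
    import re

    FLAGGED_PATTERNS = [
        r"\bguaranteed?\b",
        r"\bworld[- ]class\b",
        r"\bbest\b.{0,40}\b(team|company)\b",
        r"\bamazing opportunity\b",
    ]

    def red_team_outreach(draft: str, max_words: int = 90) -> list[str]:
        flags = []
        for pattern in FLAGGED_PATTERNS:
            if re.search(pattern, draft, re.IGNORECASE):
                flags.append(f"matches policy pattern: {pattern}")
        word_count = len(draft.split())
        if word_count > max_words:
            flags.append(f"{word_count} words; target is {max_words} or fewer")
        link_count = draft.lower().count("http")
        if link_count > 1:
            flags.append(f"{link_count} links; keep one clear CTA")
        return flags

    print(red_team_outreach("An amazing opportunity with the best AI team on Earth!"))
    # -> two policy flags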

Pipeline reporting leadership can use in a staff meeting

Context

Turn weekly bullet notes into a short exec read: bottlenecks, owners, and questions when numbers are missing. No invented pipeline math.

Prompt frame (template)

Given [weekly bullet notes], produce: (1) a 5-bullet executive summary, (2) the top 3 bottlenecks with a suggested owner for each, (3) questions to validate any missing numbers. If numbers are not present, say so. Do not fabricate.

Quality checks

  • No fabricated metrics; placeholders are explicit.
  • Owners are roles, not blame assignments.
  • Asks are decision questions, not essays.

Before

Paragraph updates that hide whether the issue is sourcing, HM responsiveness, or offer stage.

After

A short exec read with unknowns and validation questions so the team picks the next fix instead of re-reading noise.
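
"No fabricated metrics" can be enforced with a coarse guard: any number in the generated summary that does not already appear in the source notes gets flagged. The sketch below is illustrative and deliberately strict; legitimate derived numbers (sums the model computed) get flagged too, which is usually the right failure mode for an exec read.

    # Coarse illustrative guard: every number in the summary must already
    # appear in the source notes, otherwise it is flagged.
    import re

    NUMBER = re.compile(r"\d+(?:[.,]\d+)?%?")

    def find_unsourced_numbers(source_notes: str, summary: str) -> list[str]:
        source_numbers = set(NUMBER.findall(source_notes))
        return [n for n in NUMBER.findall(summary) if n not in source_numbers]

    print(find_unsourced_numbers(
        "Offers out: 2. Screens booked: 9. HM feedback pending on 4.",
        "2 offers out; 12 screens completed; 4 awaiting HM feedback.",
    ))
    # -> ['12']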