AI with Michal

Glossary of terms

Shared language for TA, HR, and AI workflows — kept short so you can skim before deeper guides on the blog.

AI-native
An operating style where teams assume models and automation exist: reusable prompts, skills, clean data, and feedback loops — not one-off chats.
Few-shot prompting
Giving the model two or more short examples of the output you want so it matches tone, structure, and level of detail.
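A minimal sketch of what assembling a few-shot prompt looks like; the example pairs and the final task are invented for illustration, and the resulting string could be sent to any model.

```python
# Two short input → output examples, then the real task in the same shape.
# All content here is hypothetical sample data.
examples = [
    ("Senior backend engineer, 7 yrs, Python/AWS",
     "- Role: Senior Backend Engineer\n- Experience: 7 years\n- Stack: Python, AWS"),
    ("Junior designer, 1 yr, Figma",
     "- Role: Junior Designer\n- Experience: 1 year\n- Stack: Figma"),
]
task = "Lead recruiter, 4 yrs, LinkedIn Recruiter/Greenhouse"

parts = [f"Input: {raw}\nOutput:\n{formatted}" for raw, formatted in examples]
parts.append(f"Input: {task}\nOutput:")  # the model completes this last block
prompt = "\n\n".join(parts)
```

Because the examples fix tone, structure, and level of detail, the model tends to mirror them in its answer.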
Hallucination
When a model states facts or citations that are wrong or unsupported. Always verify names, employers, dates, and numbers against primary sources.
Large language model (LLM)
A model trained to predict text — useful for drafting, summarizing, and transforming content when given clear context and constraints.
Prompt chain
Splitting work into sequential steps (for example, outline → draft → critique) instead of one enormous prompt — often improves quality for complex tasks.
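The outline → draft → critique pattern above can be sketched as three sequential calls; `call_model` is a placeholder for whatever LLM API you use.

```python
def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a placeholder string.
    return f"[model response to: {prompt[:40]}...]"

def prompt_chain(topic: str) -> str:
    # Each step's output becomes the next step's input.
    outline = call_model(f"Outline a job post for: {topic}")
    draft = call_model(f"Write a draft following this outline:\n{outline}")
    critique = call_model(f"Critique and tighten this draft:\n{draft}")
    return critique

result = prompt_chain("senior data engineer")
```

Each step stays small and checkable, which is usually easier to debug than one enormous prompt.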
Retrieval-augmented generation (RAG)
Fetching trusted documents or database rows before generation so answers are grounded in your content — common for internal knowledge bases.
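A toy sketch of the retrieval step, using keyword overlap as a stand-in for real vector search; the documents and question are hypothetical.

```python
# Hypothetical internal documents, keyed by id.
docs = {
    "pto-policy": "Full-time employees accrue 20 PTO days per year.",
    "referrals": "Employee referrals earn a bonus after 90 days.",
}

def retrieve(question: str, k: int = 1):
    # Rank documents by how many question words they share (crude proxy
    # for the embedding similarity a real RAG system would use).
    words = set(question.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

question = "How many PTO days do employees get?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The model then answers from the fetched context instead of from memory, which cuts down on hallucinated policy details.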
Scorecard (hiring)
A structured rubric of must-haves, nice-to-haves, and red flags used consistently across reviewers — ideal for pairing with AI-assisted notes.
Structured output
Asking for bullets, tables, or JSON so results paste cleanly into ATS fields, sheets, or downstream tools without manual reformatting.
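One way this plays out in practice: ask the model for JSON, then validate it before it touches your ATS or sheet. The reply and field names below are assumptions for illustration.

```python
import json

# Hypothetical model reply after prompting for JSON with these exact keys.
model_reply = '{"name": "Dana Smith", "role": "Recruiter", "years": 4}'

required = {"name", "role", "years"}
record = json.loads(model_reply)          # fails loudly if the JSON is malformed
missing = required - record.keys()
if missing:
    raise ValueError(f"model omitted fields: {missing}")
```

A parse-and-check step like this catches a malformed or incomplete reply before it lands in downstream tools.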
Talent acquisition (TA)
The function that owns hiring strategy, sourcing channels, employer brand, and often recruiting operations — broader than filling single reqs.
Workflow automation
Connecting triggers and steps across tools (email, CRM, ATS exports, enrichment) so repeatable recruiting tasks run with explicit handoffs and QA.
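The "explicit handoffs and QA" idea can be sketched as an ordered list of steps with a gate in the middle; the step names and QA rule here are invented, not a specific tool's API.

```python
def enrich(candidate):
    # Stand-in for an enrichment lookup (e.g., adding a sourcing channel).
    candidate["source"] = "referral"
    return candidate

def qa_check(candidate):
    # Explicit QA gate: stop the run before export if data is incomplete.
    assert candidate.get("email"), "missing email — stop before ATS export"
    return candidate

def export_to_ats(candidate):
    # Stand-in for the final handoff to an ATS.
    return {"status": "exported", **candidate}

pipeline = [enrich, qa_check, export_to_ats]

record = {"name": "Dana", "email": "dana@example.com"}
for step in pipeline:
    record = step(record)
```

Real automations swap these stubs for email, CRM, and ATS integrations, but the shape — ordered steps with a checkpoint — is the same.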

Also see AI tools, guides by role, and free resources.