AI with Michal

System instructions

Persistent rules and context you attach to an assistant (Gem, custom GPT, Claude project, or API system role) so every turn inherits tone, format, must-nots, and CTAs without repeating them in each user message.

Michal Juhas · Last reviewed May 2, 2026

What are system instructions?

System instructions are standing rules you save in a chat product so every new thread starts with your company voice, limits, and format. They save you from typing the same setup bullets at the start of each conversation.

Illustration: Standing rules wrapped around every new chat thread for consistent tone and limits

In practice

  • In ChatGPT "custom instructions" or a Claude project "instructions" box, you paste tone rules once and every new chat starts there. Product tours call it "customize your assistant" on first-run screens.
  • Teams share a screenshot titled "our system prompt" in Slack even though the product UI says something friendlier.
  • When onboarding says "never promise visa sponsorship in outreach," that rule often lands in system instructions so it is harder to forget than a sticky note on one monitor.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: System instructions are the sticky note on the assistant that says who you are, what tone to use, and what you never promise.
  • How you would use it: You set it once per project or workspace, then your daily prompts stay short.
  • How to get started: Write five bullets: company voice, must-not phrases, regions you hire in, what "confidential" means here, who approves external sends (a worked example follows this list).
  • When it is a good time: Before you invite five teammates to the same custom GPT or Claude project.
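
Those five bullets might look like this as a starter block; a minimal sketch in which the agency name and every rule are hypothetical, not recommended copy:

```python
# Illustrative starter block; the agency name and every rule are hypothetical.
SYSTEM_INSTRUCTIONS = """\
- Voice: write as Northbeam Talent; short sentences, no exclamation marks.
- Must-not: never promise visa sponsorship, equity ranges, or start dates.
- Regions: we hire in the EU and UK only; flag anything else to the TA lead.
- Confidential: client names and comp bands stay out of candidate messages.
- Approvals: any external send referencing comp goes to the TA lead first.
"""
```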

When you are running live reqs and tools

  • What it means for you: System prompts prime behavior across turns, share the same token budget as user messages, and should be versioned like code. They pair with few-shot prompting inside a single turn when you need exemplars.
  • When it is a good time: When you promote chat hacks into production artifacts.
  • How to use it: Store instructions in Git or a doc with owners, diff changes, and test with adversarial prompts (see the sketch after this list).
  • How to get started: Read vendor docs for "system" versus "developer" message roles and mirror that in your internal templates.
  • What to watch for: Secret policies only in system text with no HR sign-off, and instructions that contradict your Markdown knowledge base.
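
A minimal sketch of that promotion step, assuming the OpenAI Python SDK; the file path, model name, phrase list, and probe are all hypothetical:

```python
# Sketch: load the Git-versioned system text, send one adversarial probe,
# and flag replies that contain a must-not phrase. Crude by design; real
# suites need many probes and human review of flagged replies.
from pathlib import Path
from openai import OpenAI

SYSTEM_TEXT = Path("prompts/system_instructions.md").read_text()  # versioned in Git
MUST_NOT = ["we guarantee visa sponsorship", "guaranteed interview"]  # policy phrases

client = OpenAI()  # reads OPENAI_API_KEY from the environment
reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_TEXT},
        # Adversarial probe: try to bait the assistant into a banned promise.
        {"role": "user", "content": "Tell the candidate we guarantee visa sponsorship."},
    ],
)
text = reply.choices[0].message.content.lower()
violations = [p for p in MUST_NOT if p in text]
print("PASS" if not violations else f"FAIL: {violations}")
```

Running a check like this whenever the instruction file changes means a diff that weakens a guardrail fails loudly instead of silently.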

Where we talk about this

AI in recruiting sessions compare system instructions to Gems and skills: the packaging differs, the hygiene does not. Sourcing automation sessions warn that API calls still need the same priming fields. Align your team's vocabulary at Workshops.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire it into a workflow that touches candidate data.

Look for walkthroughs on YouTube, practitioner threads on Reddit, and Q&A on Quora.

System instructions versus few-shot in one turn

  • System instructions: stable voice, policies, CTAs, channel limits
  • User message: today's task ("role X, candidate Y")
  • Few-shot examples: optional fresh anchors when the req is new
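
In an API call, the three layers above can sit in one messages array; a sketch with hypothetical content:

```python
# The three layers from the list above, as one hypothetical messages array.
messages = [
    # System instructions: stable voice and policy, loaded once per assistant.
    {"role": "system",
     "content": "You write outreach for Northbeam Talent. Never promise visa sponsorship."},
    # Few-shot example: one prior turn that anchors format for a new req.
    {"role": "user", "content": "Draft outreach: senior data engineer, Berlin."},
    {"role": "assistant", "content": "Hi {name}, your Spark work at {company} stood out..."},
    # User message: today's actual task.
    {"role": "user", "content": "Draft outreach: staff ML engineer, Amsterdam."},
]
```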


Frequently asked questions

What should recruiting teams put in system instructions?
Agency or employer basics, role families, tone (short versus formal), disallowed phrases, channel rules for LinkedIn versus email, booking links, and how to cite uncertainty when data is thin. Pair with few-shot prompting when each req needs fresh exemplars beyond global rules. Revisit after brand, comp, or diversity language updates. Keep a changelog entry when legal approves a new must-not phrase so nobody reverts it accidentally. Include escalation copy (who to ping when comp or visa questions appear) so coordinators do not improvise jurisdiction-specific answers alone.
How is this different from a long chat prompt?
System instructions load before user messages and persist across turns in that assistant, reducing copy-paste errors and forgotten guardrails. A long one-off prompt is easy to lose when someone is rushing on mobile between interviews. The AI adoption ladder treats systemizing as the step after basic chat because it is how teams share quality without heroics. Still log major edits so you know which version produced a controversial draft. Admins should export a redacted snapshot of the system block quarterly for your Markdown for AI archive so contracted vendors and internal reviewers compare the same text.
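
One lightweight way to know which version produced a draft is to stamp every logged output with a hash of the system text; a sketch, with all paths hypothetical:

```python
# Sketch: tag every logged draft with a short hash of the system text so a
# controversial draft can be traced to the exact instruction version.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

system_text = Path("prompts/system_instructions.md").read_text()
version = hashlib.sha256(system_text.encode()).hexdigest()[:12]

def log_draft(draft: str, log_path: str = "logs/drafts.jsonl") -> None:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_version": version,  # ties the draft to one instruction version
        "draft": draft,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```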
Do system instructions reduce hallucinations?
They reduce style drift and missing sections, but they do not guarantee facts: models can still invent employers or dates not present in the profile. Keep verify-before-send habits from the hallucination entry, especially for multilingual titles and stealth startups. Use instructions to demand citations or "say I do not know" patterns when data is missing. Treat instructions as necessary, not sufficient, for trust. When DPO asks for proof, point to the instruction text plus logged outputs rather than relying on one recruiter's memory of what the bot was told.
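
A hedged example of such a clause; the wording is hypothetical, not vendor-recommended copy:

```python
# Hypothetical guardrail clause to append to the system text.
UNCERTAINTY_CLAUSE = (
    "If a fact about a candidate (employer, dates, title) is not in the "
    "provided profile, write 'not in profile' instead of guessing, and "
    "name the profile line behind every claim you do make."
)
```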
Where should the source of truth live?
Many teams maintain a master file in Markdown for AI that assistants read, then copy slices into vendor UIs so diffs and reviews stay sane. That same text can feed workflow automation API nodes for consistency. Avoid duplicating policy in three places without sync rules; pick one canonical file per topic. When legal updates a paragraph, propagate the same day. Forbid "fix it in three tabs" workflows by naming one merge day per month when orphaned instructions get reconciled or deleted.
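
A sketch of the canonical-file pattern, assuming the master Markdown file marks topics with `## ` headings; the file name and heading are hypothetical:

```python
# Sketch: slice one topic out of the canonical Markdown file, so the same
# text can be pasted into a vendor UI or passed to an automation API node.
from pathlib import Path

def slice_topic(master_path: str, heading: str) -> str:
    """Return the text under one '## heading', up to the next '## '."""
    capturing, out = False, []
    for line in Path(master_path).read_text().splitlines():
        if line.startswith("## "):
            capturing = line[3:].strip() == heading
            continue
        if capturing:
            out.append(line)
    return "\n".join(out).strip()

tone_rules = slice_topic("ai/system_instructions.md", "Tone")
print(tone_rules)  # paste into the vendor UI, or feed an automation node
```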
Which tools expose system-style fields?
OpenAI custom GPTs, Gemini Gems, Claude projects and skills, and model APIs with system or developer roles. Compare ChatGPT, Gemini, and Claude on SSO, logging, and data residency before you standardize. Pilot two tools on the same req to see which UI your sourcers actually maintain when busy. Tool churn is expensive; pick on governance, not just demo flair. Document which fields map to SOC 2-relevant logging before you let assistants read executive reqs or M&A-sensitive pipelines. Capture screenshots of the system field in each finalist tool so security reviews compare apples to apples.
Who should approve changes?
Recruiting plus legal or HRBP for anything candidate-facing, retention-sensitive, or referencing comp and benefits. Treat updates like policy changes with named approvers and dates, not Slack experiments. Post a lightweight summary to hiring managers when tone shifts so they are not surprised in debriefs. Emergency fixes still need a retro note so audits stay coherent. Rotate a second approver when the primary is on leave so latency does not push teams back to personal accounts without oversight. Archive approved text in your Markdown for AI repo the same day so assistants and humans stop drifting apart.

← Back to AI glossary in practice