AI with Michal

Structured output

Asking a language model to return machine-parseable shapes (JSON, CSV columns, or rigid tables) instead of prose alone, so you can sort, filter, and automate the next step reliably.

Michal Juhas · Last reviewed May 2, 2026

What is structured output?

Structured output is when you ask the AI to answer in a fixed shape, like JSON fields or clear table columns. That makes it easy to sort, filter, or send the data to a sheet or another tool without parsing a long paragraph.

Illustration: Unstructured text converted into a neat table that connects to a spreadsheet

In practice

  • You ask the assistant for candidate notes as a small table with columns for name, fit note, and next step so you can paste into a tracker. Spreadsheet-friendly output is the daily face of structured output, even when nobody says JSON.
  • IT or ops tools sometimes show "export as CSV or JSON" for automations; recruiters meet that wording when someone wires a sheet to email alerts.
  • A hiring manager might say "three bullets max per answer," which is the same idea in a lighter form, without any schema talk at the kickoff.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: You ask the computer to answer in a form with boxes (JSON or a table) instead of a rambling paragraph, so the next step does not have to re-parse a long email.
  • How you would use it: You ask for five bullets, each under twelve words, with a yes or no at the end.
  • How to get started: Copy a JSON example from your vendor docs, fill it with fake data, run ten real profiles, then fix the schema (a toy example follows this list).
  • When it is a good time: When spreadsheets, ATS fields, or automations consume model answers.
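
Here is a toy version of that starting point: the three tracker columns from the example above (name, fit note, next step), rendered as JSON and filled with obviously fake data. The field names are illustrative, not copied from any vendor's docs.

```json
{
  "name": "Jane Example",
  "fit_note": "Strong SQL; no people-management experience yet",
  "next_step": "30-minute screen with the hiring manager"
}
```

Paste a shape like this into your prompt, ask the assistant to answer in exactly that form, and fix whichever fields turn out to be wrong after ten real profiles.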

When you are running live reqs and tools

  • What it means for you: Structured output constrains decoding through JSON schemas, function calling, or typed tool definitions, so downstream code can validate every response. It lowers some hallucination modes by shrinking the output space.
  • When it is a good time: When prompt chains hand off between systems or when compliance wants machine-checkable fields.
  • How to use it: Version schemas, reject malformed passes (a minimal check is sketched after this list), and never treat numeric "scores" as science without calibration.
  • How to get started: Pair with scorecard anchors and human review for anything candidate-facing.
  • What to watch for: Pretty JSON that hides wrong enums, and vendors implying "structured" equals "fair."
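
Rejecting malformed passes can be a short script rather than a platform feature. Below is a minimal sketch in Python, standard library only; the schema version string, field names, and score bounds are assumptions for illustration, not anyone's production contract.

```python
import json

# Minimal gatekeeper for one structured pass. All names here
# ("screening-v3", fit_score, confidence) are illustrative assumptions.
SCHEMA_VERSION = "screening-v3"
ALLOWED_CONFIDENCE = {"low", "medium", "high"}

def validate_pass(raw: str) -> dict:
    """Parse one model response; raise so bad rows are rejected, not averaged in."""
    record = json.loads(raw)  # malformed or truncated JSON fails loudly here
    if record.get("schema_version") != SCHEMA_VERSION:
        raise ValueError(f"schema drift: {record.get('schema_version')!r}")
    if record.get("confidence") not in ALLOWED_CONFIDENCE:
        raise ValueError(f"bad enum: {record.get('confidence')!r}")
    score = record.get("fit_score")
    if not isinstance(score, int) or not 0 <= score <= 10:
        raise ValueError(f"fit_score out of bounds: {score!r}")
    return record
```

Treat every rejection as a counted event, not a nuisance; the failure rate is the early-warning metric the FAQ below returns to.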

Where we talk about this

Sourcing automation workshops like structured passes because webhooks love JSON. AI in recruiting workshops warn that scores still need human meaning. See both at Workshops.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data.

  • YouTube
  • Reddit
  • Quora

Prose versus structured handoff

Output              Human read    Automation read
Long paragraph      Easy          Fragile
Table in Markdown   Medium        Medium
JSON / CSV          Harder skim   Robust


Frequently asked questions

Why bother with JSON for recruiting if we are not engineers?
Because downstream tools (Sheets, Make, n8n, ATS imports) need predictable columns, not paragraphs of vibes. A score, a short rationale, and three tags parse cleanly into filters and alerts humans can scan. Structured output also makes it obvious when the model drifted from the schema, which is cheaper to catch than rereading fifty prose summaries. Teach recruiters what each field means so they do not treat JSON as a magic black box. Run a tabletop with finance and TA ops where you deliberately break one row; teams learn faster from a failed parse than from another slide on APIs. Tie field definitions to your scorecard rows so hiring managers see their language reflected in exports candidates never see until humans approve.
What is a minimal JSON shape for screening assist?
Fields like fit_score (bounded integer), confidence (enum: low, medium, high), must_have_hits, gaps, and next_question keep reviewers oriented. Keep enums small so humans spot nonsense fast and automation can branch safely. Version the schema in the same place you version prompts. Add source_excerpt pointers when policy requires showing why a label appeared. Document how each field maps to ATS picklist IDs so imports do not create orphan values on Monday morning. Add a reviewer_id and model_version column even if leadership thinks it is overkill; auditors and product teams will thank you after the first dispute.
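
Assembled into one record, those fields might look like this. Every value below is fabricated, and the version strings are placeholders you would swap for your own conventions:

```json
{
  "schema_version": "screening-v3",
  "fit_score": 7,
  "confidence": "medium",
  "must_have_hits": ["5+ years Python", "led a hiring panel"],
  "gaps": ["no healthcare domain exposure"],
  "next_question": "Ask about the largest data pipeline they owned end to end",
  "source_excerpt": "Led a team of six engineers building ETL for claims data",
  "reviewer_id": "ta-ops-07",
  "model_version": "vendor-model-2025-11"
}
```

Small enums and bounded integers keep both the reviewers and the branching logic honest.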
How does this connect to workshop demos with scoring 0 to 10?
Live sessions show sheets with numeric scores plus rationales, then filter thresholds so hiring managers see how automation would behave before it touches candidates. Structured output is how you populate those columns without hand-typing every cell, but humans still own what happens after the filter fires. Always rehearse edge cases where a "7" should not auto-advance because context matters. Log thresholds and prompt hashes when you tune. Capture two anonymized counterexamples from your own reqs and replay them in front of the room so the group practices disagreeing with the model safely. Close with a written decision on who can move a threshold without waiting for the next workshop.
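
A sketch of what the demo filter does behind the sheet, assuming rows shaped like the JSON above: crossing the threshold only queues a row for human review, it never advances a candidate.

```python
ADVANCE_THRESHOLD = 7  # log this value and the prompt hash whenever you tune it

def review_queue(rows: list[dict]) -> list[dict]:
    """Flag rows for a human to look at; nothing here touches a candidate."""
    return [
        row for row in rows
        if row["fit_score"] >= ADVANCE_THRESHOLD and row["confidence"] != "low"
    ]
```
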
What breaks with structured output?
Schema drift when models invent new keys, overconfident scores without evidence, and silent truncation that cuts JSON in half on long inputs. Validate with a quick script or Sheets formula before you trigger workflow automation. Add human review for low-confidence enums instead of hoping the average looks fine. Treat validation failures as first-class metrics, not annoyances. Watch for PII accidentally landing inside JSON strings when recruiters paste full emails, and sanitize before logs hit SIEM tools your security team monitors. When vendors change temperature defaults, rerun golden-file tests so drift shows up before candidates receive broken templates.
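
Two of those failure modes fit in a few lines of Python: truncated JSON refuses to parse, which you can count instead of ignore, and a crude redaction pass keeps pasted emails out of your logs. The regex is a deliberate simplification; a real pipeline would cover far more PII than email addresses.

```python
import json
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")  # rough matcher, illustration only

def redact(raw: str) -> str:
    """Strip obvious email addresses before a payload reaches logs or SIEM tools."""
    return EMAIL.sub("[redacted-email]", raw)

def parse_or_flag(raw: str):
    """Truncated or malformed JSON becomes a counted failure, not a silent gap."""
    try:
        return json.loads(raw), None
    except json.JSONDecodeError as err:
        return None, f"parse failure at char {err.pos}: {redact(raw[:80])}"
```
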
Is structured output the same as a scorecard?
The scorecard rubric defines what to measure; structured output is the transport into tools. Pair them so model fields map to rows reviewers recognize. If fields drift from the rubric, you get fast, wrong decisions. Review quarterly with hiring managers to ensure labels still match how interviews run. Name a single schema owner who reconciles ATS exports, prompt JSON, and the live rubric after every req refresh. If assistants invent new trait names, block deployment until the mismatch is fixed, because speed without alignment ships bias faster.
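
The "block deployment on invented trait names" rule is one set difference, sketched here with made-up rubric rows standing in for your live scorecard:

```python
RUBRIC_TRAITS = {"communication", "sql_depth", "stakeholder_management"}  # placeholders

def unmapped_traits(model_fields: set[str]) -> set[str]:
    """Traits the assistant emits that no rubric row owns."""
    return model_fields - RUBRIC_TRAITS

extra = unmapped_traits({"communication", "sql_depth", "culture_vibes"})
if extra:
    raise SystemExit(f"block deployment: unmapped traits {sorted(extra)}")
```
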
Which tools support JSON mode well?
Check current vendor docs for ChatGPT, Claude, and any API you route through n8n; policies and JSON modes evolve. Pilot with ten rows, measure parse failures, then promote. Involve IT on key storage and logging before production. Keep a rollback plan when a vendor update changes defaults silently. Compare how each vendor handles strict JSON versus prose fallbacks when inputs get messy, and record which model powers each automation branch. Your security review should include sample payloads with redacted candidate data so approvers see realistic shapes, not toy examples.
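
"Pilot with ten rows, measure parse failures" can be as plain as the sketch below: save the raw model outputs from the pilot as files, re-run the check after any vendor or default change, and promote only when the rate stays at zero. The directory layout is an assumption.

```python
import json
from pathlib import Path

def parse_failure_rate(payload_dir: str) -> float:
    """Share of saved model outputs that no longer parse as JSON."""
    paths = sorted(Path(payload_dir).glob("*.json"))
    failures = sum(1 for path in paths if _fails_to_parse(path.read_text()))
    return failures / max(len(paths), 1)

def _fails_to_parse(raw: str) -> bool:
    try:
        json.loads(raw)
        return False
    except json.JSONDecodeError:
        return True

# e.g. promote only when parse_failure_rate("golden/pilot_rows") == 0.0
```
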
When should we avoid automation on structured scores?
Until calibration is done, legal agrees how scores influence decisions, and you can explain outcomes to candidates in plain language. Structured fields accelerate whatever bias the rubric encodes, so diversity and fairness reviews stay mandatory. If leadership wants auto-advance, require human sign-off on thresholds first. Document who can change a threshold after hours. Pause when panels disagree more than usual on the same trait, when new geographies join the hiring mix, or when regulators publish fresh AI hiring guidance. Publish a kill switch checklist recruiters can execute without filing a ticket so experiments stop quickly when something feels off.

← Back to AI glossary in practice