AI with Michal

AI slop

Generic model output that reads long, vague, and obviously templated, usually because the prompt lacked context, examples, and channel-specific constraints.

Michal Juhas · Last reviewed May 2, 2026

What is AI slop?

AI slop is a casual name for bland, generic text that reads like a cheap template and turns people off. It usually happens when nobody adds real facts, edits, or a clear point of view.

Illustration: Generic duplicate drafts versus one polished message after human editing

In practice

  • Candidates joke in forums about identical outreach that opens with "I came across your profile" and three vague compliments. Twitter and Reddit use "AI slop" for that bland, samey tone in text and images.
  • Hiring managers forward a draft JD and write "this feels like ChatGPT" when it is padded, generic, and sounds like every other company.
  • Internal comms teams warn marketers not to ship "slop" when brand voice disappears behind bland blocks nobody edited.

Quick read, then how hiring teams use it

This is for recruiters, sourcers, TA, and HR partners who need the same vocabulary in debriefs, vendor calls, and policy reviews. Skim the first section when you need a fast shared picture. Use the second when you are deciding how it shows up in the ATS, sourcing tools, or candidate communications.

Plain-language summary

  • What it means for you: Slop is when text looks fine at a glance but feels the same as every other email: vague praise, fake warmth, and no proof anyone read your profile.
  • How you would use it: You use the label to coach teammates: "this reads like slop" means tighten facts and voice, not "never use AI."
  • How to get started: Collect three outbound messages you would not answer yourself. Highlight the fluffy lines. Replace them with one specific fact from the profile or posting.
  • When it is a good time: When response rates drop after you "scaled" outreach, or when hiring managers say candidates sound like bots.

When you are running live reqs and tools

  • What it means for you: Slop is a failure mode of cheap generation: low marginal cost per draft, weak system instructions, no structured output, and no human taste pass.
  • When it is a good time: When you review employer brand, recruiter scorecards, or vendor templates that ship to thousands.
  • How to use it: Add constraints (must quote two resume facts, ban words like "delve"), rotate examples in few-shot prompting, and keep a living "do not sound like this" folder.
  • How to get started: Re-read How to write better AI prompts and audit ten sends from last week.
  • What to watch for: Blaming the model instead of the prompt, and declaring victory when volume went up but quality signals went flat.
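The constraints above can be wired into a tiny pre-send check. A minimal sketch in Python, assuming an illustrative banned-phrase list, length cap, and required fact count (these defaults are placeholders, not a shipped tool):

```python
# Illustrative defaults; tune to your own templates and brand voice.
BANNED_PHRASES = ["delve", "i came across your profile", "impressive journey"]
MAX_WORDS = 120          # length cap from the system instructions
MIN_RESUME_FACTS = 2     # first touch must quote at least two profile facts

def lint_draft(draft: str, resume_facts: list[str]) -> list[str]:
    """Return a list of slop warnings; an empty list means the draft passes."""
    warnings = []
    lowered = draft.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            warnings.append(f"banned phrase: {phrase!r}")
    if len(draft.split()) > MAX_WORDS:
        warnings.append(f"over length cap of {MAX_WORDS} words")
    quoted = sum(1 for fact in resume_facts if fact.lower() in lowered)
    if quoted < MIN_RESUME_FACTS:
        warnings.append(f"only {quoted} resume fact(s) quoted, need {MIN_RESUME_FACTS}")
    return warnings
```

Run it over last week's ten sends before the human taste pass: it flags missing inputs, it does not certify a draft as good.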

Where we talk about this

Workshops name slop when we compare drafts side by side: the same hallucination guardrails apply whether the text came from ChatGPT or a tired human pasting a 2021 template. Join Workshops if you want live critique of real snippets.

Around the web (opinions and rabbit holes)

Third-party creators move fast. Treat these as starting points, not endorsements, and double-check anything before you wire candidate data.


Symptom checklist

Signal → likely missing input:

  • Generic praise → role and company specifics
  • Wall of bullets → a length cap in the instructions
  • Wrong tone → negative examples to steer away from


Frequently asked questions

Why do candidates notice it fast?
They read dozens of inboxes a week and pattern-match instantly: stacked compliments, identical paragraph shapes, fake specificity ("your impressive journey"), and CTAs that ignore what they actually do. That friction costs reply rates and employer brand even when the underlying role is strong. Live cohorts compare side-by-side drafts so recruiters feel the difference between "fluent" and "credible." Treat slop as a quality metric you track alongside volume, not a joke you hide from leadership. Candidates read that laziness as a preview of how they will be managed once hired.
What is the fastest fix?
Tighten inputs before you swap models: add real constraints (company size, stack, location policy), cap length, ban phrases you hate, and require two resume-sourced facts in every first touch. System instructions plus a rotating set of on-brand exemplars beat another paragraph of adjectives. Pair with few-shot prompting so new reqs inherit tone without retyping. Review ten sends weekly as a team; calibrate what "good" means so junior recruiters are not guessing alone. Log the top three banned phrases weekly so you see whether templates drift before reply rates crash.
Does a better model remove slop?
It can nudge tone slightly, but weak prompts produce weak text on any frontier model. Workshops keep returning to examples, checklists, and Markdown for AI hygiene rather than model shopping, because the failure is usually missing context, not missing parameters. Spend budget on reviewer time and a shared knowledge base before you chase the next release note. If leadership wants a demo, show the same prompt on two models; the spread is rarely as wide as the spread between two recruiters with different habits.
How does automation change the risk?
Workflow automation scales whatever quality you feed it: one bland template becomes hundreds of bland sends before a human notices. Add hard gates (draft-only folders, approval queues, caps per domain) until drafts consistently pass your bar in sampling. Log which template version sent so you can roll back fast when a hiring manager flags tone. Automation should inherit hallucination checks, not bypass them because "it was automated." Add sampling dashboards that alert when median message length spikes, often the first sign slop is shipping at scale.
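The median-length alert described above is a few lines of arithmetic. A sketch, assuming a 1.5x spike threshold picked purely for illustration:

```python
from statistics import median

def length_spike(baseline_word_counts, recent_word_counts, threshold=1.5):
    """True when the median sampled message length jumps past
    threshold x the baseline median, an early sign a padded
    template is shipping at scale before reply rates move."""
    return median(recent_word_counts) > threshold * median(baseline_word_counts)
```

Feed it word counts from your sampling dashboard, and pair it with the template-version log so a flagged spike maps to a rollback target.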
Is "slop" unprofessional language for executives?
Externally, prefer plain language like "generic AI drafts" if the shorthand grates; internally teams still use "slop" as a fast signal for "reads synthetic." Pair the concept with concrete rewrite guidance and before/after samples in enablement decks so finance sees ROI, not meme vocabulary. The goal is shared diagnosis, not shame. If executives only hear "do not use AI," you lose the chance to teach what good augmented writing looks like on real reqs. Offer a one-slide glossary appendix so boardrooms get precise language without diluting the operational signal your sourcers rely on.
Where can we practice better drafts?
Read How to write better AI prompts, build Markdown for AI packs hiring managers actually reference, then rehearse in a workshop or the Starting with AI: the foundations in recruiting course with anonymized examples. Bring redacted threads where a strong hire almost ignored you because the mail felt fake. Practice is social: reviewers catch tells lone writers miss. After each session, assign pairs to review one another's live drafts for a week so habits stick beyond the classroom. Close with a shared checklist of banned phrases your team actually retires, not only discusses.

← Back to AI glossary in practice