Stop Sloppypasta (stopsloppypasta.ai)
597 points by namnnumbr | Mar 15, 2026

44 Comments

namnnumbrMar 15, 2026
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and wrote this rant to explain why it's rude, along with some guidelines for what to do instead.

sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

ares623Mar 15, 2026
I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.
breakingcupsMar 16, 2026
Slopyright, transloptions, one-stop-slop...
MagicMoonlightMar 16, 2026
It’s just perfect. You take a bunch of output and you just slop it into git or slop it into teams. It’s the perfect verb/noun combo.
stabblesMar 15, 2026
I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe with the oracle, the latter is people tired of having to look something up for others.
verdvermMar 15, 2026
I would say LMAAFY is like LMGTFY, whereas sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.
uniq7Mar 15, 2026
This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.

How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

verdvermMar 15, 2026
I've had some luck pointing out where the AI is wrong in their sloppypasta, as delicately as one can. Avoiding shame or embarrassment can be a powerful motivator.

The most interesting incident for me was having someone take our Discourse thread, paste it into AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post back the response, which lambasted me. The mods handled that one before I was aware, but I then did the same thing, giving different prompts, and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.

sawsimilarMar 16, 2026
Ooh, I saw a very similar situation. A user went to an AI and asked "Which user was disrespectful first" to dunk on another.

The person being targeted just prompted the same AI with "Which user has thin skin" and instantly the AI turned on the other person. Then the moderators got involved and told the first guy to stop using AI as a genital pleaser.

verdvermMar 16, 2026
I asked Gemini what it thought. In one of the modes, it said bringing an AI to a discussion is like bringing a gun to a knife fight: using AI was like having a rhetorical weapon and advantage in what everyone thought was a human-to-human forum.
namnnumbrMar 15, 2026
I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.

I've found success having sidebar conversations with the colleague (i.e., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior. It may also be useful to propose or contribute to a broader policy on appropriate AI use, and leverage that policy as justification for the conversation.

kace91Mar 15, 2026
>How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Pattern rather than person? General team reviews or the like. As long as it's not tech leadership pressing for it...

userbinatorMar 15, 2026
> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"

uniq7Mar 15, 2026
If I tell someone literally "What value do you have if you're just acting as a pipe to the AI?", I'm pretty sure my manager will schedule a quick 1:1 to ask me why I'm telling peers that they have no value.
giantrobotMar 16, 2026
Schedule the 1:1 first to let your manager know your peers have no value.
userbinatorMar 16, 2026
Your manager should then have a meeting with those coworkers too, or their manager(s). Depending on whether the company's leadership position is "AI at all costs", they may reconsider if they realise blind trust in AI is creating problems.
MagicMoonlightMar 16, 2026
Yeah. Plus, the sloppers tend to be highly ranked.
mattbeeMar 16, 2026
"I'm sorry to ask, but have you forwarded me unedited output from an LLM? I'd rather hear what you think!"
causalMar 16, 2026
That's about as polite as you can get, and it's still risky: people get defensive, the output might NOT be from an LLM, etc.

That's the asymmetry of the problem: Writing with AI delegates the thinking to the reader as well as all the risk for correcting it.

JumpCrisscrossMar 16, 2026
> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

You don’t. You keep these arguments handy for ignoring their output until it’s germane.

bagacrapMar 16, 2026
Yeah it's tough. I tend to take the path of just responding with one line to their wall of text. What are they going to do, send a second wall of text?
efilifeMar 16, 2026
And what do they usually do?
archagonMar 16, 2026
> How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

Embrace the tension. Tension is human.

uniq7Mar 16, 2026
Outside of work I wouldn't mind, but I spend 8h/day there and am forced to work with these people, so I'd prefer to keep the drama out so that I can focus on solving problems.

The other person already demonstrated a lack of professionalism by sharing unverified AI slop so, in case of conflict, I wouldn't be surprised if they continued acting unprofessionally by spreading false rumors, unnecessarily escalating the situation to higher ups, secretly sabotaging the project, etc.

incognito124Mar 15, 2026
namnnumbrMar 15, 2026
100%! This was inspired by, and quotes, "It's rude to show AI output to people". Thanks for linking the discussions!
madroxMar 15, 2026
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

valicordMar 15, 2026
> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.

madroxMar 15, 2026
> If I'm asking humans, I want to see human responses

I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"

Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people. That they're worthy of someone else's time and attention.

And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).

valicordMar 15, 2026
> It shouldn't matter as long as it addresses your ask

But it doesn't? I'm more than capable of using Google and chatgpt myself. If I was looking for a machine generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means that either the slop answer is not sufficient for some reason or that I want to hear from actual humans that have subjective experiences that an LLM cannot.

Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.

madroxMar 16, 2026
I think it is reasonable, yes, but I don’t think it’s ever been reasonable to expect reasonableness on the internet. We have a difficult enough time showing each other decency.
YurgenJurgensenMar 16, 2026
Then why even have this discussion in the first place? You weren’t expecting any reasonable responses to it, after all.
coldteaMar 16, 2026
Do you only do stuff where you expect the outcome to be good?

Perhaps they did it for the off chance of a good response.

torawayMar 16, 2026
As an example of this, I am currently comparing two different models of Android e-readers from a Chinese brand where the tech specs are all published but there aren't a lot of good comparative reviews. Plus, specs like battery capacity are nearly identical in mAh, but for e-readers especially, Android optimization, drivers, etc. make a gigantic difference.

So I have been Googling for "Reader X vs Reader Y review"(/comparison/etc) hoping to find Reddit comments or non-spam blog posts from people who actually own both to compare screen and battery life. I found a reddit thread comparing them directly and lo and behold the first comment is someone saying "I own both but honestly you could just ask ChatGPT for this". Fortunately a couple other people responded...

When I ask Gemini or ChatGPT, all I get is regurgitation of the tech specs (that are all mostly identical) plus summarized SEO spam reviews (that were probably written by another LLM based on those same tech specs) and it's totally unhelpful. So for this, I absolutely do NOT want an OpenClaw bot to respond as if they've physically used the devices and it would be actively enraging to learn a "helpful" comment "answering" the question was actually just an LLM impersonator.

JumpCrisscrossMar 16, 2026
> shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y”

The people copy-pasting slop almost never excerpt the relevant response. As a result, you get non-concise text you have to triple check. This is functionally useless to the point of being fine to skip.

hombre_fatalMar 16, 2026
Exactly. If you can find the answer for someone with AI, then by all means use it. But at least filter, curate, and verify it into an answer.
mpalmerMar 16, 2026
> People want to feel like they're connecting to people. That they're worthy of someone else's time and attention

They are achieving the exact opposite. I don't connect with the person who sends me slop. And they send me content that is a waste of my time and attention, because I have to vet it. Why would I trust someone - how can I ever connect with them - when the only thing I know about them is they take shortcuts?

AurornisMar 16, 2026
> It shouldn't matter as long as it addresses your ask, yet it does.

If the LLM output is concise and efficient I don’t actually care that it’s LLM output.

My problem is that much of the LLM prose feels like someone took their half-baked idea and asked the LLM to put a veneer of quality writing on top of it. Then you waste your time reading it to parse out the half-baked idea hiding among the wall of text.

californicalMar 16, 2026
Yes exactly

If a person has a shitty idea that sounds good, they start writing about it. If they exercise some care in their writing, the act of writing itself is enough to make them realize that their idea is shitty.

By the way, it happens to me all the time! Even just on HN, I’ve bailed halfway through writing a comment because I realized that I didn’t know what I was talking about, lol.

But an LLM will gladly take that shitty idea and expand it into a very plausible article/message/post that seems reasonable if you don't think very critically about it. And it'll be written with such an apparent level of care that you'd assume any human author would have been fact-checking themselves the whole time.

So it forces the reader to think even more critically, rather than letting our subconscious try to judge authenticity of the writer through the language they use.

For example, when someone says "my WiFi is broken" while referring to the fact that their computer is dead, we can quickly judge them as "not an expert at computers". But if they say "my M.2 drive has gone bad", we inherently assume they have some understanding. When the first person uses LLMs to write, they sound as informed as the second person, even if they are completely clueless and wrong.

eucyclosMar 16, 2026
In my case, it's because it doesn't address my ask, which is why I didn't ask an ai in the first place. The only person I know who does sloppypasta is my brother in law. I know he means well, but when I ask his opinion I want the perspective of someone in his demographic. If a generic ai response met my needs, I wouldn't be asking him.
coldteaMar 16, 2026
>I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y". Because it's probably not actually about the content but the sense of connection.

It's also about the content. Generic slop I can get on demand from an LLM myself, vs a novel insight.

heavyset_goMar 16, 2026
I'm purposely talking to a person and not a chatbot.

So it does not meet the bare minimum of addressing my ask, the premise of the ask hinges on a discussion with a real person.

taosxMar 16, 2026
I think it should matter. When you ask the AI something, you are in a particular frame of mind and have a specific context; the question itself holds value and context that might completely change how you parse the answer, or at least how difficult that is.

What I'm asking and the AI's response, passed through an intermediary, lose some context (the prompt); it's like the telephone game, where the data becomes more and more distorted. That's why people don't have an issue with their own AI-generated answers.

Another issue is that when I'm talking with someone and parsing through what they've said I'm considering them, as a person, taking all available context (some of this might happen unconsciously).

In any case I don't think there is an easy solution to the problem.

MagicMoonlightMar 16, 2026
We can tell by your fury that you’re a slop poster.

I don’t want a random person’s use of an AI to be slopped at me. I don’t know what they asked it, a lot of the words are made up, and I have to go through the effort of decoding it.

If I wanted an AI answer I would ask an AI. AI slop is made up. It’s like handing me a paste of google search results. It’s creating work for me.

falcor84Mar 16, 2026
I am really into this approach of AI being used as a user-agent.

In particular, I've been thinking a lot about educational content, and what I'd love to ask educational providers for is not AI-generated content, but rather carefully human-built curricula offered in a structured manner, which my own AI could then use to create dynamic content for me.

namnnumbrMar 15, 2026
I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared or guide discussions around etiquette like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.

I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.

(The other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal human-written text.)

AeolunMar 15, 2026
Yes, I can replace the link to nohello in my automated responses now :)
lovemenotMar 15, 2026
Couple of expressions from pre-AI culture: "RTFM", "Google is your friend". These were well-used because they are directed, pithy, abrasive.

(n)amow(?): (not) All my own work ?

username223Mar 16, 2026
Good point: RTFM and a wall of slop are both ways of telling someone that responding to them is not worth your time, and both are ruder and more time-consuming than simply saying nothing. Explaining the culture of RTFM, i.e. "if there was any way you could possibly have found the answer otherwise, you should never have asked the question", to non-tech friends usually results in disbelief.

But the slop-wall is even worse, as it wastes the questioner's time in figuring out that they're just getting slop. At least RTFM is efficient.

madroxMar 16, 2026
I think you'll find you get farther by offloading this unpleasantness to an AI and open-sourcing it than by teaching etiquette to the internet, a place not known for its decency.
YurgenJurgensenMar 16, 2026
There’s a certain very satisfying force to turning something into a static website that you can point people at. The Internet equivalent of “don’t make me tap the sign”; especially in an era of AI-slop.
no-name-hereMar 16, 2026
Clickable links for URLs mentioned in parent comment:

https://nohello.net

https://dontasktoask.com

mcphageMar 15, 2026
> We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

Well, cat videos make people happy.

jjgreenMar 15, 2026
This Firefox extension replaces Daily Mail pages by pictures of kittens https://addons.mozilla.org/en-US/firefox/addon/kitten-block/
madroxMar 16, 2026
Touche
waterTanukiMar 15, 2026
I find your comment disingenuous at best.

> The internet was not a bastion of high quality content or discourse pre-AI.

I have read thousands upon thousands of pages of AI-related discourse, watched hundreds of videos since 2022, maybe even a thousand now on it. NEVER at any point in time did people opine for the "high quality" internet of before. They opined for the imperfect HUMAN internet of before. We are now seeing once pristine, curated corners of the internet being infected with sloppypasta.

This is quite a broad brush to paint the internet with. It's like saying The Earth is not a bastion of warzones/peaceful places to live. That is HIGHLY dependent on location.

marcus_holmesMar 16, 2026
Sorry, not related to your point, but the language:

To "opine" is to give an opinion on something.

To "pine" for something is to wish for it, usually in a nostalgic sense.

I get how the two are related and can be confused, especially when you're talking about comments on the web. Just thought I'd clarify.

madroxMar 16, 2026
Even before AI, the human social internet was loaded with bots and disingenuous actors. You want the imperfect human internet that is also pristine and curated. I've been socializing on the internet since 1994, and I feel fairly confident in sharing that this never existed, except in nostalgia.

If that's what you're pining for, you're going to have to find a highly protected part of the internet that is walled off from untrusted actors. However, that's always been the solution, and AI doesn't change that.

fzeroracerMar 16, 2026
And since the foundation of the internet, the correct response to bots and disingenuous actors has been to a) ignore them, b) ban them, and c) ostracize them. We're talking about basic behaviors that have been understood since Usenet, something you surely should be aware of since you grew up in that era.
madroxMar 16, 2026
I absolutely agree with this. We did not tell bot operators to "do better" like this manifesto is trying to do, which is my whole point.
lich_kingMar 16, 2026
I don't think that "it's more of the same" is a good way to think about it. The internet contained a lot of low-quality content, but even low-quality content used to be fairly expensive and time-consuming to produce. Further, you could immediately discern bottom-of-the-barrel content-farmed nonsense by the writing style alone. Now, LLMs make it practically free to generate unlimited amounts of slop that drowns out human-written stuff, and they can imitate the style hints we used to depend on for quick screening.
madroxMar 16, 2026
Yet how are the alternative ways of thinking about it better? Spending your time angry about what others can do? In any era, that’s a poor life philosophy.

The problem is the same as it has always been: figure out how to use your time and attention effectively.

lich_kingMar 16, 2026
A sufficient number of people being angry about something is how you end up with social norms. These norms will shape how the technology is used.

Conversely, if your take is that there's no point being angry and we should just take it in stride, that just emboldens the producers of slop.

madroxMar 16, 2026
You're reading too much into my words if you think I'm suggesting we should take it in stride.

I think we should accept that trying to enforce social norms is a waste of time as that will only work on the politest part of the internet. Instead, focus on what you can control: better mechanisms for managing your attention and time.

beepbooptheoryMar 16, 2026
Is it possible to be critical without being angry? Are the only options here misplaced ire or total quiescent fatalism? Does the site here even seem excessively angry?
SpicyLemonZestMar 16, 2026
Strategic, directed anger is an important component of using your time effectively. It sends a clear signal that certain kinds of behavior are unacceptable and people who'd like continued access to your time had best not engage in them. You shouldn't go around yelling at people every time you get a bit frustrated, but you should and I do express anger when someone signs their name to LLM-generated Slack responses.
JumpCrisscrossMar 16, 2026
> I don't have a lot of sympathy for people angry at this type of behavior

I ignore it. But if that isn’t an option, this sort of writing can help you convince someone in power around you it’s okay to ignore it.

slackbaitnowMar 16, 2026
I am sorry, but in what way is everyone letting the "We've been creating bait content for a long time" comment slide?

Did you even read the article? It is about person-to-person interactions. The three examples were:

* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AIslop)

* Someone being asked for their expertise and responding (but it's generic and misfitting AIslop)

* Someone comes with a problem thesis looking for help (but it's generic and misfitting AIslop)

The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.

The first one would be impossible because the person would have to write the unhelpful response themselves, and they wouldn't find that many words on their own. You could ignore them or pick it apart easily. The last one would be impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.

What kind of workplace hellscape do you work in where people posting low-effort bait on SLACK was the norm? The premise of this reply is entirely nonsensical.

AurornisMar 16, 2026
> The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.

Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.

Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.

bandramiMar 16, 2026
Everybody wants to use LLMs to produce things, and absolutely nobody wants to consume the things that LLMs produce. This is the fundamental reason this is all going to collapse, unless we find a way for producers to pay consumers to consume their LLM output.
eucyclosMar 16, 2026
Gotta disagree. I've found several great new YouTube channels that clearly use AI for everything but the script writing. I assume it's an enthusiastic and smart niche expert who lacks the charisma to make videos in addition to doing the research. I'm very glad AI is filling in those people's weak spots.
grey-areaMar 16, 2026
How would you know it’s an enthusiastic and smart expert creating the content you’re consuming, do you have the subject matter expertise to judge that?

The odds are far higher it’s somebody who knows very little about anything but wants to make money from the gullible.

ngetchellMar 16, 2026
How do you know the scripts aren't AI generated?
leptonsMar 16, 2026
One person's slop is another person's treasure, I guess. I've seen a lot of slop on Youtube, and I block the channels putting it out. It's pretty awful. They use AI narration that can't pronounce simple common phrases correctly. I'm not wasting my time with that garbage, I'd rather give views to actual people producing good content. I don't have time for slop.
shimmanMar 16, 2026
What are these youtube channels, care to share their names?
hastily3114Mar 16, 2026
>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

The problem is that getting an AI to answer a question is trivial. If I wanted to know what an AI has to say about the topic, I would just ask myself. Sending AI output has, as the author writes, the same connotation as sending a LMGTFY link. It does not provide me any value at all, I know how to write a question to an AI, just as I know how to use Google.

amarantMar 16, 2026
I'm starting to realise I might have completely misunderstood the whole lmgtfy thing. I thought it was a semi-rude way to call someone out for asking lazy questions instead of trying to find the answer themselves.
coldteaMar 16, 2026
>I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI.

Which is irrelevant. TFA is talking about personal communication (and the examples are from a business setting).

And their concern is not the mere quality or lack thereof, but also its origin, and this is something new.

>I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

No, many of us hate "our AI" content too, and wouldn't impose it on other people, the same way we wouldn't fling shit at them.

TonyStrMar 16, 2026
Talking about bait, good job getting 42 responses on hacker news! Your opinions are controversial enough to draw out people who need to correct them, yet genuine enough to not be passed off as a troll and downvoted.
GigachadMar 16, 2026
>like they're being hoodwinked somehow

Because they are. It would be like if I bought some trinket off aliexpress and told you I made it by hand just for you. You wouldn't mind if you bought it yourself, but the fact that I lied about it to make it seem like I care is deceptive and immoral.

Sending someone AI generated text without disclosing so is incredibly offensive. It says you don't care about wasting the receivers time and don't care about honesty either.

maplethorpeMar 16, 2026
I think the difference was that before all this, there would be additional information embedded in the way a person types, or the way they'd written their code, that you could use to build a larger picture of the situation.

Right now it's as if everyone started wearing digital face masks that replaced their facial expressions with "better" ones. Sure, maybe everyone's faces weren't perfect before, but their expressions contained useful information.

jclardyMar 16, 2026
> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

Somehow nobody that replied to you mentioned this. The issue is reciprocity. If I spend two hours manually researching and using my expertise to reply to a ticket, then 10 minutes later I get a novel-length AI reply in response...I now have no respect for the person replying with AI, because they can't even be bothered to spend a few minutes and summarize their "findings" and I suspect they didn't even read what their AI wrote. Especially in a professional setting, where you were hired for your (supposed) skillset, not your prompting skills.

If I'm sending out AI content, then sure, give me AI content in return.

leptonsMar 16, 2026
The biggest vendor I work with uses "AI" for all email communications. It's like they use it to sanitize and corporate-speakify their communications, and I really hate it. They can never communicate like a real human being in email. But when we have actual zoom calls they speak like real humans, but in email it becomes so robotic. It's frustrating to feel like I'm speaking with a robot.
simianwordsMar 15, 2026
I've been thinking about this: what if AI ran autonomously and found things to criticise that are factually incorrect?

It is easy to do in social media because the context is global but in enterprises it is a bit harder.

Something like "flagged as very likely untrue by AI" is something I would really appreciate.

I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.

whatMar 16, 2026
Why do you think an LLM knows what is fact?
simianwordsMar 16, 2026
Same way I do. I ask ChatGPT about the truthfulness of a fact and it almost always gives me good answers.
OptionOfTMar 15, 2026
It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel, and don't question it at all.

It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.

When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.

And it shows up the most with people who answer questions in domains they're not 100% familiar with.

AeolunMar 15, 2026
I don't mind this so much if they don't know anything about the subject themselves. What bothers me is when they then paste it at domain experts as if it makes them qualified to talk.
rrr_oh_manMar 15, 2026
It's ironic, because the site has all the hallmarks of an LLM generated website.
spondylMar 15, 2026
I think Claude Code's frontend design is quite a fan of serif fonts from what I've seen in the past.

They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...

namnnumbrMar 15, 2026
Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point LLM trope.

However, the essay and the guidelines were all human-written!

rrr_oh_manMar 15, 2026
Credit to you for your candor!

I'm possibly too jaded / cynical already...

TerrettaMar 15, 2026
Hits you in the first row of buttons with the classic gen-AI slop "Why It Matters".

So trace* through ninerealmlabs and ahgraber and sure enough:

  I used AI:
  - to help build this website.
  - to help generate examples of sloppypasta
    based on my original guidance
  - to proofread and review the human-written
    copy to provide a critical review
  - to improve my arguments and ensure clarity.
Kudos for being forthright.

---

* Turns out clicking "Open Source" bottom right gets there faster!

namnnumbrMar 15, 2026
I talked myself in circles on that "why it matters" heading but ultimately couldn't come up with a better one. "The problem" has similar ai-slop feel, and "the rant" // "the rules" didn't really evoke the feeling I wanted.

Happy to take suggestions on this!

ahyangyiMar 16, 2026
No, not just that heading, but also the obsession with comparison tables.
thinkingemoteMar 16, 2026
By "human-written" do you mean you just used an LLM to help with the grammar, spelling, and formatting and to think up some use cases, but it's entirely "my own words"?
slopinthebagMar 16, 2026
This entire post is very avant garde. AI slop about how it's rude to share AI slop posted on an AI slopsite. Very well done.
efilifeMar 16, 2026
It's not difficult to create a visually appealing website. You don't have to be a designer. Many of us here aren't designers and have beautiful sites. Have you tried doing it yourself?
Cthulhu_Mar 16, 2026
As an alternative to LLMs, you can just download ready-made themes off the internet, or use one of the bajillion site creators with premade themes.
ricANNArdoMar 16, 2026
yawnxyzMar 16, 2026
I believe you, but the AI-looking website makes me default to thinking that the text itself is AI generated
chewbachaMar 15, 2026
When you must remind someone to “think” when using a technology, because the path of least resistance is to not think, it feels like the technology isn’t really helping.

They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.

They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.

0xbadcafebeeMar 16, 2026
If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro.

- now the fun part: which AI did I use to write the above?

namnnumbrMar 16, 2026
if you used an AI, I'd love to see the prompts you used to get such human grammar and spelling errors
r-wMar 16, 2026
Why bake it into the prompt when a regex will do?
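A rough sketch of that idea (the substitution list below is invented for illustration, not a real tool):

```python
import random
import re

# Hypothetical "sloppifier": degrade clean text with human-looking
# slips via regex substitution instead of prompting the model to fake them.
SWAPS = [
    (r"\byou're\b", "your"),            # homophone swap
    (r"\bit's\b", "its"),               # dropped apostrophe
    (r"\bliterally\b", "literaly"),     # dropped letter
    (r"\bdefinitely\b", "definately"),  # classic misspelling
]

def sloppify(text: str, p: float = 0.7) -> str:
    """Apply each substitution with probability p."""
    for pattern, repl in SWAPS:
        if random.random() < p:
            text = re.sub(pattern, repl, text, flags=re.IGNORECASE)
    return text

print(sloppify("You're literally the best, it's definitely true."))
```

With p=1.0 every swap fires, which makes the behavior deterministic.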
parrellelMar 16, 2026
[flagged]
benatkinMar 16, 2026
At the bottom, it references some stuff that came before widespread use of LLMs. One of them is no hello [0]. I disagree with no hello. If somebody wants to send a message that just says hello then they should go ahead and do that. The way language works, when someone thinks of something to say, it often comes all at once. The only question is whether to say it or not, and that is the filtering stage. Now, I'm not one to begin my conversations with just a message "hello" or "hi" more than the average person. I think I do it less than the average person. Yet I was still taken aback by this request. I don't think that people's social instincts should be put aside so easily.

As for "Stop Sloppypasta", it doesn't feel like the content is AI-generated to me but it feels like the presentation of it is. I don't know whether that changes my opinion of the whole thing or just the presentation. As for the advice in it, it seems good, but it also seems a little bit brittle, because people can use an LLM session to review things generated in a different LLM session before sending with some success, and this will increase and therefore it's a moving target.

0: https://nohello.net/

singpolyma3Mar 16, 2026
This is what slop used to mean. Then people started using it for everything an LLM assisted with. Language evolving faster than the tools...
gerdesjMar 16, 2026
"Maybe it's on Slack (or Teams), a text message, or an email. Maybe you were tagged in Notion or an Office doc."

I'm 55 years old. "slop" is way older than your examples. Try a dictionary, eg: https://dictionary.cambridge.org/dictionary/english/slop

LLMs are tools. For me (wot had a C64 as one of my first computers) they are seriously close to magic but I understand what a "next token guesser" means.

SupermanchoMar 16, 2026
A great portmanteau, to be sure.
jaimex2Mar 16, 2026
I have a prompt that's basically the CIA sabotage handbook for replying to any co-worker who dares send me LLM-generated crap.

It includes 4 follow-up actions, and I automate check-in messages to see how they are progressing with them.

tonymetMar 16, 2026
"just google it" or copying from google is just as bad. It's passive aggressive and aims to shut down dialog.

I wish there was a remedy. I block or mute the person when I can.

apiMar 16, 2026
The solution is to have your bot read the sloppypasta for you!
RapzidMar 16, 2026
This is one of my biggest pet peeves, to the point where I'm often pondering how I can leave the industry now.

People who previously couldn't put in the effort or quality are now vomiting tons of slop I'm meant to read and review.

PR descriptions. Documentation. Plans. Etc.

Walls of sprawling text, "relevant files", linked references, unhelpful factoids, subtle inconsistencies and incoherencies.

It's oppressive like 95% humidity on a warm day.

boersethMar 16, 2026
This reminds me of why I despise certain works/styles of art and artists. I feel cheated if I'm made to spend more time and effort interpreting a work of art than the creator put into it themselves.
anonzzziesMar 16, 2026
Talking with middle managers in Fortune 100 companies, I often get 'send us the documents so we can make a decision'. It used to be that we carefully wrote things and no one would read them. Now we send 3000 pages of AI crap to make sure no one reads it, and then we get approved to start working. Not great, but the old situation was worse; no one would read anything, and they'd ask you to read it to them on a conference call with 36 people. Now that does not happen anymore.
jbrozena22Mar 16, 2026
A lot of middle management is reading documents from those below them, giving feedback to improve the clarity of the doc, and then providing their own thoughts and comments on the doc.

This is one role where I can't tell if it's completely useless in an AI-powered world, or if that's basically what we all end up doing: reviewing and commenting on the work versus actually making it.

djoldmanMar 16, 2026
What's interesting is that there are probably people who could spend a year happily working with an AI "coworker" without knowing it was an AI, but then get upset and change their viewpoint after learning the truth.
GuinansEyebrowsMar 16, 2026
when a truth is revealed to someone operating under a totally different understanding of a situation, it can be confusing, disorienting and upsetting.

this seems reasonable to me, especially in this transition period where we're navigating ethical and respectful collaboration that involves AI. give people a little grace in this weird new world.

GaryBlutoMar 16, 2026
> "ChatGPT says" is the enshittified LLM-era equivalent of LMGTFY [...] Recipients are left to figure out whether it's AI generated

How?

semilinMar 16, 2026
Your ellipsis leaves out the answer to your question. The paragraph is contrasting "ChatGPT says" which is annoying, but transparent (as LMGTFY), with "sloppypasta" which includes no such indicator.

Admittedly, the paragraph is somewhat confusingly written. Also probably written by an LLM.

lxeMar 16, 2026
> ChatGPT, read this article and turn it into a AGENTS.md
unsaved159Mar 16, 2026
Literally never in my life did I receive anything like that website suggests via email or DMs. Curate your social circle is the answer.
aurareturnMar 16, 2026
I got one from my office, which at some point decided to use ChatGPT to write Asana tickets that are clearly not vetted.
bagacrapMar 16, 2026
Oh how I wish I could curate my coworkers...
TZubiriMar 16, 2026
> "I asked Claude about this! Here's what it said:"

> "ChatGPT says:"

My policy suggestion is that we need to completely allow people quoting ChatGPT. That's legit, that's not a bannable offense, not against any policy.

The author wastes time talking about this case, and even does it first before talking about the much worse case:

>"The sender shares AI output as their own work, with no indication a chatbot wrote it."

This is 100 times worse, and is objective rather than subjective. If the author admits it's AI when confronted, it kills their reputation (if they don't admit it and it turns out it is AI, it's fraud, a fireable offense).

Putting these 2 categories of AI use together wastes breath and conflates the two; the message will not be clear at all.

What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific case of the general case: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use or content you are not sure is AI written or not. If you give a specific way to use AI, you can add features like auditability, supply chain control, and you can remove any outs from employees and users that do not comply with the policy.

belochMar 16, 2026
Dealing with people who copy-paste unread slop into emails is probably not a huge issue for most of us. There's much more slop out there masquerading as blog posts, HN comments, etc. It's not a huge issue yet, but there have definitely been times when I found myself midway through reading something and realizing it's just an LLM wasting my time.

I'm starting to be reminded of Neal Stephenson's "Diamond Age". He described a future in which people walked around with a nearly invisible defensive army of nanobots surrounding them whose job it was to counter the offensive nanobot swarms of their enemies. Characters in this novel would go about their business while an unseen nanobot war took place in the air around them.

We're rapidly reaching the point where we will need AI to defend us from AI. i.e. We will soon need agents filtering all that we read and removing slop, just so we can preserve our time and attention for things that are human and real.

connorboyleMar 16, 2026
I can understand why various unscrupulous entities and individuals would use AI to generate "slop" content to drive clicks/karma farm etc. But it's baffling to me when I ask someone a question and they respond saying they asked ChatGPT/Claude/etc. and then just share the full response. They seem to genuinely think this is something I wanted them to do.
czhu12Mar 16, 2026
I’ve encountered an even more nightmarish version of this recently: AI-generated tickets. Basically dumping the output of “write a detailed product spec for a clinical trial data collection pipeline” into a Jira ticket and handing it off.

Doesn’t match any of our internal product design, adds tons of extraneous features. When I brought this up with said PM they basically responded that these inaccuracies should just be brought up in the sprint review and “partnering” with the engineering team. AI etiquette is something we’ll all have to learn in the coming years.

stingraycharlesMar 16, 2026
As someone who maintains open source projects, I can assure you that this has been a problem for about a year or so. But I reckon it took a bit longer for people to start doing this at work as well.
codemogMar 16, 2026
Let me guess, it’s ok if they do it, but if you handed their crappy ticket to Claude and shipped whatever crud came out, you’d be held accountable? ;)

Funny how that works out.

KeyframeMar 16, 2026
I've heard a great thing recently, more or less: if all you're doing is writing prompts, maybe you're not needed anymore. Stay behind the intent, own the output and understand it, and then maybe it makes sense. Sloppy prompt + c/p doesn't bring value and will be treated as such. As with anything in life, the outcome is usually proportional to the effort put in.
est31Mar 16, 2026
AI etiquette is a great term. AI is useful in general but some patterns of AI usage are annoying. Especially if the other side spent 10 seconds on something and expects you to treat it seriously.

Currently it's a bit of a wild west, but eventually we'll need to figure out the correct set of rules of how to use AI.

GigachadMar 16, 2026
I'm hearing nightmare stories from my friends in retail and healthcare where someone walks in holding a phone and asks you to talk to them through the chatbot on their phone. A friend last week had a person walk in and ask him to explain what he does to Grok, and then ask Grok what questions they should ask him.
dminikMar 16, 2026
Yes. My Jira tickets used to be almost empty, but all of it was useful info. Now, my Jira tickets are way too long. The amount of useful info has also gone down.

Talk about an AI induced productivity increase ...

nvardakasMar 16, 2026
Same thing with PR descriptions. The signal-to-noise ratio has completely flipped. Before, a short PR description meant the dev was lazy. Now, a long detailed one might just mean they hit generate description and didn't even read it. The length went up, the usefulness went down, and the reader has no way to tell which kind they're looking at.
david422Mar 16, 2026
My teammates hit the generate PR button. I'm not reading that, it's a summary of the changes that I am _already_ going to be looking at, wrapped in some flowery language about being "better architecture, cleaner code" etc.

So those PRs may as well not have a description at all as far as I'm concerned.

nvardakasMar 16, 2026
Right, "better architecture, cleaner code" is the AI equivalent of "synergy" in corporate emails. It sounds like it says something but it communicates nothing. The useful PR description is "changed X because Y was breaking Z", and that requires the author to actually think about what they did. If the tool is doing the thinking, the description is just decoration.
ErroneousBoshMar 16, 2026
I'm taking a break from doing Clever Stuff and just working on the networks team at work, because there's a big infrastructure update happening and if you want a thing done right you have to do it yourself.

Anyway.

People are starting to log support tickets using Copilot. It's easily recognisable, and they just fire a Copilot-generated email into the Helldesk, which then means I have to pick through six paragraphs of scrool to find basic things like what's actually wrong and where. Apparently this is a great improvement for everyone over tickets that just say "John MacDonald's phone is crackling, extension number 2345" because that's somehow not informative enough for me to conf up a new one and throw it at the van driver to take to site next time he's passing, and then bring the broken one back for me to repair or scrap.

Progress, eh?

SamuelAdamsMar 16, 2026
I do this quite often, but I also instruct Claude to limit its output to 2-3 sentences or paragraphs, depending on the context. Also, "Write this for a team of software developers / MBAs" goes a long way too.

I also do the extra step of eliminating things that are not needed, or we review this during backlog refinement.

geronimoeMar 16, 2026
It's weird that there's little to no focus on making AI describe problems coherently for use-cases like this?
TranquilMarmotMar 16, 2026
We went from Jira tickets with one or two sentences, "Implement feature X. Here are some caveats: (simple bullet points, a few words each)" to literal _pages_ of full-on unreadable garbage.
darkwaterMar 16, 2026
This. In my case I do write tickets with an LLM from time to time, but it's always after a long exploratory session with Claude Code, when I go back and forth checking possibilities and gathering data, and then I tell it to create a ticket with the info gathered so far. But even in that case I tend to edit it, because I don't like the style or it adds some useless data that I want to remove.
autoexecMar 16, 2026
It sounds like you'd save a lot of time if you just didn't use the LLM.
reportgunnerMar 16, 2026
But that would mean they would have to do a lot more work in the same amount of time.
lelanthranMar 16, 2026
Don't discount the value in rubberducking with an AI.

They write shit code, but can be prompted to highlight common failures in certain proposals.

For example, I am planning a gateway now, and ChatGPT correctly pointed out many common vulnerabilities that occur in such a product, all of which I knew but might not have remembered while coding, like request smuggling.

It missed a few, but that's okay too, because I have a more comprehensive list written down than I would have had if I rubber ducked with an actual rubber duck.

If I finally write this product, my product spec has a list of warnings.

darkwaterMar 16, 2026
Not really, the exploratory phase is (probably) much faster with Claude Code than on my own. Writing a well-specified ticket is very, very time consuming. With Claude Code it's much easier for me to branch off, follow the "what if actually...?", and challenge some knowledge that, if I had to do it manually, I would just take for granted during ticket writing. Because if I'm "sure enough" that a fact is what I recall it to be, then I just don't check and trust my memory. When paired with an LLM, that threshold goes up, and if I'm not 101% sure about something, I will send Claude Code to fetch that info from some internal artifact (i.e. code in a repository, actual state of the infra, etc.) and then make the next decision.
asplakeMar 16, 2026
Is tossing stuff over the fence considered ok now? Review the slop with the person that submitted it.
mschildMar 16, 2026
> Is tossing stuff over the fence considered ok now?

Has been for a long time unfortunately. AI didn't create this behaviour but certainly made it easier for the other side to do it.

> Review the slop with the person that submitted it.

Alternatively, mark them as "Needs Work" if you can. But yes, put the ball in their court by peppering them with questions. Maybe they will get the hint.

pixl97Mar 16, 2026
>Has been for a long time unfortunately.

Yea, this is so annoying and AI has only grown the problem.

On the support side of things I love when the customer says "your documentation doesn't work like the product".

xorcistMar 16, 2026
That used to be my joke! Given that most large organizations spend (much) more time on the administrative work around code changes than on the actual changes themselves (planning, deciding, meetings), then before we let Claude write our code we should let it write our Jira tickets. It was a great joke because, while it was obviously absurd to many people, it also made them a bit uneasy.

Cue a similar joke about salary negotiation, and the annual dance around goals and performance indicators. Is it really programmers who should be afraid to become redundant, when you think about it?

I should know better than making jokes about reality. It has already one-upped me too many times.

ljmMar 16, 2026
Tried that last year, and the first problem was that the tickets themselves were broken down well enough to make sense to the naked eye. The second problem was that it was all for a legacy codebase where practically everybody who had built it over the years had left, so it was a real don't-know-what-you-don't-know situation.

The second problem was always going to be there, even with human written tickets, but the problem really is that someone who relies on AI gets into the habit of treating the LLM as a more trustworthy colleague than anybody on the team, and mistakes start slipping in.

This is equally problematic for the engineers using AI to implement the features because they are no longer learning the quirks of the codebase and they are very quickly putting a hard ceiling on their career growth by virtue of not working with the team, not communicating that well, and not learning.

dev_l1x_beMar 16, 2026
Some people use AI as they use anything else. Careless, without putting the effort in, making things somebody else's problem. This existed before AI, it just accelerated the stupidity.
rhysfonixoneMar 16, 2026
Well said, carelessness of the user persists regardless of the tools they're using. The cracks may show in other ways though.
sjamaanMar 16, 2026
I think before, it was easier to spot. Before, the effort spent would often show in the volume or consistency of the writing. Now, one can create a big, wordy and convincing-sounding document (without any grammatical errors!) in mere seconds. It also provides for some convenient plausible deniability: you can always claim the LLM only helped you here and there with the wording.

So now, even figuring out that it was a careless or lazy job takes a lot more time, which drastically skews the economics in favor of the careless person.

bigfishrunningMar 16, 2026
I would much rather have my prose contain grammatical errors than have anyone mistake it for LLM output. I am absolutely shocked that anyone has the opposite preference.
GigachadMar 16, 2026
Careless people never used to be able to create such absolute volumes of garbage that flood the system. Open source projects used to be able to just have an open PRs system, because the effort to create and submit something is quite hard, it's a natural filter. Now automated agents can flood a project with hundreds of useless PRs that disguise themselves as being real.
namdnayMar 16, 2026
I've had exactly the same feeling. Since the beginning of time, it has generally taken more effort to build something than to review it. This is no longer the case, and it completely breaks some processes.

The quick solution is to escalate the arms race, and start using AI to filter the AI slop, but I'm not sure that's a world I want to work in :)

lesostepMar 16, 2026
Had a friend in a similar situation. She got a clearly LLM-generated ticket that didn't make any sense, and was directed to question anything about that ticket.

Apparently, asking "why it doesn't make any sense" wasn't !polite~

If I remember correctly, she came up with ~200 questions for a 2-paged ticket. I helped write some of them, because for parts of the word salad you had to come up with the meaning first and then question the meaning.

You know what happened after she presented it? The ticket got rewritten as a job requirement, and now they're seeking some poor sod to make it make sense lol

One had to be very unqualified to even get through the interview for that job without asking questions about the job, I feel. Truly, an AI-generated job for anyone who is new to the field

user142Mar 16, 2026
The first question should have been "Was this ticket AI-generated?".
lesostepMar 16, 2026
Oh, it was! But the guy that generated it insisted that he triple-checked the prose after, and it should be treated as typed by hand

I'm pretty sure it would have been okay to stop at 5-10 questions, because it was clear he couldn't answer any. But my friend is from a hateful branch, and so she went for the humiliation angle of asking for as much clarification as the ticket itself allowed

brobdingnagiansMar 16, 2026
I have a very similar situation. Except it isn't even a ticket, just an export of a very long "conversation" with ChatGPT with a vague indication that this is what needs to be implemented. When questioned about it, the person insists they completely understood it before but just forgot after a few days. Sometimes the prompts are removed. Lots of contradictory material in it, some doesn't make sense even in context. Very difficult to figure out what is wanted.
dalmo3Mar 16, 2026
> person insists they completely understood it before but just forgot after a few days.

I don't doubt this, to be honest.

I have the feeling of learning a lot when coding with agents. New features, patterns, entire languages... It's very satisfying asking questions and getting answers in as much detail as you want, with examples, etc.

Except I forget it all soon after, because I didn't put in the effort. Easy come, easy go.

BiraIgnacioMar 16, 2026
I ran into a similar case recently: there was a ticket describing what needed to be done in detail. It was written by a human and it was a well-written spec. The problem is that it removed some access controls in the system, essentially giving some users more access than they should have.

The ticket was given to an LLM, the code written. Luckily the engineer working on it noticed the discrepancy at some point and was able to call it out.

Scrutinizing specs is always needed, no matter what.

jrjeksjd8dMar 16, 2026
The manager of my team is like this. He LLMed a design doc and then whenever people have questions he's exasperated that people didn't read the design doc. Bro you didn't write it, why would we read it?
david422Mar 16, 2026
Or they are like - here, can you check over this LLM design and see if it makes sense?
egecantMar 16, 2026
This is even worse because you are working with clinical trials, which literally has impact on human lives
duxupMar 16, 2026
I work for a small SaaS company.

We’re getting prospective and existing clients emailing us what look like AI-generated spreadsheets with feature lists that are miles long that they want us to respond to. Like thousands of lines. And a lot of features that are “what does that even mean??”

We get on a call with them and they don’t even know what is on the spreadsheet or what it means…

Very much a “So you want us to make Facebook?” (Not actually asking for Facebook) feeling.

I fear these horror shows of spreadsheets are just AI fever dreams….

whstlMar 16, 2026
Oh boy, do I have a story about this.

I had a PM that was unable to work without AI. Everything he did had to include AI somehow.

His magnum opus was 30 extremely large tickets that had the exact same text minus two or three places with slight variations. He wanted us to create 30 website pages with the content.

The ticket went into details such as using a CDN, following the current design, writing a scalable backend, test coverage, about 3-4 pages per ticket, plus VERY DETAILED instructions for the QA. Yep: all in the same task.

In the end it was just about adding each of the 30 items to an array.

I don’t know if he knows, but in the end it was this specific AI slop that got him fired.

harrallMar 16, 2026
I find this problem self resolves when someone else sends them raw AI output.
artyomMar 16, 2026
I find "sloppypasta" extremely useful. Since I've been in charge of people and teams for years, it's a clear signal of who I should get rid of.
febusravengaMar 16, 2026
In my company, overuse of LLMs and sloppypasta is a feature of those you can't fire.

For me it destroyed the company as an aligned group of people; at C level, it's just a bazaar of drones throwing AI slop at each other.

elricMar 16, 2026
At my current company, people who don't use enough of their token budget get a stern talking-to from manglement ...
artyomMar 16, 2026
> at C level, it's just bazaar of drones throwing AI slop at each other.

I don't think it changed much from before LLMs. It's just that the slop is now automated.

rafaeleMar 16, 2026
Nice paraprosdokian there
merrvkMar 16, 2026
I had a guy doing this to reply to PR review comments, copying in the comment to the LLM and pasting the response back.
diathMar 16, 2026
IshKebabMar 16, 2026
bigfishrunningMar 16, 2026
That's a horror show! The willful pushback from the author when questioned about the code is the icing on the cake.
curiousgalMar 16, 2026
Worse, I had a guy literally posting screenshots of copilot replies.
ashwinsundarMar 16, 2026
Related - On Bullshit by Harry Frankfurt.

    What bullshit essentially misrepresents is neither the state of affairs to which it refers nor the beliefs of the speaker concerning that state of affairs. Those are what lies misrepresent, by virtue of being false. Since bullshit need not be false, it differs from lies in its misrepresentational intent. The bullshitter may not deceive us, or even intend to do so, either about the facts or about what he takes the facts to be. What he does necessarily attempt to deceive us about is his enterprise. His only indispensably distinctive characteristic is that in a certain way he misrepresents what he is up to.
Also related - Gish-gallop

    During a typical Gish gallop, the galloper confronts an opponent with a rapid series of specious arguments, half-truths, misrepresentations, and outright lies, making it impossible for the opponent to refute all of them within the format of the debate.[2] Each point raised by the Gish galloper takes considerably longer to refute than to assert. The technique wastes an opponent's time and may cast doubt on the opponent's debating ability for an audience unfamiliar with the technique, especially if no independent fact-checking is involved, or if the audience has limited knowledge of the topics.[3]
cdriniMar 16, 2026
Gish-galloping! Today I learned; I'm going to have to remember that one. I think people can also gish-gallop unintentionally, especially in online discussion threads, when someone leaves comments that are very long, poorly organized, and stream-of-consciousness.

The Wikipedia page has some good counter-strategies: https://en.wikipedia.org/wiki/Gish_gallop

ForgotMyUUIDMar 16, 2026
Do you also hear this regularly these days "But chat said ..." ?
coldteaMar 16, 2026
Brave of the author to imply the other person will spend time reading the slop they get sent.

Instead, they'll use an LLM to send a slop response back.

Instant karma!

sbinneeMar 16, 2026
As a senior engineer, I am getting extremely tired of reviewing AI slop. Today at work I decided that I just have to build the POC project from scratch. I had spent two weeks reviewing the code, logging the process, and building toy examples to make my argument clear that some (actually most) parts were not working.

The funny thing is that I know my manager got this “working” within a week with Claude, while it cost me two weeks, four JIRA tasks, many toy-example commits, and three reports.

Cthulhu_Mar 16, 2026
I'm afraid the only options are to stop, push back, or embrace it yourself and use AI to review - just add the caveat that "since you decided not to put in the effort, neither will I". Just make sure the author is on call for outages.
m_muellerMar 16, 2026
Why is there even an expectation or requirement that a POC's code be taken into production? Wouldn't it be much faster to just regenerate from scratch, but this time with your proper architectural guidelines in a planning document and with proper code reviews in place?
_zoltan_Mar 16, 2026
Reading this, honestly, I'd say the problem is you then?

Velocity is everything.

kshri24Mar 16, 2026
So tired of AI slop! Please use the tech creatively. This is not it!
galaxyLogicMar 16, 2026
Shouldn't the etiquette be that if you send someone a response from an AI, you start your message with the prompt that produced that response?

That would give the recipient the chance to modify the prompt and perhaps get a better answer from the LLM.

MordisquitosMar 16, 2026
Even better, reply only with the prompt that you would have used, not the resulting text. Don't even run the prompt through an LLM.

That results in a shorter and more concise message, and the original sender can choose to use the prompt you provided on their favourite LLM from the start.

galaxyLogicMar 16, 2026
Right. You might also consider highlighting some things you learned from the AI's response, summarizing it and perhaps critiquing it.

Different AIs give different answers to the same question, so it may be useful to provide a good summary of the different responses you got.

MordisquitosMar 16, 2026
Even better, what about piping the different AI responses through another LLM to provide the summary? That way you save yourself the time and effort of reading all the different AI responses.

You could even pipe the final summary directly to your email/IM client and save yourself the copy-paste.

stingraycharlesMar 16, 2026
I actually use multi-LLM consensus as a part of my daily work, it’s pretty effective.
zbyMar 16, 2026
I like it because it is constructive!

I am really surprised by the amount of backlash on this site against using LLM helpers in writing. There are many ways this can go wrong - and the article lists some of them - but it does not blindly rule out all LLM writing helpers.

What would be even more constructive would be an article listing the good ways of using llms.

https://xkcd.com/810/ :)

GuB-42Mar 16, 2026
You can use AI to make a summary of these AI-generated walls of text.

We are getting to this weird situation where instead of Alice sending a message to Bob, Alice sends the message to her AI, which sends it to Bob's AI, which then tries to recover Alice's original message.

To be fair, I don't think it is an AI problem; it's more a quirk of formal communication, and the same happens with human secretaries. For example: I want my customer to pay me, and I want to be professional but not bother with the details, so I ask my secretary to write a well-crafted letter to my customer, with a proper bill and all that. My customer's secretary then reads the letter and tells his boss, "hey, our supplier wants $xxx". I could have just called the boss directly and said "hey, it is $xxx", but that is rarely how it is done. Here it is AI taking charge of the formalism, and I find it works really well for this, since it is essentially a translation task, which is what LLMs do best.

I am not discounting human secretaries here; they can do much more than write formal letters, but that is the part of their job that LLMs excel at.

sillyflukeMar 16, 2026
>To be fair, I don't think it is an AI problem, more of a quirk of formal communication, the same happens with human secretaries.

Obviously you're not a golfer. Human secretaries don't have non-deterministic hallucinations and random critical omissions in their summaries, which I've witnessed first hand with LLMs. More importantly, when they do err, you have more deterministic mitigations with humans than with LLMs, where the only mitigation is praying that some unspecified future model will be magically better at summaries.

The only way to stay sane when using these tools is to pretend that these things won't ever happen and just go about your business like the rest of the zombie workforce, because no one wants to stop the train and address the issue.

There is a reason why the title of Dr.Strangelove is "How I Learned to Stop Worrying and Love the Bomb".

GuB-42Mar 16, 2026
To make things clear, even though I am not a golfer, I have a lot of respect for good secretaries; that's why I said they can do much more than write formal letters. Not only do they not hallucinate like LLMs, they can actually catch mistakes before they happen.

I just wanted to point out the seemingly ridiculous idea of formal communication that none of the interested parties actually reads. It is as if two English speakers insisted that the discussion be in Spanish, and each brought an interpreter for it.

sillyflukeMar 16, 2026
I think I understand that, which is why I specifically singled out your suggestion that "AI is not the problem" instead of your comment as a whole. Automated processes of Person A talking to automated processes of Person B is not a novel concept; it's been going on in various forms since computers were invented. You call an Uber and get your phone to talk to Uber's servers and the driver's phone so you don't have to talk to the driver; same with DoorDash and the restaurant, same with Amazon and the retail store personnel. In these cases the back and forth between two humans is, in your words, a type of "formal communication that none of the interested parties actually read", or "two English speakers both bringing Spanish interpreters to talk to each other."

It's this way because we nerds want to remove the need to talk to other humans in order to obtain what our hearts desire at any given point. So we have been trying from the very beginning to remove non-deterministic roadblocks (humans) and replace them with deterministic automation. This is not new. If LLMs were the same kind of deterministic process, just 1000x more versatile and capable, they wouldn't be a problem.

But even though LLMs have lowered the barrier to who can create these automated processes, and raised the speed at which they are created, to achieve this they brought with them non-deterministic side effects that currently evade a holistic and deterministic fix. This is why it is an AI problem.

vharuckMar 16, 2026
>We are getting to this weird situation where instead of Alice sending a message to Bob, Alice sends the message to her AI, which sends it to Bob's AI, which then tries to recover Alice's original message.

Somebody's going to revolutionize how we use AI by creating an algorithm that modifies the output of one AI so that it has fewer tokens but still yields a similar-enough input for another AI to respond to, like dropping purely syntactic words or substituting short synonyms. It'll be a bizarre shorthand that makes no sense to humans, riddled with artifacts of the model.

Or we're going to realize that creating output with one AI for another to ingest is needlessly breaking a session into different pieces. But that would mean a lot of professional emailers getting optimized out.
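A minimal sketch of the kind of "token thinning" described above, assuming a naive stopword-dropping approach (the stopword list and function names here are purely illustrative, not a real protocol):

```python
# Drop purely syntactic words so the message is shorter but (hopefully)
# still recoverable by another model. This is a toy illustration only.
STOPWORDS = {"a", "an", "the", "is", "are", "was", "were", "of", "to",
             "that", "which", "and", "or", "in", "on", "for", "it"}

def thin(message: str) -> str:
    """Remove common function words, keeping content-bearing tokens."""
    kept = [w for w in message.split() if w.lower().strip(".,") not in STOPWORDS]
    return " ".join(kept)

# Example: thin("the report is ready for review on Friday")
# yields "report ready review Friday" - shorter, human-unfriendly,
# but plausibly enough for a model to reconstruct the gist.
```

Whether a receiving model can reliably reconstruct the original from such shorthand is exactly the open question the comment raises.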

laserbeamMar 16, 2026
This website needs to be simpler, snappier, and more polite on the homepage. I should be able to send it as a quick reply to anyone doing the deed, just like nohello.net.
chuckadamsMar 16, 2026
AI is new enough that people still get excited about it and use it inappropriately, which is annoying but understandable. Where I draw the line however is the last category: passing off AI slop as one's own thoughts without attribution. It's not only lazy (uh oh), it's a direct insult to the human at the other end, and ought to result in bans from whatever forum was used to communicate it.
unstatusthequoMar 16, 2026
Copy/paste it into your CrapBot of choice and respond with even longer sloppypasta. Just do that every time, and make sure it's longer than their original message. Maybe slam in some Socratic questions and theory to wander through the response for a while. Ensure it creates action items only they can perform, and make sure the ball is in their court each time. I should make a skill for this.
tiarafawnMar 16, 2026
I call this getting slopped in the face
leekrasnowMar 16, 2026
I disagree with the premise of automatically dismissing sloppypasta just because it is verbatim LLM output. The sender could hypothetically have spent real effort in the chat interaction that led up to the ctrl-C moment, could have spent real effort weighing all the points raised and thought to themselves “this is all valid stuff to consider, let me forward it to my colleagues and see what they think for some additional human input on the matter.”

In such a case, I think that spending extra time to scrub the output of em-dashes or other slop is just virtue signaling that “I am man, not machine”

hananovaMar 16, 2026
> The sender could hypothetically have spent real effort in the chat interaction that led up to the ctrl-C moment […]

Irrelevant. As soon as the ctrl-c moment happened, all of the real effort became worthless and utterly overwhelmed by the slop in the face. I do not read LLM responses. I will stop reading any text as soon as it becomes clear it was LLM generated. If it is in a setting where I have the power to do something about it, it will be swiftly followed by a flag/ban/block/ignore/comment deletion…

tyleoMar 16, 2026
TBH "The Oracle" personality doesn't always bother me. There's a bunch of stuff that gets asked in corporate chat that makes me think, "does this person know how to use Google?" I think the same can be said for the chatbots at this point.

I feel like we just need the equivalent of "Let me Google That" for LLMs.

bigfishrunningMar 16, 2026
"Let me google that" already mostly gets you LLM output, because google is fairly ruined at this point. Also, if someone asks me a question, I think it would be rude of me to respond with "Maybe I know the answer, maybe I don't, but why don't you send your question to a machine that will give you an answer that just might be right by coincidence?"
prism56Mar 16, 2026
Interesting, that's the one I hate the most.

Caveat: I'm talking about questions that are in-depth or require some nuance beyond surface-level information. The responder doesn't know the answer themselves, so rather than saying "oh, sorry, I don't know, but this document or person might help," they paste LLM output.

As an example, I might post in our Teams chat that I've seen an issue on our physical hardware: a step change in the telemetry (increased vibration, or some other erratic behavior on a thermocouple). Has anyone had experience with what can cause this on this type of installation?

Then you get a Copilot paste of the prompt "what causes high vibration on rotating machinery".

If somebody asks a question that could clearly be answered quickly by Google or an LLM prompt, then fair enough, but I'm after specific product knowledge from our technical team. On reflection, I might be very specific in my questions so other members of the team can't go down that route as easily.

graphememesMar 16, 2026
The fact this was made with AI is hilarious in and of itself. Also, have you thought about why someone would send you something like that? Maybe ask them; I find curiosity is better than whatever this is.
graypeggMar 16, 2026
Totally agree, except for the "AI Ghostwriter" framing. I'm at least fine with someone passing off LLM output as their own, because the hidden social contract wrapping that situation is that I assume you read it and agree with what it said.

If you tell me that an LLM wrote it, I will stop reading because I assume the only reason you'd tell me that is YOU want to hedge your bets about it being wrong, so it's clearly not worth my time if you don't even believe it.

However, if I don't know, then I will take it at face value. People can get LLMs to output sensible text and facts, so I expect the implementation detail of "used an LLM" to be hidden from me. If you can't do that, I will think of you as a low-effort writer who sprinkles sloppy rhetorical devices and lists of nothing into massive multi-paragraph messages, but I'll assume that's YOUR taste. If something happens to be wrong in it, it's because YOU didn't know it was wrong.