The Miller Principle (2007) (puredanger.github.io)
75 points by FelipeCortez, Apr 7, 2026

24 Comments

smitty1e Apr 12, 2026
I have found much value in reading the python and sqlite documentation. The Arch wiki is another reliable source.

Good documentation is hard.

Akcium Apr 12, 2026
I would love to answer your comment but I didn't read it :P
synthaticly Apr 12, 2026
Haha
simultsop Apr 12, 2026
I don't know. Under pressure and stress, all docs are ugly.
realaleris149 Apr 12, 2026
The agents will read them
taffydavid Apr 12, 2026
I read this entire post and all the comments, thus disproving the Miller principle
armchairhacker Apr 12, 2026

    This principle applies to the following:

    - User documentation
    - Specifications
    - Code comments
    - Any text on a user interface
    - Any email longer than one line
Not blog posts or comments. Ironic
taffydavid Apr 12, 2026
Damn, I guess I didn't read it closely enough
sebastianconcpt Apr 12, 2026
Proved that reading is not a cause of understanding but merely correlated with it.

So if the "read" of the Miller principle is interpreted as read + understand (as it should be), an interesting deeper discussion can happen.

It can be invoked with the far more dramatic "Nobody understands anything".

layer8 Apr 12, 2026
If you read closely, you’ll see that there is no claim that this would be an exhaustive list rather than an exemplifying one, and the principle itself unambiguously states “anything”.
Animats Apr 12, 2026
The LLMs read everything.
formerly_proven Apr 12, 2026
Only because they are architecturally unable to not read something.
simultsop Apr 12, 2026
until one day
Wowfunhappy Apr 12, 2026
Well, the LLMs architecturally have to read everything they see. The agents attached to LLMs can choose what to look at.
krona Apr 12, 2026
It doesn't mean they're paying attention.
OutOfHere Apr 12, 2026
They do, but that doesn't mean entire texts will remain in their context. Increasingly they use agentic reading, whereby they spawn an agent to read a long text and then present a condensed version back to the parent LLM, creating a theoretical opportunity for information loss.
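That compression step can be sketched in a few lines of Python. This is purely illustrative pseudocode of the idea, not any real agent framework's API; `summarize_chunk` stands in for a sub-agent call, here crudely modeled as "keep the first sentence":

```python
# Toy sketch of "agentic reading": a long text is split into chunks,
# each chunk is condensed by a stand-in "sub-agent", and only the
# condensed version reaches the parent's context - so detail can be lost.

def summarize_chunk(chunk: str) -> str:
    """Stand-in for a sub-agent call: keep only the first sentence."""
    first, _, _ = chunk.partition(". ")
    return first.rstrip(".") + "."

def agentic_read(text: str, chunk_size: int = 500) -> str:
    """Condense a long text so it fits in the parent's context window."""
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return " ".join(summarize_chunk(c) for c in chunks)

long_text = "First key point. Detail one. Detail two. " * 40
condensed = agentic_read(long_text)
assert len(condensed) < len(long_text)  # detail was dropped on the way up
```

The lossiness is the point: whatever the sub-agent drops never reaches the parent at all.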
spiderfarmer Apr 12, 2026
The Laravel documentation is GREAT when you're getting started. Every chapter starts by answering the very question you might ask yourself if you're going through it top to bottom.

I'm a one-man-band so if I write code comments, I write them for future me because up to this point he has been very grateful. Creating API documentation is also easy if you can generate it based on the comments in your code.
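That comments-to-docs step really can be near-free. In Python, for instance, the stdlib's `pydoc` renders documentation straight from docstrings; the function and its names below are made up for illustration:

```python
import pydoc

def checkout(cart_id, coupon=None):
    """Finalize the cart and return an order summary.

    Note for future me: coupons are validated server-side; None means
    no discount was applied.
    """
    return {"cart": cart_id, "coupon": coupon}

# Render plain-text API docs from the signature and docstring alone.
docs = pydoc.render_doc(checkout, renderer=pydoc.plaintext)
assert "Finalize the cart" in docs
```

Tools like Javadoc, rustdoc, and phpDocumentor do the same thing at project scale.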

Maybe rename it the Filler principle. Nobody reads mindless comments that are 'filler'.

comrade1234 Apr 12, 2026
Despite using an AI while programming, I still keep Javadoc and other API documentation open and find it very useful, as the AI often gives code based on old APIs instead of what I'm actually using. So I do read those documents.

But also, I have a somewhat mentally ill (as in, he takes medication for it) coworker who sends rambling, extra-long emails, often all one paragraph. If I can't figure out what he's asking by reading the first couple and last couple of sentences, I ask him to summarize it with bullet points, and it actually works. Lol.

makach Apr 12, 2026
..and emails
stevage Apr 12, 2026
> Any email longer than one line

it's in there

sarreph Apr 12, 2026
The irony.
hamdouni Apr 12, 2026
Yeah, I'm also surprised people just read the post title and jump to conclusions ...
torben-friis Apr 12, 2026
I wish this was the case. Then we wouldn't have a minority of us deeply frustrated :)

'Thanks for the doc, let's set up a meeting' (implied: so you can read the doc aloud to us) is the bane of my existence.

stevage Apr 12, 2026
Should probably be "The Miller Principle (2007)"
sdevonoes Apr 12, 2026
I think this is more true now than ever. Before LLMs, when someone came up with an ADR/RFC/etc., you had to read it because you had to approve or reject it. People were putting in effort and, yeah, you could use those docs in your next perf review to gain extra points. You could easily distinguish well-written docs from the crap (which also made the job of reviewing them easier).

Nowadays everyone can generate a 20-page RFC/ADR, and even though you can tell when they are LLM-generated, you cannot easily reject them based on that factor alone. So here we are, spending hours reading something the author spent 5 minutes generating (and barely knows what it's about).

Same goes for documentation, PRs, PR comments…

ghgr Apr 12, 2026
As a counterexample: thanks to LLMs, many long-form articles posted with clickbaity (but content-free) headlines, which I would otherwise have ignored, now get "read" (albeit indirectly, with the prompt "Summarize the insights of the article $ARTICLE_URL in an academic, dry, technical and information-dense way")
eru Apr 12, 2026
I notice that with YouTube videos.
jodrellblank Apr 12, 2026
Watching the Artemis II splashdown and the media event that followed, I'm suspicious that a woman from TechTalk Media read out some LLM blurb instead of asking a question; I can't prove it, but I can almost hear the em-dash in:

"What you have done this week is remind the people of Earth that wonder is worth chasing. That curiosity is the most human thing we have. You didn't just test a spacecraft -- you tested mankind's potential...”

nathan_compton Apr 12, 2026
I think the good news here is that very soon, parroting some shit an LLM wrote will be a sure sign to everyone that that person is a moron or lazy or otherwise useless. If all you do is repeat what an AI gives you, then you can be replaced by the AI. I can't imagine why anyone would want to signal that to potential employers or, really, any other human being.
manmal Apr 12, 2026
Those generated ADRs are pure crap, full of unnecessary hedges and superficial solutions that don't survive scrutiny longer than 10 seconds. I do generate ADR skeleton drafts because I hate empty pages, but I need to add the substance or they are not helpful at all.

What we are doing is probably not in the training data; maybe that's why.

coopykins Apr 12, 2026
It's one of the main things I learned when I worked in tech support and talked with users all day: nobody reads anything.
funnybeam Apr 12, 2026
I used to refer to the helpdesk as the reading desk - "Hello, you're through to the IT Helpdesk, what can I read for you today?"
zero-sharp Apr 12, 2026
A totally understandable situation. Most people just want to use technology to accomplish their immediate goal. I'm tech savvy and I lose my mind every time I get distracted by broken/misconfigured technology.
layer8 Apr 12, 2026
Or maybe those who do read the docs require less tech support.
timrobinson33 Apr 12, 2026
tl;dr
ekjhgkejhgk Apr 12, 2026
Damn, this is thin content even for HN.

Anyway, this is just projection. The Miller principle really should be "Miller doesn't read anything". I read plenty.

fmajid Apr 12, 2026
Write-only memory
sebastianconcpt Apr 12, 2026
This signals something that is happening somewhat predictably due to the increasing abundance of code. The surface offered for understanding (text, as in comments, docs, etc.) grows exponentially, while our attention bandwidth, well, does not, so...
caminante Apr 12, 2026
Not a new trend.

Mini-article is from 2007.

At the time, more reports were generated than humans could read. People weren't reading them for good reason.

I suspect the author is more annoyed about people being (grossly) negligent in reading important things.

Borg3 Apr 12, 2026
We are reaching the society shown in the "Johnny Mnemonic" movie.. So much (useless) information around that people get overloaded. I barely read anything these days on HN, too much (crap) information. I skim and only read stuff that is very close to my interests.

I used to read a lot more in the past; not the case anymore..

andai Apr 12, 2026
Well, half the articles I see posted now, the author didn't even bother to write themselves, but outsourced to a machine.

I've heard this sentiment: "If you didn't even bother to write it, why should I bother to read it?"

But often there is real value there, and I sometimes force myself to cringe my way through the GPT-isms, to find the gems buried within.

donatj Apr 12, 2026
For fun, I recently rebuilt a little text adventure some friends and I had built in the early 2000s. Originally written in QBasic, I translated it line by line into Go and set it up as a little SSH server.

For posterity, I didn't want to change anything about the actual game itself, but I knew beforehand that the commands were difficult to figure out organically. To try and help modern players, I added an introductory heading, shown when you start playing, informing the player that this was a text adventure game and that "help" would give them a basic list of commands.

Watching people attempt to play it in the logs, it became painfully obvious no one read the heading, at all. Almost no one ever typed "help". They'd just type tens of invalid commands, get frustrated and quit.

datameta Apr 12, 2026
I wonder how different the outcome would be if the idiom used was not "help" but "instructions" - as in, what portion of users did not want to admit they needed assistance?

I'm not refuting the fact that people seldom read, but this seems like an interesting additional vector to explore.

sikk01 Apr 12, 2026
Unironically, I was pasting the URL of this article into ChatGPT to summarize it
ineedasername Apr 12, 2026
tl;dr: ' '
nathell Apr 12, 2026
Joel Spolsky in 2000 [0]: "Users can't read anything, and if they could, they wouldn't want to."

[0]: https://www.joelonsoftware.com/2000/04/26/designing-for-peop...

hermitcrab Apr 12, 2026
A customer contacts me and says 'I have an error'. After several emails I manage to get them to send me a screenshot of the error. The error message describes the exact problem and what to do about it in one short sentence. I type pretty much exactly the error message text into my reply. This solves their problem. I think they see 'error' or 'warning' and don't even read the rest of the sentence. Extraordinary. But it has happened more than once.
staticshock Apr 12, 2026
They were taught not to read errors because they encountered thousands of errors (in other software) that were less helpful than that one.

Most people have an adversarial relationship with software: it is just the pile of broken glass they have to crawl through on the way to getting their task done. This understanding is reinforced and becomes more entrenched with each next paper cut.

bachmeier Apr 12, 2026
Probably true. Also probably true: people have read enough of the things he listed and concluded that they wasted their time. I remember trying Linux in the RTFM days, and let me tell you, those were some terrible documents even when they did talk about the problem.
pfdietz Apr 12, 2026
Except now LLMs read everything.