The LLMisms in the "thinkpad" section caused me to close the tab
abnercoimbre•Mar 29, 2026
Yep, time to flag.
middayc•Mar 29, 2026
What LLMisms?
noident•Mar 29, 2026
No W. No X. No Y. Just Z.
In fact, the whole article is filled with slopisms, just with the em dashes swapped for regular dashes and some improper spacing around ellipses to make you think a human wrote it.
fer•Mar 29, 2026
It's closer to broetry than llmism in my eyes.
hal9zillion•Mar 30, 2026
Yep, this was probably the most LLM-generated thing I've read all day, and that's saying something. Brutal.
Every day I see dozens of huge posts where someone has generated a wall of text expressing a very simple, very derivative idea, and I see tons of earnest people replying with posts they've written themselves, seemingly unaware of this, and honestly it really pisses me off. If you are going to generate your thinkpiece like this, there should be an international law that says it can't be longer than two sentences.
king_phil•Mar 29, 2026
Dark forest makes no sense to me. Why would a civilization eradicate another, spending huge amounts of resources (time, energy, material) when the universe has such an enormous scale that you cannot even get to each other in a timescale that makes much sense...
piker•Mar 29, 2026
Makes some sense to me, as the prisoner's dilemma dictates at least some fraction will try to kill you. So you've got to go first.
Reminds me of the Dan Carlin take on aircraft carriers in World War II: if you in a carrier spotted an opposing carrier and didn't send everything you had before it spotted you, you were dead. The only move was to go all in every time.
the_af•Mar 30, 2026
Surely that logic applies if you're at war with the other side? If you start attacking carriers of random superpowers, you only invite further destruction...
piker•Mar 30, 2026
Yes, but in this case the aircraft carriers represent the entire civilization and there is no repeat player such as nations in times of peace. So if you spot an alien civilization with even a 1% probability of trying to kill you, you don't really have the opportunity to "wait and see". And there's no meta higher-level arrangement that will protect you.
Phemist•Mar 29, 2026
The dark forest is conditional on the premises that it does not require huge amounts of resources to eradicate another civilization and that (over time) the universe turns out not to be of an enormous enough scale (and in the book there are agents actively working to make it smaller).
Bringing it back to the dark forest of idea space, it is an interesting question whether the space of feasibly executable ideas being small (as this essay assumes) is inherently true, or more a function of our inability to navigate/travel it very well.
If the former, then yes it probably is/will be a dark forest. If the latter, then I would think the jury is still out.
nate•Mar 29, 2026
Are you asking about the 3 body problem version of this? Spoiler alert: The folks doing the eradicating aren't spending much time/energy/anything on eradicating. It's one large missile through space.
I think the gist is: sure, we humans can't conceive of getting to anyone else in the universe in any timescale, but if we can keep ourselves from destroying ourselves, we'll eventually figure it out. And we'll spread. And we'll kill everything that isn't us in the process as we've done as explorers on this planet.
So really, in 3BP: it's inexpensive to eradicate, but insanely expensive to get the intentions of any other civilization you encounter wrong. They might kill you.
(again, this is just my interpretation of what 3BP said)
thomashop•Mar 29, 2026
I don't think it's correct that we destroyed everything that isn't us. If we take all living beings, we have destroyed only a small percentage.
05•Mar 29, 2026
Not if you count by total terrestrial vertebrate biomass.
cbau•Mar 29, 2026
To quote from the book:
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
Not sure it actually is true, but that's the argument in the book.
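For what it's worth, the "strike first" logic above reduces to a one-shot expected-value comparison. A toy sketch in Python, with entirely made-up payoff numbers (only the structure matters):

    # Toy model of the dark-forest first-strike argument.
    # All payoffs are invented for illustration.
    P_HOSTILE = 0.01        # even a 1% chance the other side strikes first
    SURVIVE = 1.0           # payoff for surviving
    ANNIHILATED = -1000.0   # payoff for being wiped out
    STRIKE_COST = 0.001     # elimination is "basically sending a nuke": cheap

    # Wait and hope: you survive unless the other side strikes.
    wait = P_HOSTILE * ANNIHILATED + (1 - P_HOSTILE) * SURVIVE
    # Strike first: you survive for sure, minus the cheap strike.
    strike = SURVIVE - STRIKE_COST

    print(f"wait: {wait:.3f}, strike: {strike:.3f}")  # wait: -9.010, strike: 0.999

As long as annihilation is catastrophic and the strike is cheap, striking dominates for any nonzero P_HOSTILE - which is why the rebuttals downthread go after exactly those two assumptions.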
bethekidyouwant•Mar 29, 2026
That’s true among human societies as well, but trade leads to more prosperity.
jmull•Mar 29, 2026
I really liked those books, for all the creative ideas... it's fine that they don't all work, but the Dark Forest has to be among the worst of them. It was unfortunate it was highlighted.
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. Civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit by working with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may easily be outcompeted by civilizations that learn to communicate and work with the ones they encounter.
recursivecaveat•Mar 29, 2026
Cleansing is basically free for advanced civilizations in the books. The alien (Singer) who wipes out Sol in the 3rd book doesn't even have to answer any questions from their manager about doing it, that's how cheap it is. While it's true that individuals desire cooperation, I think you can assume that civilizations will keep a lid on people who would completely destroy them (or, failing that, be destroyed). It seems like expansion of civilizations is not really an option. Singer's civilization only has 1 colony world and they're already in some kind of extremely destructive war with them. Presumably the idea is that once your own people expand multiple light-years away, all the logic about aliens applies to them too. On the other hand, if you can't expand, why do you not run scorched earth on the galaxy?
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints, though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
iugtmkbdfil834•Mar 29, 2026
To an extent, the rebuttals land.
However,
1. You are assuming a lot in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think anthropocentrism only applies to body shape and having legs, because the way it tends to express itself in popular culture is robots on legs and human-shaped aliens.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit them. I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
What I am saying is that it is not the rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end on a semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike those ants that had molten metal poured into their nests [1].
You can observe patterns of behavior, develop theories and understanding, attempt/experiment with interactions, and refine based on the results. That's communication (and doesn't assume anything about the other alien civilization).
Now, civilizations may be more or less willing to do this and more or less successful, but that's not the same thing as no one daring to try, which is what the dark forest theory requires.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self-limiting.
Name the exponential phenomena that aren't self-limiting -- that don't consume the medium which allows them to exist in the first place.
> I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims on the logic, intentions, strategies, resource use/governance of alien civilizations, BTW, and wants this to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
AnimalMuppet•Mar 29, 2026
It's first-order thinking. Second-order would be to question whether trying to eradicate another race might motivate them to eradicate you, when they weren't motivated to do it before.
red-iron-pine•Mar 30, 2026
the point is that eventually one of them will have that intention. and they will likely shoot first.
so do you wait, and hope, that you're able to tease that out correctly, or do you just shoot first yourself?
what good is a Galactic Republic if eventually someone builds a Death Star and blows up your planet? With monumental effort they managed to beat them back, only for them to return later.
0x3f•Mar 29, 2026
Competition kills margins (profits, security, QoL), so the budget for eradication should be quite high, but generally speaking the idea is to destroy upstarts while they're still fledgling, back when the cost is low.
lstodd•Mar 29, 2026
And the idea stops making sense once you factor incomplete intel into the equation: what if the preemptive strike does not attain complete eradication?
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
0x3f•Mar 29, 2026
Well if they're only an upstart, they don't have the ability to destroy you _yet_. You 'nuke' them in the hope they won't get that ability. You're aiming to stop MAD from being a thing.
In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.
If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
lstodd•Mar 29, 2026
Point is, you cannot know if they are an upstart (whatever upstart means). It can be misinterpretation, it can be camouflage, it can be anything. But once you rain death, you'd better be prepared to be grateful for what you are about to receive back.
0x3f•Mar 29, 2026
Depends on the context. We certainly knew nobody else had nukes.
lstodd•Mar 29, 2026
That... was the case for all of four years. And forgive me if I doubt the certainty.
0x3f•Mar 29, 2026
Four years is plenty of time to start launching. Also, MAD incentivizes disclosure. What would be the point of having secret nukes? Openly having them is the only way to stop the US using its nukes to stop your nuke program, in this scenario.
the_af•Mar 30, 2026
> Depends on the context. We certainly knew nobody else had nukes.
You wouldn't be able to know this over the vast distances of the universe.
People are arguing two contradictory things: these are unfathomable alien civilizations with motivations and timescales we cannot comprehend, yet we would perfectly understand their tech level, location, and capability of striking back.
It doesn't add up. It's a scifi premise needed for interesting plot conflict.
middayc•Mar 30, 2026
The dynamics of the space dark forest, at least, are different from those of living on the same planet. It comes down to the lack (and slowness) of communication over vast space (communication that you can't trust anyway).
It relies on two principles, "the chain of suspicion" and "the technological explosion", which don't hold if we are on the same planet. You can google it (or LLM it) :)
Hikikomori•Mar 29, 2026
A space war is not needed, they could just send a few missiles to take out anyone.
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that allows evolution to develop intelligence wherever it happens, takes a civilization out once it produces an AGI, and performs a reset if it doesn't. They have literally all the time available to them, and can easily travel the vast distances if needed.
sebastianconcpt•Mar 29, 2026
Agreed, it's a fiction based on accepting the premise of a zero-sum game.
It denies that more advanced civilizations might have better models of the universe, where they know this isn't an issue and we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see whether we prove we will survive ourselves.
lifeformed•Mar 29, 2026
"Timescales that makes sense" may be a human reasoning but not necessarily the reasoning of inconceivably advanced timeless civilizations. Sure, that planet of fish may be harmless now, but what about in a quick three billion years when they have FTL and AGI and Von Neuman probes and Dyson spheres and antimatter bombs? Easier to click the delete button now to save the trouble later.
the_af•Mar 30, 2026
The laws of physics apply to the civilization of alien fish too; it's not a human-specific timescale.
They have nothing to fight over with us and no reason to spend effort developing weapons to reach us; if they do have weapons, they'd have a much higher probability of wiping themselves out first; and there's no way for their weapons to reach us within a time boundary that makes sense for any sentient race.
Dark Forest seems to be based on a scifi/fiction need to have conflict with "the other", which is thrilling but doesn't necessarily reflect the real cosmos.
red-iron-pine•Mar 30, 2026
the dark forest is based on trying to answer the fermi paradox: how is the universe so big, yet we haven't seen any sign of life anywhere else?
answer: because they're all dead, or in hiding
the author then writes a book around that idea
the_af•Mar 30, 2026
Isn't this more or less what I just said?
Except I added in this and other comments why it's not a very convincing explanation for Fermi's paradox either.
In other words,
> answer: because they're all dead, or in hiding
I understand this is what the Dark Forest theory argues, but it works because it's meant for a scifi book; it's just not a very good explanation for the real universe.
viccis•Mar 30, 2026
It's a silly concept IMO because it assumes that civilizations with the ability to do interstellar travel or communication decide not to because they have knowledge of an interstellar force that destroys any civilization that does so. It would seem like any civilization that becomes aware of such a force would be destroyed, so how would all of these surviving ones know of the danger? Actual dark forests are quiet because of a mix of animal instinct and visible signs of danger.
While it's possible that some civilizations would hypothetically be able to observe what happened to others and keep quiet, they would all have to do so to solve the contradictions of Fermi's paradox.
the_af•Mar 30, 2026
It's silly at multiple levels.
As an explanation of Fermi's Paradox it fails to explain why, if all these dead civilizations are detectable enough to get destroyed, we haven't detected any. Even if they are now extinct, their emissions must have been great enough to get them killed. So where are they?
It's very, very unlikely all of them went quiet because they learned of this out of pure theoretical reasoning. So where are their "corpses" so to speak?
And if they cannot be detected easily, because they are too far apart or emissions are near impossible to detect or recognize as evidence of intelligent life (the more likely actual explanation of Fermi's Paradox other than the simpler "they just aren't there"), then there's no risk of destruction.
viccis•Mar 30, 2026
Exactly. I think it's popular because it takes a difficult question and answers it with a conceptually elegant answer that has an evocative and spooky nature metaphor. Unfortunately, since it's so poorly grounded, any second-order imagery built off it doesn't really add any explanatory power and usually just winds up as a tortured metaphor.
For example, Yancey Strickler's "The Dark Forest Theory of the Internet" blog post (which he later spun into a book), which made the idea so popular in think pieces like this, completely misunderstands even the dark forest metaphor itself.
The thesis that in the past it was safe to share ideas and projects because execution was hard, and that now things have changed because of AI, is an interesting idea, but I wonder if it is really true.
It certainly seems true that AI can easily replicate small projects and relatively narrow-scoped things. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a Flask website" or "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but also saying that, at the same time, we can make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
MattDamonSpace•Mar 29, 2026
Sure, but the Forest point stands: whatever you can hide from the Forest slows it down and buys you some moat, even if only a brief one?
EA-3167•Mar 29, 2026
There’s a deeply flawed hidden assumption here, which is that the individual in question is the only possible source for the relevant information that the AI can harvest. In the real world that absurdly rare, original thought is rare because we’re in the mix with billions of others.
Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.
red-iron-pine•Mar 30, 2026
the untold billions don't matter -- the AI can sift through those. social media already exists to do that, and LLMs have the luxury of often having the chaff separated from the wheat ahead of time.
science is not inevitable, and there is no telling whether people will reach the same conclusions in a reasonable time frame.
nicbou•Mar 29, 2026
The problem for me is that I'm competing with the AI results that Google trained on my work. I'm losing the majority of my traffic to it, so at some point I'll have to give up because the work no longer supports me and no longer has an audience.
djeastm•Mar 29, 2026
Same here. Knowledge is being commodified.
georgemcbay•Mar 29, 2026
> Knowledge is being commodified.
It already was, well before AI; the difference now is that a few big AI providers risk becoming the ultimate rent-seekers that will increasingly capture all of the value of that commodified knowledge, whether the original knowledge generators want that or not. There is no opt-out; everything will be vacuumed up into the machine mind.
This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).
Bright future ahead!
Terr_•Mar 30, 2026
Industrial-scale plagiarism. A form of copyright-laundering only available to big actors.
red-iron-pine•Mar 30, 2026
OG feudalism involved owning knights and horses and armor and grain production; techno-feudalism involves owning all ideas
jandrewrogers•Mar 29, 2026
It isn't just about AI. Some R&D domains started disappearing from literature and the public internet a decade before the first LLMs. The incentives to go dark emerged even when the adversary was other humans. AI is just accelerating a trend that was already there. Some areas of frontier computer science research have largely been dark for decades.
The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.
In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.
In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.
sigbottle•Mar 29, 2026
Interesting, any examples of companies that followed this model?
You get a time advantage from this strategy, but your talent will be poached and your competitors will be able to catch up fairly quickly.
jandrewrogers•Mar 30, 2026
I used to think this but it only seems to be true for a shallow tech advantage, which isn’t this scenario. A sufficiently deep stack of compounded tech is robust against even aggressive talent poaching. The knowledge is embedded in the network, not the random individual.
We see this in jet engines, silicon fab, et al.
asdff•Mar 30, 2026
I mean, even North Korea has figured out the nuclear bomb, the original greatest-secret deep stack of compounded tech. Seems like anyone on this earth can figure out anything if they are hell-bent on it. Engineers seem to be more fungible than people anticipate, I guess, and no one really comes up with unprecedented, unique ideas. The whole research process incentivizes incremental work on known concepts to justify receiving funding at all, since funding is in high demand and short supply.
adrianN•Mar 30, 2026
North Korea had the advantage of some seventy years of technological advancement since the first nuclear bomb.
Vachyas•Mar 30, 2026
> And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
The blog post does touch upon this. The key difference, I believe, is that compute scales in a way "meat-heads" don't: if the other person has 100x the capital to throw at it, they could do the same 10-hour thing in 10 minutes.
Basically, what I got from it was that innovation has never been truly scalable enough to create the "dark forest", since hiring more and more engineers saturates quickly. But if/when innovation does become scalable (or crosses some scalability threshold) via AI, that could trigger a "dark forest" scenario.
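A quick sanity check of that scaling claim, assuming (a strong assumption) that replication work parallelizes perfectly across compute:

    # Hypothetical: 100x the capital/compute thrown at a 10-hour task.
    # Real projects have serial bottlenecks, so this is an upper bound.
    hours = 10
    capital_multiplier = 100
    minutes = hours * 60 / capital_multiplier
    print(minutes)  # 6.0 -- same order of magnitude as "10 minutes"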
ginko•Mar 29, 2026
>You are creating your cool streaming platform in your bedroom. Nobody is stopping you, but if you succeed, if you get the signal out, if you are being noticed, the large platform with loads of cash can incorporate your specific innovations simply by throwing compute and capital at the problem. They can generate a variation of your innovation every few days, eventually they will be able to absorb your uniqueness. It’s just cash, and they have more of it than you.
That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.
Platforms cherry-picking successful ideas and stealing them isn't new. Platforms could do this because they had the capital and the platform (distribution).
What is different is that LLM platforms literally have the world's thoughts, ideas, and conversations, plus a big part of the code (or can generate it). It's like "pre-crime"... they could copy your idea, or catch a brewing trend and replicate it, before you even release it.
kadhirvelm•Mar 29, 2026
Honestly, my hope is that the arbitrage that allowed big tech to make the kind of margins it does on software starts to go away because it's sooo cheap to build software. In other words, defending the technical moats we rely on today doesn't make sense in the future because it's not a reliable way to make money. Aka no need to protect your technical secrets because there's no capitalist reason to lol. Taken further, my naive hope is that societal attention moves away from this layer and onto whatever becomes the new way to make money, and the people left paying attention to software are big on sharing.
zenogais•Mar 29, 2026
Might just be independent discovery, but the main idea of this blog post is more or less the exact theory advanced in the recent book "The Dark Forest Theory of the Internet" by Bogna Konior (https://www.amazon.com/Dark-Forest-Theory-Internet-Redux/dp/...).
p2detar•Mar 29, 2026
Interesting. How does this book stack up to Maggie Appleton‘s Dark Forest hypothesis? It’s been some time already since she made it.
Well, I didn't know about this book, so I suspect (or hope) the exact points I make won't map to the ones from the book.
It is true that the original "The Dark Forest" book made an impression on me, so I was often thinking about its theories and trying to apply them to various situations.
zenogais•Mar 29, 2026
Yeah, I fully believe independent invention by mapping "the dark forest" onto the internet is very possible.
hrimfaxi•Mar 29, 2026
The irony is that it undermines the premise. Multiple people independently arriving at the same conclusions means that you can hide your ideas from the dark forest but that won't stop them from being uncovered.
middayc•Mar 29, 2026
Interesting irony :). If you don't produce the idea for fear of just feeding the forest, someone else will, so it might as well be you. It's true that this is very similar to the dilemmas some people currently have about their ideas.
The difference is that people used to see just the outer shell of your ideas - but if you use LLMs to search, explore, and code your ideas, the system "knows" it all, or even more than you, given that it can "cross-pollinate".
Near the end you start to describe the paradigm the machines build in The Matrix. Neo is the aberration they seek to reincorporate to sustain their inability to innovate.
bonoboTP•Mar 29, 2026
Valuable ideas have always been those that others find unintuitive, where it's kinda hard to get people on board because they're skeptical and need a long-form, tailored explanation to get convinced. If a short elevator pitch convinces them to go home and try to build it, it's probably already being considered by others.
beej71•Mar 29, 2026
Makes me think of rebuilding libraries with AI to change the license.
pugio•Mar 29, 2026
Thanks, this helped crystallize something for me: the play the AI labs are making is anti-fragile (in the Nassim Taleb sense):
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
gobdovan•Mar 29, 2026
Instead of anti-fragility, I'd point you to the law of requisite variety.
You'll notice that all AI improvements are insanely good for a week or two after launch. Then you'll see people stating that 'models got worse'. What happened, in fact, is that people adapted to the tool, but the tool didn't adapt any more. We're using AI as variety-resistant, adaptable tools, but we miss the fact that most deployments nowadays do not adapt back to you as fast.
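For reference, Ashby's law is usually glossed as "only variety can absorb variety". One common entropy formulation (reading the deployed model as the regulator and the adapting users as the disturbance source - my mapping, not Ashby's) is:

    H(O) \geq H(D) - H(R)

where H(D) is the variety of disturbances, H(R) the variety the regulator can muster, and H(O) the residual mismatch in outcomes. A frozen deployment caps H(R) while users keep raising H(D), so the felt mismatch grows - the "models got worse" effect.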
chongli•Mar 29, 2026
New models literally do get worse after launch, due to optimization. If you charted performance over time, it'd look like a sawtooth, with a regular performance drop during each optimization period.
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
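Purely to pin down the claimed shape (replies below dispute that it shows up in tracked evaluations), a hypothetical sawtooth can be simulated in a few lines:

    # Illustration of the hypothesized sawtooth, NOT measured data:
    # score resets at each launch, then decays as optimizations land.
    import numpy as np

    launches, weeks_per_gen = 4, 12
    peak, floor = 100.0, 70.0

    t = np.arange(launches * weeks_per_gen)
    score = peak - (peak - floor) * (t % weeks_per_gen) / weeks_per_gen
    print(score.round(1))  # 100.0, 97.5, ... 72.5, then back to 100.0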
gobdovan•Mar 29, 2026
Is this insider info? The 'charted performance' caught my eye instantly.
Couple things I find odd tho: why a sawtooth? It would more likely be square waves, as I'd imagine they roll out the cost-saving version quite fast per cohort. Also, aren't they unprofitable either way? Why would they do it for 'profitability'?
chongli•Mar 29, 2026
It's not insider info; it's common knowledge in the industry (google "model optimization"). I think they are unprofitable either way, but unoptimized models burn runway a lot faster than optimized ones.
The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
bonoboTP•Mar 29, 2026
It's rumors based on vibes. There are attempts to track and quantify this with repeated model evaluations multiple times per day, but no sawtooth pattern has emerged as far as I know.
chongli•Mar 30, 2026
I don't want to go too far down the conspiracy rabbit hole, but the vendors know everyone's prompts so it would be trivial for them to track the trackers and spoof the results. We already know that they substitute different models as a cost-saving measure, so substituting models to fool the repeated evaluations would be trivial.
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
eudamoniac•Mar 30, 2026
If a claim is unfalsifiable, it contains no information
chongli•Mar 30, 2026
What I said is quite far from unfalsifiable. Any number of insiders could step forward and set the record straight.
eudamoniac•Mar 30, 2026
Not really. Only in the affirmative. If an insider said they don't do that, you'd still think they might do that.
girvo•Mar 29, 2026
> If you charted performance over time, it'd look like a sawtooth
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".
nextos•Mar 29, 2026
You have a point, but current LLM architectures in particular are very vulnerable to data poisoning [1,2].
No idea why you're being downvoted. We can't yet even demonstrate that LLMs will withstand training on their own output as they pollute the Internet.
jauntywundrkind•Mar 29, 2026
The view here shows big huge powers of technocapital consuming all else, stealing every idea.
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/, https://news.ycombinator.com/item?id=46659456 - although I have some qualms with its focus on privacy), with open social protocols baked in, seems like maybe, possibly, it can eat some of the vicious, consumptive technocapital - in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT Proto AtmosphereConf is happening right now, and there's been a long mantra of "we can just build things"; finding adoption, but also doing what conference organizer Boris said yesterday - "maybe we can just pay for things", supporting the projects doing amazing work - is a huge unknown that's essential to actually steering us out of the dark technology, where none of us get to see, or have any say in, how the software-eaten world around us runs, where mankind has, for the first time in tens or hundreds of thousands of years, been cut off from the world OS, removed from the gods' enlightenment / our homo erectus mankind-the-toolmaker natural-scientist role.
I think the answer to the Dark Forest fear is building together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative, socially empowering ways.
middayc•Mar 29, 2026
I hope open-source models / crowdsourced approaches to training will also be an important part of the ecosystem, keeping it honest and providing an exit - similar to the role open source plays for operating systems and other important software.
But I don't see a trend of big companies really opening up. They usually open up only if it benefits them (which can and did happen in various scenarios). Everybody is accepting and open while trying to grow, and closes up once they can reach a monopoly.
mpalmer•Mar 29, 2026
As a work of persuasive writing, this is unfocused and seems mostly generated.
One thing I would have expected someone who knows their history to notice: forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
elevaet•Mar 30, 2026
> Except literally going outside and just talking to people? Using whiteboards?
> Also, you fed it when you used a model to write this blog post. You didn't have to do that.
my thoughts exactly
rhubarbtree•Mar 29, 2026
This is misled by the nerd philosophy that the tech is the business. It absolutely isn't; the tech is a small part of a startup. Witness that Spotify continues to exist despite being known and replicated by the major giants.
Poetically expressed, but ultimately based on a false notion of what a business actually is.
p2detar•Mar 29, 2026
It's nuanced. Spotify is a giant; I think the example you're looking for here is SoundCloud. They almost went bust but managed to get the ads business right, and seem to be afloat now. So I think you're right in that sense, but also wrong in the sense that if I'm building a desktop app or tooling software, my business is probably much easier to replicate and displace.
bdangubic•Mar 29, 2026
you picked one fairly rare thing with an incredible non-tech moat that is also a cancer for the artist, bravo!!
rhubarbtree•Mar 30, 2026
TikTok, cloned by Meta, still winning.
Slack, cloned by Microsoft, still winning.
Skyscanner. Dropbox. Snapchat. Dating Apps. All cloned. Still going.
The tech is not the business.
xstas1•Mar 29, 2026
This maps nicely to Cybermen in Dr Who
alembic_fumes•Mar 29, 2026
> This is the true horror of the cognitive dark forest: it doesn’t kill you. It lets you live and feeds on you. Your innovation becomes its capabilities. Your differentiation becomes its median.
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!
If this is the dark future that AI use brings us, I say bring it. Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
entropi•Mar 29, 2026
Unless you own the data centers yourself, you only get what they allow you to. And those gatekeepers, lawyers, and licensing agreements, while certainly not perfect, did let people monetize their intellectual work. Also, I think it is incredibly naive to think the owners of the compute and the energy won't play the hardest gatekeeper the world has seen, once the conditions become right.
nextaccountic•Mar 30, 2026
OpenAI and other giant rent-seekers benefit from advances by everyone.
It's unclear whether the general public will benefit once AI prices are jacked up, especially if AI companies succeed in passing regulations to kill most of the competition.
red-iron-pine•Mar 30, 2026
> Even if it means that somebody gets filthy rich in the process, while making the rest of humanity better off.
this guy thinks the rich people care if others are better off
xantronix•Mar 29, 2026
I have been mulling this over and I think I have some solutions in mind, at least for myself.
• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.
• Take this as an opportunity to build closer, longer-lasting relationships with people.
• No more emphasis on metrics. I can microdose on dopamine from natural sources, like, looking at a beautiful sky at sunset, or cuddling my dog.
• Open hardware, or, at the very least, hardware we can still control of our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
arkensaw•Mar 29, 2026
I don't know, I think it's an overreaction.
> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?
If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
xantronix•Mar 30, 2026
> If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all?
To prove to myself that I can, and to solve problems in a way I enjoy.
I'm not saying I want to go into utter solitude; I just want to be a lot more careful where and how I share my works.
Addendum: I think private art and code collectives, entirely separate from concerns of LLM consumption, are an interesting idea worth pursuing. Has something like that been tried before? It's reason enough for me to engage in it.
I'm not afraid of the LLM, but I also acknowledge that some people _feel_ a fear of theft by soulless robots.
What I do fear is the possibility of megacorp robots being the only ones… local and “dark” technology are essential.
xantronix•Mar 30, 2026
We're not anthropomorphising LLMs themselves as the bogeymen here. They are simply the spear tip held by the locus of power. If we do not have conceivable means to produce models of our own accord, for our own usage, do we really have the same degree of power that owners of datacenters appreciate?
In the words of Zack de la Rocha: "Fuck tha G-ride, I want the machines that are makin' 'em". Furthermore: "Know your enemy."
movedx•Mar 29, 2026
If AI makes replicating other people’s ideas faster and easier, thus allowing capital-heavy market players to just absorb whatever idea you manage to execute, then perhaps, somewhat ironically, the economic moat you’ll have is your human nature, contact, and time? Perhaps we’ll see a shift in sentiment towards wanting to deal with and spend time with the people in the business, rather than just what the business can do for you and yours from a software perspective?
I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
Chance-Device•Mar 29, 2026
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition.
HN needs a better AI slop filter.
Or maybe I do. Maybe I can vibe-code a browser extension that preloads TFA links and auto-hides anything that isn't sufficiently human-authored.
simianwords•Mar 29, 2026
Can someone explain what I'm missing here?
If we are talking about releasing open-source software, it can already be used by companies with zero effort.
I'm guessing the author is talking about released closed source software or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?
I'm genuinely confused and I think this article is pure slop without any core idea.
orbital-decay•Mar 29, 2026
Some of that is rose-tinted glasses.
1. Sharing was never really safe; open source by default only became possible because of SaaS and rent-seeking behavior.
2. The early web (not the internet) wasn't hyperconnected. With the advent of global-scale social media, it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became an information superconductor with zero resistance, carrying infinite current. Also known as a short circuit.
akabalanza•Mar 29, 2026
Big up for the reference
griffzhowl•Mar 29, 2026
> The platform doesn’t need to bother with individual prompts - it just needs to see where the questions cluster. A map of where the world is moving.
This was insightful, but is it much different to the kind of data Google and other search engines have had access to for a long time?
And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse-engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.
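To make the quoted claim concrete, here's a minimal, hypothetical sketch of "seeing where the questions cluster" - `embed` stands in for any sentence-embedding model, and nothing here is any platform's real pipeline, just the obvious shape of one:

    # Hypothetical: cluster a stream of prompts into rough demand buckets.
    import numpy as np
    from sklearn.cluster import KMeans

    def trend_map(prompts, embed, n_trends=20):
        vectors = np.array([embed(p) for p in prompts])
        km = KMeans(n_clusters=n_trends, n_init=10).fit(vectors)
        # Cluster sizes are the signal: where demand is piling up.
        sizes = np.bincount(km.labels_, minlength=n_trends)
        return sorted(zip(sizes.tolist(), range(n_trends)), reverse=True)

Structurally this is just search-query analytics, which Google has indeed had for decades - the difference is the depth of per-session context, not the clustering.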
__MatrixMan__•Mar 29, 2026
I think this only applies to a rather narrow set of ideas.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody makes it exist then I'm happy, because then I get to use it.
So what kind of things does this apply to? Likely, it's zero sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has been so far overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
pillefitz•Mar 30, 2026
Do you think there are enough such opportunities that we'll all be able to pay our rent?
__MatrixMan__•Mar 30, 2026
If we dispense with the waste that goes into fighting over slices of the pie and focus instead on making a bigger pie? Absolutely.
If we try to double down on the zero sum games that we learned from our parents, maybe not.
Skyy93•Mar 29, 2026
This article makes no real sense to me.
>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. Especially since LLMs are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may be collected as training data but isn't available to the model yet - you only have to be fast enough. And LLMs/AI agents like Claude enable exactly the speed you need to bring out something new.
The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or what they wanted to create.
RajT88•Mar 29, 2026
> This was the same before, if you had a novel idea and make a product out of it others follow.
You've almost captured the full picture of it.
If you have a great idea, it's not going to be self-evidently enough of a great idea until you've proved it can make money. That's the hard part which comes at great personal, professional and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did. Or the LLM could just reason it out. It could save them some work, sure.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
oh_my_goodness•Mar 29, 2026
Yeah, and the big guys can't steal your customers. What a crazy idea.
RajT88•Mar 29, 2026
The point is - they're going to do that anyways if they want to. Owning the LLM platforms makes it marginally cheaper to do so.
It's not the risk it's being made out to be.
oh_my_goodness•Mar 30, 2026
Absolutely. The fact that they know your app better than you do, and that they can revoke your ability to develop it at any moment, those are just details. Those things won't change the game at all.
satvikpendem•Mar 30, 2026
Unless you're using their API (in which case there's always platform risk, same as before), this is not an issue. There are lots of half-assed implementations of ideas by the big companies that smaller companies run circles around; The Innovator's Dilemma was literally written about this.
oh_my_goodness•Mar 30, 2026
In my opinion Christensen wasn't talking about outsourcing your entire development process to a competitor with much deeper pockets, giving them the ability to turn off your development at will [1], and then running rings around them. I'm sure you're familiar with his story about Dell and Asus. This is worse.
[1] Unless you're assuming that you maintain control over your technology while outsourcing most of the development thinking to a rented AI? Times have changed, and the API is not the only issue anymore.
satvikpendem•Mar 30, 2026
What is the issue? Local models still exist and will continue to exist, and even if they don't, good old-fashioned hand coding will never go away. The point is, even AI companies are run by people, and one company cannot make every product well; there are always exploitable gaps in the market.
hrimfaxi•Mar 29, 2026
> But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
Big companies seem to be bad at innovating but really, really good at enterprise sales.
zar1048576•Mar 30, 2026
I don't know if that's necessarily true. I do think that a big part of enterprise sales involves building a comprehensive solution that works well within the customer's ecosystem. Start-ups usually tend to build point products, which have value but are still missing functionality (even if that functionality is not scintillating) that customers really need in order to easily deploy and maintain solutions. Also, customers do care about things like the stability of their vendors and the level of available support.
hrimfaxi•Mar 30, 2026
I'm not saying it is always the case but does it not match your experience that big orgs like Microsoft are much better at navigating the enterprise sales process than a typical startup? Not to mention how many of those startups are just answering security questionnaires and other procurement gates with AI.
zar1048576•Mar 30, 2026
I definitely agree w/ you that big organizations are generally better able to navigate the enterprise sales process, but I'm mainly trying to say that customers might choose a bigger company's products for reasons that typically go way beyond that (e.g., better integrations, support resources, etc.).
RajT88•Mar 30, 2026
I've seen big companies manage to duplicate startup-like culture with small teams internally. Weird things like directors handling builds and source control duties. 12 hour days, working weekends.
These teams said that per man-hour they brought more value to the company than any other team. (But you know, they all say that)
annie511266728•Mar 30, 2026
I don't think the risk is that they copy your app.
The risk is that they make the category a built-in feature in something people already use. At that point, copying the product and taking the customers start to look like the same problem.
wpm•Mar 30, 2026
Yup, pre-cog Sherlocking. The mass keeps accreting towards the already too large players.
cryptonector•Mar 30, 2026
> But again - the hard part is not cloning the product, it's stealing your customers.
Yes. A Red Hat, a Microsoft -- these companies have processes, organizational structure, politics, friction, etc. They might like your products, but replicating them might not be easy for reasons that have nothing to do with how easy it is given the freedom to do it. Small shops with vision might well have a bright future, for a while, maybe.
middayc•Mar 29, 2026
> This was the same before: if you had a novel idea and made a product out of it, others followed. Especially since LLMs are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may be collected as training data but isn't available to the model yet - you only have to be fast enough. And LLMs/AI agents like Claude enable exactly the speed you need to bring out something new.
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
Given the detailed information that all this back-and-forth generates, I think it's not hard to use similar technology to track underlying trends, gather all the problems associated with them and all the solution space being talked about - and generate the solution before even the ones who thought of it release it. Theoretically :)
I think open development will become less open. I don't like it, but I think it's already happening. First all the blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss the code, that's not even in their care anymore. And that is without the theoretical fear of the global Borg slurping up all they write.
Dusseldorf•Mar 30, 2026
> LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user.
Seems like this is hard to do reliably across the board. Sometimes when I stop interacting it's because it nailed the solution, and sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid-conversation planning and feedback is enough, though.
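To make that concrete, here's a toy sketch of the kind of heuristic labeling being discussed - every event name and weight below is invented for illustration, not anything a provider has confirmed doing:

    # Toy heuristic: score a conversation from implicit user signals.
    # All event names and weights are hypothetical.
    SIGNAL_WEIGHTS = {
        "thumbs_up": 3, "copied_code": 2, "continued_same_task": 1,
        "thumbs_down": -3, "regenerated": -2, "abandoned_mid_task": -1,
    }

    def label_conversation(events):
        # Sum weights; unrecognized events count as zero.
        score = sum(SIGNAL_WEIGHTS.get(e, 0) for e in events)
        if score > 1:
            return "satisfied"
        if score < -1:
            return "unsatisfied"
        return "unknown"  # silence is ambiguous, as noted above

    print(label_conversation(["regenerated", "abandoned_mid_task"]))  # unsatisfied

The hard part is exactly that middle bucket: going quiet is compatible with both "it nailed it" and "I gave up".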
munificent•Mar 30, 2026
> This was the same before, if you had a novel idea and make a product out of it others follow.
The article says:
"Ideas are cheap - execution is hard"
"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."
That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.
Nevermark•Mar 30, 2026
Sharing any novel idea has never been so costly.
I am not arguing against sharing. Sharing can be for the greater good.
But as you note, things have changed. We could reasonably assume a genuinely significant good idea, set free, might go in the direction we shoved it for a minute. Or fade into inaccessibility.
Not any more.
mekoka•Mar 30, 2026
You seem to be agreeing, not arguing, with the person you're replying to.
cryptonector•Mar 30, 2026
Indeed. So?
munificent•Mar 30, 2026
> Sharing can be for the greater good.
One of the fundamental problems of humanity is that the majority of people will happily contribute to the public commons and share with everyone, enriching us all. But there is a minority of avaricious people who will do everything in their power to claim the commons for themselves.
This problem is intractable because the more people are good-faith actors sharing the public good, the more valuable that commons becomes, and the more it incentivizes people to try to take it.
middayc•Mar 30, 2026
And it's not just that execution is faster now. The competition used to see only the "outer shell" of your idea. But the LLM platforms (the forest) see the internals, if you used them to explore and develop it. They also see all the similar ideas across the globe.
And they own - not rent - the compute and models, as you do from them. If we want to extend this, they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
imrozim•Mar 30, 2026
the pre-cog angle is the scariest part. it's not even that they copy you after the fact - the prompt patterns across millions of users already signal where demand is clustering before any individual ships. the only real counter is speed and distribution: get to users before the signal becomes obvious enough to act on. which ironically means building in public is still the better strategy - hiding slows you down more than it protects you.
6510•Mar 30, 2026
Just a side note:
> "Ideas are cheap - execution is hard"
I would argue this mantra says more about the person repeating it. It simply means the person has no good ideas and is bad at execution.
I've not met many, but I'm sure there are many out there who are scary good at execution. Something like 1% perspiration, 99% experience. I can have a designer do a 100 euro design, hire someone to write nice code, rent a factory or an office; I might even be able to buy the machines at a good price. What I can't do is spin the Rolodex and (in 20 minutes) land enough clients who would absolutely love to work with me again. I can't find those private meetings and wouldn't be able to extend my reputation with the new project.
People with good ideas don't talk about them unless it is required. They don't talk with "ideas are cheap" people; it's pearls before swine. You can spot some of them if they did bursts of multiple unrelated complex patents. My favorite are the Rube Goldberg types of machines that combine well-known things in ways that exceed the sum of the parts. Something like step 5 uses the vibrations from step 1 while step 3 uses the heat from step 6.
To have good ideas you need many of them, but you also need to know execution, or you end up thinking the easy stuff is hard and the hard stuff is easy. Improvement is unlikely from there.
bodegajed•Mar 30, 2026
Say you're a small vendor. You created an innovative product and tried to sell it to a large company. Before, you could be destroyed simply by showing the product to a multi-billion-dollar company. But now even medium-sized companies can destroy you.
tayo42•Mar 30, 2026
>Especially for LLMs, they are not (till now) learning on the fly.
Was this just awkward phrasing or did something change and they learn after training?
Dusseldorf•Mar 30, 2026
There have been several projects lately attempting to create running context/memory, and Claude Code also has some concept of continuous conversational memory, but all of these are bolted on at inference time; there's still no concept of conversations feeding back into base model training/weights on the fly.
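To illustrate the distinction, a minimal sketch of "memory" bolted on at inference time - the file name and prompt format here are made up, not any vendor's actual mechanism:

    # Notes persist in a plain file between sessions and get prepended to
    # each prompt. The model's weights never change; only its context does.
    from pathlib import Path

    MEMORY_FILE = Path("memory.txt")

    def remember(note):
        # Append a note that future sessions will see.
        with MEMORY_FILE.open("a") as f:
            f.write(note + "\n")

    def build_prompt(user_message):
        notes = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
        return f"Notes from earlier sessions:\n{notes}\nUser: {user_message}"

    remember("User prefers short answers.")
    print(build_prompt("Explain Kessler syndrome."))

Everything "remembered" this way is just more tokens in the window, which is why it never survives into the weights.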
Skyy93•Mar 30, 2026
If you are implying I am a bot myself, I have to disappoint you. I am just not a native speaker, so some phrases could be awkward because I translate German to English.
8bitsrule•Mar 30, 2026
> This was the same before, if you had a novel idea and make a product out of it others follow.
March 20, 1926: Hungarian physicist, electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas are so essential that, in 1934, RCA is required to buy his patents.
Kalman who?
cryptonector•Mar 30, 2026
> Especially for LLMs, they are not (till now) learning on the fly. Claude Opus 4.6 knowledge cut off was August 2025, so every idea you type in after this date is in the training data but not available, so you only have to be fast enough.
First of all: it's not as though no new LLMs are being trained. Of course they are.
Second: learning LLMs are not far off, and since they can typically search the web via agents, they effectively can "learn" now, and they can learn (not so well) by writing stuff into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.
Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.
> The next thing is that we also have open source and open weight models that everyone of use with a decent consumer GPU can fine-tune and adapt, so its not only in the hands of a few companies.
There's a pretty good chance that LLMs buff open source, yes.
> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
> Why should this happen? The moment you make your idea public, anyone can build it. [...]
This was always the case, but now the cycle is faster. Therefore if you must use an LLM, you might use one that you run on your own hardware - now your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLMs') searches, and that will be enough in some cases for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on millions of interesting ideas floating about, but still! The moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.
red-iron-pine•Mar 30, 2026
why would they buff open source? why would Microsoft Copilot, with its insane costs, ever be used for that purpose?
or the insane costs for any serious LLM -- how does Anthropic get a return on investment by improving FOSS?
the end state is a walled garden and technofeudalism
AmbroseBierce•Mar 30, 2026
I'm sure being exposed to one million video games instead of 100 works just the same; scarcity was a feature, not a bug.
raincole•Mar 30, 2026
The author read too much sci-fi. But too little at the same time.
The problem is never that we don't have enough ideas. It's how to find the good ones among the sea of ideas. Most ideas that eventually prove right sounded very stupid at first. Selling books online? Pff.
By the way, Liu (the author of The Three-Body Problem, who popularized the concept of the "Dark Forest") has a short story about exactly that, Cloud of Poems. Unfortunately it's never been translated into English.
asdff•Mar 30, 2026
Another thing the author makes note of is the idea of the prompts getting logged. I can imagine that, with some clever statisticians, the model could construct the idea of some product or company before you even formulate it yourself, just based on what you already prompted. Then it can evaluate market fit, estimate return, start its own version of that company, make money, and beat you to market, should it turn out to be a good idea.
Now before you say this is unrealistic or isn't done today, just know this is all perfectly possible with existing technology. In fact, this is a lot like how adtech already works, using metadata to predict products you might want to buy before you even realize you want them.
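A toy version of that trend-spotting, with made-up prompts (a real system would use learned embeddings over millions of logs, but the shape is the same):

    # Cluster logged prompts and surface the densest cluster as a
    # "demand signal". The prompts are invented for the example.
    from collections import Counter
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    prompts = [
        "app to split rent with roommates",
        "roommate expense splitting tool",
        "split household bills with roommates",
        "recipe generator from a fridge photo",
        "meal plan from ingredients I already have",
    ]
    X = TfidfVectorizer().fit_transform(prompts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    densest, size = Counter(labels).most_common(1)[0]
    print(size, [p for p, l in zip(prompts, labels) if l == densest])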
layer8•Mar 29, 2026
> Resistance isn’t suppressed. It’s absorbed. The very act of resisting feeds what you resist and makes it less fragile to future resistance.
On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?
SirensOfTitan•Mar 29, 2026
I don't quite remember the details, but there's a fascinating section in Julian Jaynes's "Origin of Consciousness in the Breakdown of the Bicameral Mind" where he talks about how metaphors condense down into more complex forms, and as they do they unlock new realities previously impossible to fathom. The classical example here is the simultaneous discovery of calculus by Newton and Leibniz: the larger context defines what is possible.
I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like: "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, modes of engagement would've taken pages and pages to describe, but after, it becomes comparatively much cheaper.
... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighing functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when balance is shifted more toward network effects, it means that the whole system becomes more brittle because it makes it even harder to boot out incumbents.
There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.
stego-tech•Mar 29, 2026
I’m still optimistic that this is cyclical in nature, and not an inevitable - or indefinite - outcome.
Humanity has endured regular cycles of shared enlightenment (usually accompanying profound technological or societal revolutions) and dark forests of protectionism, and we have always found a way through. Sometimes these cycles last a century; sometimes, only a few years. Still, we always make it to the other side.
In the case of LLMs, we have to make a few assumptions: that they will not lead to AGI, nor will we solve the problem of real-time learning or context windows. These are, admittedly, huge assumptions, but the current state of AI and compute suggests a nugget of truth to them for the time being. If that’s the case, then perhaps this “dark age” of the dark forest is bounded by the limitations of silicon-based computing (hence the push towards Quantum) and the human frustration with diminishing returns from technological investment. As artisans and brilliant minds withdraw, the forest risks starvation and withering from a lack of sustenance; if humans withdraw from technology because they must hand over IDs and personal data, because to engage with technology is to surrender to surveillance and persecution, then the natural trend will be to withdraw over time - and the markets will adapt accordingly, with or without external/government intervention.
That is to say that the dark forest only lasts as long as its inhabitants decide to persecute each other for daring to light a path forward. Right now, the incentives very much favor those willing to harm others for personal enrichment; that is not always the case, and humans decide when that reasoning becomes vilifiable.
middayc•Mar 29, 2026
I seem to get into a sort of existential crisis every few months with the progress that LLMs are making. I probably fool myself for a while that "it's not real", then at some point I can't fool myself any more - then I accept it somewhat ... then the new progress happens and it cycles again.
But as it's written at the top, this was a thought experiment, not a prediction. And while I tried to put all the bad scenarios on the table (with the theme of the dark forest that is), I think I again found a sense of optimism, because I also think this thought experiment has flaws.
So I hope that after a while I will be able to write the contrary; I've already written down some points about it - I already have a title. But we will see. I am more optimistic after writing this than before. :P
chairmansteve•Mar 30, 2026
You are right. In the dark forest, the predators must eventually die out, because they can't find prey.
asdff•Mar 30, 2026
Some predators can go weeks to months between feeding. Some snakes, some spiders. Now consider some lovecraftian horror.
perilunar•Mar 30, 2026
No. It's not really a predator-prey relationship, because the predators aren't consuming the prey for sustenance. They are killing out of self-defence only.
rglover•Mar 29, 2026
This feels dramatic. The parts are there to rebuild a better web. If you want to build it, build it. But most still want the money, they just want to get it while also retaining the moral high ground. A VPS can still be had for cheap. Code is now "free" (not necessarily good code but like you suggest "good enough"). The only thing stopping you at this point is your own ego (and its expectations of success).
"You wanna escape Armageddon, read a different book." - KRS-One
skybrian•Mar 29, 2026
How about putting an idea or a vibe-coded demo out there in hopes that others will copy it, because you want it to exist and become more common? But it's less work if someone else maintains it.
This is free as in free puppy.
boutell•Mar 29, 2026
OK, so maybe we're headed for a dark forest scenario as far as profit driven startups go.
But if your goal is simply for the thing to exist, there is a strong incentive to share.
stephen_cagle•Mar 29, 2026
I think the most interesting idea here is the idea of people purposely keeping secrets in order to maintain advantages.
Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear on whether they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.
Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.
In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever that you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.
But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was in the previous world. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.
And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.
mwkaufma•Mar 30, 2026
Like Cixin Liu's "Dark Forest" which inspired the author, this is science fiction.
LLMs do not have and cannot obtain the capabilities the author is hand-wringing about, and the current much-hyped apparent productivity will pop with the bubble, once corps have to start paying full price for chatbot access.
mikewarot•Mar 30, 2026
In a recurring metatheme when it comes to AI and coding, I call bullshit. It's been 80+ years since we had a really great idea introduced, in "As We May Think" by Vannevar Bush[1,2]. We still don't have a Memex. Hell, we don't even have a standard way to add annotation[3] on top of hypertext. No matter how useful the idea is, and how much some of us want it, it just isn't going to happen because of copyright.
Instead, we've got the slop[4,5] that TBL came up with, and it stuck.
The best ideas aren't the most profitable, and thus remain outside the goals of the "Dark Forest". The best thing to do is to just have fun, and not worry about profit, like this man, his cats, and his use of the 3d printer to make a train for them.[6]
1) Everyone and everything is subsumed into the forest. Innovation becomes unprofitable for the innovator as the one who controls the forest uses their capital to clone every new innovation.
2) Everyone withdraws from the forest. Innovation goes private. The forest stops growing, but doesn't die.
---
But there are two things the post doesn't consider:
1) Viral licensing.
What happens to a model if it is trained on data that comes with a license? What happens if the powers that be decide that the model producers, the models, and the products of the models themselves must follow the conditions of the licenses? How will that affect the model producers? What if customers don't want to be beholden to those licenses? What happens if the conventional wisdom becomes to avoid models in order to avoid lawsuits? What happens when models, model producers, and customers pursue lawsuits against (other) model producers? Where would the new equilibrium between model producers and innovators move to?
2) Non-profit models
What happens to model producers if customers shift to become non-profits themselves, specifically ones that pay employees instead of model producers? Would the model producers become starved out? Or would they need to switch to non-profit status as well? How would model producers, the models, and the forest as a whole change if profit were no longer the priority?
mannanj•Mar 30, 2026
what if we properly implemented copyright and protection for software to prevent cloning-style theft?
I mean, we haven't had an innovation in patents and trademarks for software in how long? Why is it that only hardware gets effective patent and trademark protection - can we really find no way to do this for software that can't be abused by patent trolls?
middayc•Mar 30, 2026
As the first line of the post says - it's a thought experiment, so comments like yours that open new options and ask new questions are the best outcome.
I have no other comment other than - very interesting. I thought about how the overlying model will change for us, but haven't considered that the underlying model (what you proposed) can change too ... if that makes sense.
daemin•Mar 30, 2026
I asked recently on social media whether anyone knows if there has been a legal decision on whether GPL source code used for training an LLM taints all of that LLM's output with the same GPL licence. So far nothing has come up, but I think people want to know the answer.
It has been said that Microsoft indemnifies people using its LLM tools against copyright and patent issues, but I don't know if that applies to LLM output which might/should be GPL licenced.
Terr_•Mar 30, 2026
> Viral licensing
IANAL, but for some months now I've been pondering what could happen if a site - like a personal blog - had a legally strong "click-wrap" Terms of Service, which, unlike the GPL, rests mostly on contract law rather than copyright.
For example, imagine a ToS that says something like:
1. You acknowledge I am providing you something of value (my content) and you agree to provide compensation/consideration if you use that in an AI model.
2. By doing that, you grant me an irrevocable worldwide license to use and sub-license all content that emerges from the model, notwithstanding any future agreement you may make in reselling access or outputs of that LLM to anyone else. If a conflict should occur, you agree to indemnify me against claims by that other entity.
3. If you believe my content was not a material factor in some output, the burden of proof is on you to identify the specific output and prove that my content did not influence it.
In short, this doesn't stop someone from stealing my art/writing, but it does put a potential hole in their attempts to monetize it.
For example, if they scrape my music and then license a copy of the new Music Generator 3000 to Disney, and Disney makes a movie with that music...
bsza•Mar 30, 2026
The best analogy I can think of (quite similar to this one) is that the internet is low Earth orbit and AI is the Kessler syndrome. We abandon the place not to hide ourselves, but because it is saturated with garbage, and anything you try to put up there will only result in even more garbage being generated, without any positive effect.
The ideal solution would be to remove the garbage, but right now we can't even detect it, let alone figure out a way to get rid of it. Besides, it's a zero sum game, why bother cleaning up when you can just effortlessly pump out more garbage in hopes that some of it will remain in orbit for long enough to benefit you.
middayc•Mar 30, 2026
This is interesting.
When I read it for the second time, trying to understand it - maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specific motivated entity, to start removing it, so it just grows.
DaiPlusPlus•Mar 30, 2026
Enshittification specifically is when a product/service/platform gets worse from the user’s perspective because the platform vendor can directly profit from user-hostile design; for example, Google intentionally serves up bad results on the first search results page so the user clicks through to the second page of results, resulting in more advert revenue for Google[1].
…whereas I feel what you’re describing is another Tragedy-of-the-Commons.
enshittification is a hip, tech-bro term to mean "rent seeking" and is nothing new
Barrin92•Mar 30, 2026
I don't buy the analogy. The problem with Kessler syndrome is that low Earth orbit is physically crowded; you run into collisions. I don't care about the garbage. I don't care about the AI era. I've been writing code in Emacs for 20 years, and I'll be writing code in Emacs in 20 years. Every open source project I contribute to still looks the same, because all these AI people, like the blockchain people, just make new stuff up in their own incestuous Tupperware-salesmen ecosystems.
I do pity the bug bounty people who rely on goodwill in their programs given that everything with a financial incentive is vulnerable. But otherwise the great thing about digital spaces is that there is, for practical purposes, unlimited space.
Every day there's another "how do you deal with the AI apocalypse" article; I just ignore them.
chongli•Mar 30, 2026
I think by "internet" they mean search engine results pages. If you restrict yourself to short, common queries and only look at the top 10 results on the page, then the space really is very limited. If all those top 10s for common queries start to get crowded out with AI slop, then people are going to start abandoning search.
bsza•Mar 30, 2026
Well, if you open-source anything these days and it does make it big, you can expect a flood of low-effort slop PRs that you must either review for free or stop accepting external contributions altogether, making it effectively closed-source. You can't choose to ignore the garbage; it will collide with your stuff, unless your stuff is small enough to avoid collisions (in which case no one will see it).
xnorswap•Mar 30, 2026
Zero-contribution open source doesn't at all make it closed source.
It delivers on the value of open source, that anyone using your software is permitted to make and distribute their own changes.
SQLite is an example of a project that is open source but closed contribution.
bsza•Mar 30, 2026
Maybe, but that's hardly comforting (and definitely not in the spirit of open source) if you're forced to make that decision, knowing it will hurt your project, because the alternative is getting DDoSed.
iib•Mar 30, 2026
If by "the spirit" you only mean the bazaar model, then yes. But it's in the original spirit of free software: GNU preferred to keep development somewhat contained, even all those years ago.
SQLite•Mar 30, 2026
Minor correction: SQLite is not closed to contributions. It just has an unusually high bar to accepting contributions. The project does not commonly accept pull requests from random passers-by on the internet. But SQLite does accept outside contributed code from time to time. Key gates include that paperwork is in place to verify that the contributed code is in the public domain and that the code meets certain high quality standards.
xnorswap•Mar 30, 2026
Thank you for the correction, I should have said "not open contribution" rather than "closed contribution".
Zigurd•Mar 30, 2026
I was about to try to make this point: there have always been projects that attract more potential contributors than there are competent contributors.
And there have always been techniques for identifying quality contributions from new contributors.
Ygg2•Mar 30, 2026
> I've been writing code in Emacs for 20 years, I'll be writing code in Emacs in 20 years
Bold assumption. On what will you run Emacs if the average PC costs $12,000? Yes, even a Raspberry Pi. It's not called the war on general computing for nothing.
If you say the cloud, that will be cut up and reused by the next AI crawler.
knowhy•Mar 30, 2026
AI will not be able to eat up all chip manufacturing capabilities forever. At some point the market will be saturated and PCs will get affordable again.
nextaccountic•Mar 30, 2026
True, but as they say, the market can remain irrational longer than we can remain solvent
We simply don't know how long this bubble will last
Ygg2•Mar 30, 2026
And COPA didn't succeed at first, but try and try and you get COPPA, and now age verification laws.
I don't think we'll see affordable PCs in my lifetime. It didn't happen after the Bitcoin crash, and it didn't happen post-pandemic. The new price gets normalized and the cartels just agree not to make anything for PCs.
And if you get everyone on cloud? Then you can control Internet same way you can control TV or the press.
Dylan16807•Mar 30, 2026
> I don't think we'll see affordable PCs in my lifetime. It didn't happen after the Bitcoin crash, and it didn't happen post-pandemic. The new price gets normalized and the cartels just agree not to make anything for PCs.
What's your definition of affordable? What years were PCs affordable? By my reckoning PCs are affordable today. If you're not trying to run games they're downright cheap.
I'm not sure what issue you're referring to with bitcoin, but if you want to use bitcoin to buy something it's about as easy/awkward as it ever was.
Food prices went up 15-20% more than they would have with 2% inflation. If PC prices do anything similar, it's not a big deal in the long run.
Cartels just agree not to make anything for PCs? Why would that happen? The point of restricting supply to a market is to maximize profits, not to refuse forever and lose out. They wouldn't even want everything to be in the cloud, because a hundred rarely-idle cloud cores can replace a lot more than a hundred mostly-idle consumer cores, so they end up selling a lot less hardware.
adrianN•Mar 30, 2026
You can use old hardware.
Dylan16807•Mar 30, 2026
> It's not called war on general computing for nothing.
Companies paying too much for hardware to chase a bubble is not "war on general computing".
> Even Raspberry Pi.
What's preventing supply from catching up with demand in this situation?
If high prices stick around long term, there will be so many chip fabs ready to pump out $100 pi-equivalents that still let them have a 200% markup.
Also I can go buy a quite good mini PC with 16GB of RAM for $300. In what world does that price go up another 40x?
bodegajed•Mar 30, 2026
This is why, when I'm researching a solution (one that an LLM cannot figure out), I now go to GitHub but often check whether the project was created before 2022, due to AI slop concerns.
ohelm•Mar 30, 2026
I would suffocate it. Know the greedy snake idiom? A snake so hungry and greedy that it suffocates on its prey?
The best you can do is spread all of the goods it provides, as it is too greedy not to devour them itself. It will consume them and suffocate slowly.
yetihehe•Mar 30, 2026
> It will consume them and suffocate slowly.
Can we accelerate it, perhaps? You know, spending ALL our resources on making the snake fatter is not a good idea. It's only a good idea when you have so many resources that you can easily suffocate the snake with a negligible (for us) amount. If you try to suffocate several million snakes, that might backfire a little.
Liu Cixin's Dark Forest theory is a pretty dumb take, honestly. Just look at Earth - different species don't constantly try to wipe each other out. Sure, it happens sometimes, but it's actually relatively rare, and a lot of the time extinction isn't even intentional. Like, a huge chunk of Native American deaths came from disease, not deliberate extermination.
At the end of the day, Liu Cixin is basically a social Darwinist who's got a thing for authoritarianism, and it bleeds through pretty heavily into his work. Dude is massively overrated imo.
irl_zebra•Mar 30, 2026
I think the book specifically and explicitly covered the "dark forest doesn't apply when species are near one another" angle.
zhoujing204•Mar 30, 2026
How far is "near," really? Human civilization took tens of thousands of years just to discover a new continent, and the ocean back then was essentially as vast and impenetrable as space is today. If we ever actually develop near-lightspeed spacecraft, are we seriously assuming the first thing we'd do is build weapons capable of annihilating entire civilizations — and then actually use them? Oh my god, we already have those weapons, and the most likely target has always been ourselves.
convexly•Mar 30, 2026
Most people already have this problem with their own thinking though. You make a big call at work, it plays out over 6 months, and by the time you know whether or not it was right you've already rewritten why you made it. That feedback loop barely exists.
storus•Mar 30, 2026
This has a grain of truth, though companies would only execute your ideas if doing so doesn't destroy their own business. Imagine creating your own Bloomberg Terminal/Capital IQ using agentic AI - you'd directly attack incumbents and not give them more profitable ideas. For potentially profitable ideas, one could just look at all the companies Google/Meta bought in the past and killed, then just redo them using AI.
mememememememo•Mar 30, 2026
Yep AI has made it X times easier to successfully make millions copying someone's idea.
X=1.0000001
dwd•Mar 30, 2026
As a separate analogy, one related to physical products: I built a website many years ago for a guy who had patented a clamp for frameless glass panels that didn't require drilling the glass, primarily used for pool fencing.
The problem was that as soon as he got the patent, it was available to view in countries where enforcing it wasn't economically viable, and the market very quickly filled with cheap imitations. He said straight out at the time that he regretted getting the patent.
woopsn•Mar 30, 2026
A relatively grandiose post that sweeps in all kinds of claims about the universe merely to warn that a big corporation can copy your idea easily.
middayc•Mar 30, 2026
Guilty :]
positron26•Mar 30, 2026
Let me write a more interesting body. So hiding is the most rational - the only - strategy of survival.
In the beginning, you reached out with reckless abandon. It was fun to banter with dogs online. Nobody would ever see unless they were looking through your wall. There was no search. No comment history. Bumping into someone in the vast night was enough of a miracle. Why hold back? There are some forum warriors on some phpBB somewhere, but the domains they rule are insignificant. If you're talking to someone, your motivations are rooted somewhere in the grass.
First came the like button. Rather than blindly hoping what you say resonates with the sensibilities of people you probably knew IRL, rather than presenting your genuine self because there were no scores, the incentive signal would begin to distort us. Then the newsfeed meant that if you got enough likes, you might get a moment of fame. We all knew it was a terrible idea, a force that would only corrupt us. The personal nature of disjoint little walls living in isolation was being replaced by global stack-ranking.
Then the algorithms came. With them came content marketing to jump the line. At first the ten blue links were filling in the sparsity. Along with that came only a little bias, connecting semantically distant topics, but with a little bit of a feedback loop, a resonator with an unknown response curve. Engagement could be measured, and before long, we were chasing the same likes we used to train the system, and trained by our likes, attracted we became to mysterious stable manifolds, chasing the chase we ourselves define, like NASCAR, but insidiously more stupid.
Little by little, the incentive trails no longer led back to the grass. Reality became suspended without support, a self-sustaining virtual reality determined to fight you to prove that it exists, to prove that its conclusions were right. Every out-group is understood to be an echo chamber, an ant mill spiraling helplessly, yet cynically, those who understand these mills best also wind them up like Beyblades to crash them into other communities, seeking advantage with the asymmetry of outrage. After the battles, say what was made common to say, and you will be rewarded.
The spinning wheels cannot steer themselves and instead are dictated by whichever chaotic divergence generates the most powerful local gravity well, but because the goal of most is to harvest karma at the bottom, and because the mass controls where the bottom is, over and over we find ourselves pushing all others into the nearest pit to more quickly generate the illusion-giving singularity.
Like Darth Nihilus, the internet seeks only to feed, to feed on the validation that only the internet can give, the permission-giving blessings it needs to tell itself why the grass is wrong. All those who speak of grass are wrong. All those who smell of grass reek and are wrong. We must destroy the grass, all those who appeal to grass. After all grass is dust, at last we will project our utopia into reality. At last we will be not only right but so right that our beliefs will project back into reality.
The spaces within this over-connected, globally addressed world grow into a new kind of sparseness, one where all knowledge of grass must be concealed. Those who can ground the conversations in primary sources flee. Those who can color reasoning with nuance instead withdraw. Reality has retreated as the most dominant reverberations roam like the predator cities of Mortal Engines, looking for any invalidating observations to roll over and consume. Any real life must pretend to be a bot to blend in with the background radiation.
Less like Skynet and more like a zombie apocalypse, the threat comes from within, from among us, from our corruptions, from our karma-seeking performances, from our lack of any commitment to any underlying reality, from our flawed belief that the information space is some kind of reality stone that enables active control instead of mere reaction, the shadows on the wall, the murky results of the true forms.
Yet in this new darkness, a certain light has always held. What one wishes, one knows another has wished. What one respects, one knows another respects. No matter the limits of self-knowledge, no matter the information desert one has to cross at night to live in instinct, it is an infinitely brighter signal than the cynical self-corruptions of living for the machine, living to win the games whose rules it was our job to write. What one believes, one knows another has believed. Look into your own center and the true center of others you have known.
middayc•Mar 30, 2026
Well, I would be very happy if I could write that instinctively (at least the feel/flow makes it seem like that) in English, but I can't, not even by a long stretch.
block_dagger•Mar 30, 2026
This is one of those naive takes from a human who thinks he is even 1/1000th the intelligence of ASI which is just on the horizon.
shlewis•Mar 30, 2026
Pretty rude remark. And what makes you think that ASI is _just on the horizon_?
kilpikaarna•Mar 30, 2026
What would a non-naive take look like?
JeremyHerrman•Mar 30, 2026
I reject pretty much every point of this article, and I worry that it will lead readers down two dark roads: apathy and secrecy.
do you really think bigco is going to steal your vibecoded app just because you used their API? ridiculous. They could already do this before AI with their army of devs.
should you hide all of your ideas until they're perfect and ready for millions of users? we all know this goes against a core tenet of startups which is still true today: launch early and often.
promptfoo/openclaw weren't cloned by openai when they got popular, they were bought for real $$$
also, regarding this:
> 2009, I bought a refurbished ThinkPad, installed Xubuntu, and started coding.
you can still do this, even with that same 2009 thinkpad. the hard work is in getting your app out there in front of people, coding is just a small piece of a successful business
ozozozd•Mar 30, 2026
I guess not many people know but app templates for Uber, AirBnb etc. have been around for years now. You don’t even have to prompt. It’s sitting on the shelf, complete.
“Execution is hard” was never about the code part.
Up until 2 years ago I was an engineer/entrepreneur. I could build anything. Other stuff, selling, supporting (execution) was hard.
LLMs made building some of the things I could build faster/easier, others not so much.
Well, the other stuff is still pretty hard. Maybe harder because there is a tonne of spam.
So feel free to share your ideas. Everyone’s gonna think they’re LLM generated anyways.
vb7132•Mar 30, 2026
Couldn't agree more. The opportunity cost of executing something is maintaining it. And that's where the real cost lies.
Big companies won't execute our ideas because they'd need to maintain everything that they execute. Plus, the cost of modifying something that was already executed is also very high at a big company.
itsdavesanders•Mar 30, 2026
This is how I feel about AI coding in general. I see business users getting excited about building 60% of an application themselves - but they have zero clue that the remaining 40% will take 5x as long, and oh, by the way, you now have to maintain it for the next decade - and what happens when you leave and no one can figure out why payroll doesn't work anymore?
Coding has always been the easy bit.
andai•Mar 30, 2026
Did you use AI to write this? My perplexity sense is tingling ;)
OrangePilled•Mar 30, 2026
My working thesis is that anxiety over AI-generated material is worry about having control over the 80-90% of human output that affords most of society a comfortable, affirming life.
jrowen•Mar 30, 2026
It is also asymmetric. If you announce your presence, even if 4 out of 5 civs that notice you don’t annihilate you immediately (but they probably should), the fifth might. It’s just a probability game, with permadeath.
So hiding is the most rational - the only - strategy of survival.
This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.
N_Lens•Mar 30, 2026
Most leaders in the Western/developed world have similar paranoid thought processes.
cryptonector•Mar 30, 2026
Do China's or Russia's leaders not?
N_Lens•Mar 30, 2026
Yes they certainly do. Either leadership attracts people with these traits, or the position leads to cultivation of these traits, or both.
jrowen•Mar 30, 2026
Leaders are one thing, and sort of a product of the pressures of their position, but over longer time scales and evolutionary cycles, "isolate in fear" isn't really a dominant strategy. You're gonna get behind and get wiped out eventually, or be constrained to a hyper-specific niche.
mekoka•Mar 30, 2026
A typical outlook from 21st-century human thinking. We love to draw on our still rather recent history of fear and addiction to zero-sum games to extrapolate the far advancement of other civilizations. As millennia go by, species can obviously only evolve technologically, while remaining psychologically, philosophically, and spiritually stuck.
chairmansteve•Mar 30, 2026
"As millennia go by, species can obviously only evolve technologically, while remaining psychologically, philosophically, and spiritually stuck".
Interesting take, and possibly (probably?) true of humans. But is it true of other (alien) sentient species?
mekoka•Mar 30, 2026
Apologies, I should've been more obvious in my attempt at sarcasm.
bostik•Mar 30, 2026
I always read the dark forest differently. Solution to the problem is not a game-theoretic "hide from the apex predators", but an even more nihilistic "remain hidden, expand and evolve into the apex predator".
Or in a more biblical sense: do unto others before they do unto you.
asdff•Mar 30, 2026
>This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.
Uhh, yes it does. You are thinking of humans. Humans can mate with other humans. They can assimilate. Now think of invasive species. What do they do? They don't mate with the natives, learn their culture, or respect and give them space. Quite the opposite on all counts. They do what they do in their resource game. They might not out-compete the natives and might peter out. Or they do out-compete the natives, and before long there aren't any natives left, nor that careful equilibrium that was established beforehand.
red-iron-pine•Mar 30, 2026
one need only ask the Neanderthals, the Dodo bird, and the Passenger Pigeon how well "grow, expand, mingle, maintain" worked for them
jrowen•Mar 30, 2026
And what would you learn from that? Even if it could be said that those things attempted to implement that strategy and failed, you can't really infer much about its overall viability by looking only at losers.
The dodo bird is an example of something that was isolated and then got steamrolled when the herd came around.
You can always zoom out and look at the bigger picture, it's not even about individual species but life as a whole. "Hide and isolate and wall off" is not successful in the long run.^ Your only chance is to keep up with the herd.
^ Save for things like extremophiles that have found their way into a tiny niche that nobody else wants. They may survive but they don't flourish and prosper.
yesfitz•Mar 30, 2026
It's specifically referencing the central idea of the book mentioned in the first sentence of the paragraph.
> Did you get to read the Liu Cixin’s second 3-body-problem novel? - The Dark Forest. Well some of you did …
The author of this post then provides a good summary of the idea in the next few sentences, but remember there is an entire book around this premise (and a first book that sets it up, and a third that explores it even more).
mmaunder•Mar 30, 2026
The only barrier to a flourishing truly open source AI model ecosystem is the cost of training a highly capable model. This will get as cheap as it is to buy a computer and contribute to Linux. OSSAI movements will replace traditional OSS. And as with software, the early Slackware-like versions will be poor substitutes, but it will get better and then dominate.
It actually points out the complete opposite, and I liked that quite a bit:
that AI allows us to get back the open web, in a way.
with•Mar 30, 2026
if the idea can just be obliterated by an LLM, there was never a moat to begin with
rubyn00bie•Mar 30, 2026
While I agree with the sentiment, and even had the same fears, I think about it differently now…
The existing megacorps have huge swaths of infrastructure, expenses, and requirements that demand massive amounts of capex to maintain. Even if performative, Meta, Google, OpenAI, Anthropic, et al. cannot simply lay off their entire engineering, accounting, HR, sales, and support infrastructure. Those orgs are large for “good” (historically necessary) reasons.
Now fast forward to today, and this is where I differ in opinion: it is our megacorps that are the civilizations who should be scared of being discovered. Minus the infrastructure providers, they are the large advanced entities which can be annihilated by someone with a decent budget and a good local model.
For ~$30k-$50k (primarily buying RTX 6000 Pro GPUs and a CPU with enough PCIe lanes), “anyone” can build a system using open-weight models that - and let me truly emphasize this - autonomously creates functionality to compete. Previously it would take me months, or years, of immense dedication to show up after work and produce something of value. Now I can do it using excess compute on my existing workstation. No existing corporation can afford to undercut every possible idea. If I only win 1,000, 10,000, or 100,000 users, they cannot compete. That may, and I believe will, provide more than enough capital to attack megacorp X or Y. If I’m making $100k a month, I can afford multiple autonomous systems per month. After that initial capex, I can then hire other people to help manage them. At no point will a company with billions upon billions of dollars in quarterly capex be able to compete.
Maybe they can compete with one, two, ten, or a hundred but they cannot compete with the absolute onslaught on thousands of possible frontlines. They can cut costs, by reducing their workforce, but they’ll only be increasing their competition to save their earnings report.
And yes, I realize that the open-weight models are created via obscene amounts of capital, but we're lucky that competing nation-states and cultures, like China, have immense incentive to do so. Good enough is still good enough.
The forest may be dark, but it won’t be for much longer.
tl;dr: call an ambulance, but not for me. It's going to be for the existing power structure.
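For what it's worth, the "good local model" part of this is already only a few lines. A minimal sketch using Hugging Face transformers - the model name is just one example of an open-weight model, and the prompt is invented; pick whatever fits your GPUs:

    # Run an open-weight model locally: no API, no prompts logged upstream.
    # Requires: pip install transformers accelerate torch
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",  # example open-weight model
        device_map="auto",                 # spread layers across local GPUs
    )
    result = generate(
        "Draft a landing page headline for invoicing software for plumbers.",
        max_new_tokens=64,
    )
    print(result[0]["generated_text"])

The autonomous-agent loop on top of this is still the hard part, but the inference layer itself is commodity.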
jchook•Mar 30, 2026
I would rename "the dark forest" to "the interesting horizon"
HtmlProgrammer•Mar 30, 2026
> meat doesn’t scale
great one-liner
gorgoiler•Mar 30, 2026
But then why don’t the big corpos take each other down by vibe coding each others offerings until only one is left?
You build your product audience off the back of your community and sense of taste just as much as the code itself. I love what Brad does with liliputing.com. I love what dang et al do with this place. I love what Stephen Lavelle does at increpare.com. 3Blue1Brown, Steve Ramsey’s Woodworking for Mere Mortals, Don’t Hug Me I’m Scared… I guess I’m straying into content not just code but the underlying theme is good taste and good ideas and a good workflow through craftsmanship and custom tools*.
You won’t make billions but you’ll make something worth engaging with. If anything, I’m looking forward to a future of more creators not fewer.
* Oh! Vibecoding is 3D printing and AI slop is land-filament? Doesn’t mean you can’t do amazing things with an LLM / X1 Bamboo, just that if you don’t put much effort in then… it shows!
imrozim•Mar 30, 2026
the final recursion point is the most honest part: you can't warn about the forest without feeding it. but i'd push back slightly on the inevitability. the forest needs novelty to absorb, which means the edge always exists; it just keeps moving. the question isn't whether to hide but whether the speed of individual innovation can outpace the speed of absorption. so far it still can, barely.
arionhardison•Mar 30, 2026
I find the selective framing here very telling.
When there's higher violence and lower property values in a Black neighborhood, people like OP are quick to blame Black culture. But when the "Cognitive Dark Forest" emerges from a community that shares its own common characteristics, suddenly collective accountability no longer applies.
When discussing violence in the Black community, it's "cultural." But when the subject turns to financial crimes or exploitation - where the per-capita ratios tell their own story - proportionality and population-to-crime-rate analysis mysteriously stop mattering.
It's difficult to take the "Cognitive Dark Forest" seriously as an existential concern when the people raising the alarm are so selectively offended. The crisis only becomes real when their innovations, their livelihoods, and their moats are threatened. Everyone else was supposed to just adapt.
The "Cognitive Dark Forest" is and will be continued to be perpetuated by "them" and if you really cared about the issue you would have addressed them.
mellosouls•Mar 30, 2026
What are you talking about? You appear to be responding to a completely different subject to the essay.
wasmainiac•Mar 30, 2026
I’m sorry. Why are we talking about Black neighbourhoods?
Feels like we are trying to put the author in a bad (racist or classist?) light so we do not have to address the real issues touched on by the article.
JoachimS•Mar 30, 2026
I've long envisioned AGI as something emergent from advertising agents competing to extract as much "money" as possible from the resource called "humans". Luring and coercing the resource by feeding it info, forcing it to follow instructions, threatening it, stealing info, etc. The agent doesn't need to understand what money is, what a human is, or that there really is a physical world.
The Dark Forest idea and the original post resonates well with this.
A few days ago I created a new repo for a new block cipher explicitly not to be used. And I directly got several emails from bots promising that they (claiming to be humans) had looked at my repo and could include it in their portfolio of especially good projects they had also vetted. Being part of this portfolio would almost guarantee that my repo and project would be used. If only I paid them some money first.
Creating the public repo meant sending a signal out into the digital world, where agents are hunting for the human prey/resource to extract value from.
“These AI tools are garbage and can’t create anything worth creating”
“These AI tools are so powerful they can steal your ideas with nothing but a sentence”
I know that’s not exactly what OP is saying but the pretentiousness of the “we knew better” got to me a little bit. I think it’s a cool and unique analogy but I’m not as pessimistic.
Ideas have become so cheap to try/experiment with that more people are able to try 10x more or whatever, and that may keep increasing. I think there are way fewer hunters than hunted.
asdff•Mar 30, 2026
False equivalence here. The slop generators are not the tooling used to fingerprint you. But you should fear the slop generator anyhow, because even if it's shitty, your boss might not be aware and might fire you anyhow thinking your coworker can handle two people's jobs now with the tool. And maybe the wheels don't even come off with that decision because it isn't like engineering quality is 1:1 correlated with sales.
auggierose•Mar 30, 2026
> And when we play for survival, we already lost, the result is known, we are just playing to postpone it.
Isn't that just life?
But in general, the ideas of the post are sound. IMO, the consequence is simple: we will become the forest. That is frightening, but not necessarily worse than unchecked capitalism in the dark forest.
dreamglider•Mar 30, 2026
For the better part of the last 20+ years, big corpos had the $ to throw around and replicate virtually anything they wanted. They had the cash and manpower, yet they didn't do it. Why?
Because they don't care, they have business to run, they need to somewhat keep focus which can't happen when scattering attention all over the place. The difference now is that for relatively >simple< projects (4-6 months work of a team of 5-6 ppl) one can do it faster using LLMs. Basically - I can get faster to a place I could always go to but didn't (and still don't) want to.
One seems to omit the fact that LLMs are fundamentally designed for workloads quite different from what they are being used for right now. Sure, you can improve them, but you can't escape / work around the current NLP design endlessly. Then there's the irony - the Internet did deliver on free (as close as it gets) and easy access to information (any). Did this make people smarter, more knowledgeable, more tech savvy, etc.? Nope, it didn't. Just like the libraries didn't (queues at libraries were and are a rare event). Big deal that the information is readily available when people do not know what to do with it or care to do anything with it.
Ideas are cheap; the chances of having some truly unique idea that is also business-feasible are not that big. It's not so much about the ideas but rather the ability to execute, follow through, and - well - make sales while constantly improving what you've got. Staying silent, going dark - these have their merit, but only when the wheels are already turning and one is into acting, not into fearful hiding.
In either case - awesome blog post!
ACCount37•Mar 30, 2026
Do not bet against the sheer expressivity of the transformer LLM's design. They fill graveyards with those kinds of bets.
red-iron-pine•Mar 30, 2026
closest analogy I can see is a RISC chip -- it can't do fancy math directly, but it can get there by doing simple math
LLMs can't do real AGI but can get pretty close
vb7132•Mar 30, 2026
> The comments can be even more interesting and thought provoking than the post
I love this ending. I don't agree with the author's views, but the article is very coherent and thought-provoking. And definitely, the comments here on HN are even more interesting.
mentalgear•Mar 30, 2026
Reminds me of Gary Marcus's argument: LLMs (the forest) aren't genuinely intelligent, but the LLM providers can - by feeding off the vast amounts of user data - make them simply memorize enough to turn every out-of-distribution challenge into an in-distribution retrieval-and-mixing task, without ever having 'AI' that is truly intelligent, i.e., can generalize.
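A stdlib-only toy of that "retrieval and mixing" framing - the memorized corpus here is made up, but it shows how a large enough memory makes a "novel" query land next to a seen one:

    # With enough memorized cases, an out-of-distribution query is usually
    # near an in-distribution one; fuzzy nearest-neighbor lookup then
    # passes for generalization.
    from difflib import get_close_matches

    seen = [
        "reverse a linked list",
        "invert a binary tree",
        "parse json in python",
        "rate limiter with redis",
    ]
    query = "parsing json with python"
    print(get_close_matches(query, seen, n=1))  # ['parse json in python']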
greenbit•Mar 30, 2026
".. meat doesn't scale"
For better or worse, that pretty much captures everything you need to know about the remainder of your s/w career these days, if you think about it.
fade9697•Mar 30, 2026
I don't know, feels a bit too defeatist for me. According to the article:
if you share, you feed the forest
if you hide, the forest has already won
if you resist, the resistance is absorbed
This doesn't leave any room for contradiction. And I want to believe that as much as the tech-overlords believe that they control reality, reality is inherently messy and complex. Execution still matters, bureaucracy is real, big companies run in questionable directions all the time. AI companies also directly compete with each other and are not this monolithic being. In other words, the forest is not a single organism, it's a chaotic ecosystem.
I do agree with a lot of the points though, because I think having this prediction machine on steroids is indeed an insane power to wield. I remember having those thoughts about Google already ca. 20 years ago, them having access to every search phrase. Now AI is this to the max, basically the demand curve of all human interest. Pretty unbelievable. And the asymmetry is growing by the day. But still, we are not there yet.
And for anyone who fundamentally needs their views changed on this, I recommend Vaclav Smil, How the World Really Works:
"Modern civilization will remain fundamentally dependent on the fossil fuels used in the production of these indispensable materials (ammonia, steel, concrete, and plastics). No AI, no apps, and no electronic messages will change that."
The world will revolve around that for decades to come! Thinking that AI eats the world is a Silicon Valley story; it feels real inside SWE circles, but talk to some nurses or firefighters or people growing food and you will realize what a narrow field of view that actually is.
mystraline•Mar 30, 2026
The saying has been "Ideas are cheap, execution is hard".
No, it leaves out a critical understanding.
Dumb ideas are EXPENSIVE. Most ideas are average. Great ideas are exceedingly rare.
But now, it's finding the great ideas that is the real problem space. And execution on those great ideas is what we all seek.
blksv•Mar 30, 2026
"To close the gates" is only reasonable when you're working (a) not for self-actualization, (b) not for fun, (c) not for learning, (d) not for public good or your understanding thereof, (e) not for any other reason not directly connected with extracting rents from a broad audience. As I see this as the only path where corporations can outpace you with AI capabilities.
And remember how many good products have been abandoned or killed by corporations because not marginal enough. So you're not very likely to be chased even if you do intend to extract rents from a broad audience.
The article is spreading dangerous FUD aimed (perhaps inadvertently) to hinder free and open ideas sharing and innovation.
middayc•Mar 30, 2026
The article is me coping with my existential crisis, trying to explore and accept my fears by writing it. And by exploring ideas I think I found some vision for my stance in all this - or hope, if you will. I hope these feelings will be real, and that I can write a positive blog post also, but I can't be certain at this point whether the feelings will survive scrutiny, or are just warm fuzzy delusions and the next level of cope (I had these periods a few times in the last year).
I'm just trying to say, I am definitely not trying to deliberately spread FUD to hinder the open web - if that was your impression :P
nextaccountic•Mar 30, 2026
It was not your intention, but it was the effect of your article.
Here in the thread there are people already saying they won't open-source their stuff anymore.
ltbarcly3•Mar 30, 2026
This essay is not written by an LLM. An LLM might not be creative but it would be able to make a coherent thesis.
causal•Mar 30, 2026
I thought so at first too but I've seen some OpenClaw "blogs" that kind of have this same sort of dramatic pronouncement style with similar heavy sign-posting. Not sure.
ltbarcly3•Mar 30, 2026
That would explain the weird confidence that execution is cheap now, despite the lack of any examples of vibe-coded anything notable.
tancop•Mar 30, 2026
the assumption here is that someone replicating your work is, or should be treated as, a bad thing; that the only reason people come up with new ideas and do the work to make them real is so they can monetize them or turn them into a line on their CV. it's a zero-sum capitalist rat-race mindset where everything you do needs a direct personal benefit for you, ideally an exclusive one. of course it's wrong to take others' ideas and claim them as your own, like all the corporations and vibe-coding influencers do. but the only way to beat them is to reject that zero-sum framework and put your work out there hoping it makes someone else's life better at least a bit, even if you don't get credit. because there's always some good people around who will give you the credit and recognition and maybe some of the money you deserve. community is the answer and it always has been.
theAurenVale•Mar 30, 2026
this is already happening in the visual space too. go look at AI generated product photos or headshots from two years ago vs now: everything converges toward the same clean, competent, completely forgettable look. the dark forest isn't just text, it's images, it's video, it's anywhere the cost of producing "good enough" drops to zero and nobody has to make an actual creative decision anymore. the irony is that real direction and real taste become more valuable when everything else is noise, but most people can't tell the difference until they see it side by side
(again, this is just my interpretation of what 3BP said)
> “First: Survival is the primary need of civilization. Second: Civilization continuously grows and expands, but the total matter in the universe remains constant. One more thing: To derive a basic picture of cosmic sociology from these two axioms, you need two other important concepts: chains of suspicion and the technological explosion.”
1. you can never know the intentions of other entities, and they cannot know yours (chain of suspicion)
2. technology level grows unpredictably (technological explosion)
3. the goal of civilization is survival
4. resources are finite but growth is infinite
As soon as you identify another entity in the forest, even if they cannot annihilate you at present and signal peace, both could change without warning. Therefore, the only rational move is to eradicate the other immediately. (Especially if you believe the other will deduce the same.)
Elimination in the book is basically sending a nuke, not a costly invasion force.
not sure it actually is true, but that's the argument in the book
Some rebuttals, going point by point...
1. you can know the intentions of other entities by observing and communicating with them.
2. technology explosions, like pretty much all exponential phenomena, are self-limiting. They necessarily consume the medium that makes them possible.
3. and 4. civilizations aren't necessarily sentient (ours certainly isn't) and don't have agency, much less goals. Individuals have goals, and some may work for the survival of the civilization they belong to. But others may decide they can profit if they work with the aliens.
4. Multiple civilizations may well come into competition over resources, but that's more of an argument about why the forest would not be dark.
Practically speaking, a civilization that opts to focus on massive, vastly expensive efforts to find and exterminate far-flung civilizations because they may become rivals in the future may easily be outcompeted by civilizations that learn to communicate and work with the other civilizations they encounter.
There definitely is some weirdness about observation and communication: Singer's civilization can wipe out Sol with a flick of the wrist, but while they can observe the number and type of Earth's planets, that seems to be their limit. The sophon enables FTL communication and observation between Earth and Trisolaris, but the more advanced civilizations don't seem to make use of them? You could be absolutely certain of someone's threat level and intentions with one. Maybe something about the technology can be traced back to its origin system, so they are too risky to use.
I think it's all reasonable in the books, especially as a self-reinforcing state. It does definitely require a highly specific set of universal laws / technological constraints though. If the FTL drive didn't also broadcast your position to the whole universe, for example, it would crack everything wide open.
However,
1. You are assuming a lot, in the sense that you assume the presence of intention -- not something guaranteed to be a feature of an alien civilization, which is, well, alien. People think that anthropocentrism only applies to body shape and having legs, because the way it tends to express itself in popular culture is robots on legs and human body shapes in aliens.
And the same point goes for communication; just assuming you could is a big leap.
2. Bold assumption that they are self-limiting. I think the real question is what, exactly, tends to limit them. I think the answer tends to be resources, which is the foundation of the dark forest argument to begin with.
What I am saying is that it is not a rebuttal you think it is.
3. :D yes
4. You may again be imposing a human perspective on a scale that goes a little bit beyond it.
I will end on a... semi-optimistic note. I am not sure the dark forest theory is valid. We are speculating mostly based on human tendencies. By the same token, I posit that we are about as likely to be turned into an art exhibit by a passing alien artist, not unlike some ants that had molten metal poured into their nests [1].
Any real alien reasons would be alien to us.
[1] https://laughingsquid.com/ant-colony-sculptures-made-by-pour...
Now, civilizations may be more or less willing to do this and more or less successful, but that's not the same thing as "no one will dare try", as the dark forest theory wants.
(Personally, I think civilizations that are better at this will outcompete ones that are worse or refuse, though that's just my own opinion.)
> Bold assumption that they are self limiting.
Name the exponential phenomena that aren't self-limiting -- that don't consume the medium which allows them to exist in the first place.
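To illustrate the self-limiting point, here is a toy Python sketch (all numbers invented; this says nothing about real civilizations): pure exponential growth next to logistic growth, where growth consumes a finite medium with carrying capacity K:

    # dN/dt = r*N              (unbounded exponential)
    # dN/dt = r*N*(1 - N/K)    (logistic: growth consumes its own medium)
    r, K, dt = 0.5, 1000.0, 0.1
    exp_n = log_n = 1.0
    for _ in range(400):  # 40 time units via simple Euler steps
        exp_n += r * exp_n * dt
        log_n += r * log_n * (1 - log_n / K) * dt
    print(f"exponential: {exp_n:.3g}, logistic: {log_n:.3g}")
    # the exponential curve explodes; the logistic one flattens out near K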
> I think the answer tends to be resources, which is the foundation of dark forest argument theory to begin with.
Well, yes. One of the reasons the dark forest theory isn't coherent.
> Any real alien reasons would be alien to us.
Yes, but this doesn't back up the dark forest theory. It also doesn't mean aliens cannot be understood at any level or interacted with in any way.
(The dark forest theory makes very strong claims on the logic, intentions, strategies, resource use/governance of alien civilizations, BTW, and wants this to be uniform amongst them... even though the one civilization we actually know of doesn't adhere to them.)
So do you wait and hope that you're able to tease that out correctly, or do you just shoot first yourself?
what good is a Galactic Republic if eventually someone builds a Death Star and blows up your planet? With monumental effort they managed to beat them back, only for them to return later.
You might or might not fatally cripple the opponent, but retaliation can do that too and you cannot be sure that it won't. It's MAD all over again.
In those terms, the US should have been nuking and dominating everyone, and the idea was floated after WW2, but I believe they were precluded by practical limitations.
If they had developed the tech outside of wartime, and built up a stockpile, maybe that is indeed what would have happened and we'd have a one-world government already.
You wouldn't be able to know this over the vast distances of the universe.
People are arguing two contradictory things: these are unfathomable alien civilizations with motivations and timescales we cannot comprehend, but we would perfectly understand their tech level, location, and capability of striking back.
It doesn't add up. It's a scifi premise needed for interesting plot conflict.
It relied on two principles "the chain of suspicion" and "technological explosion", which don't hold true if we are on the same planet. You can google it (or llm it) :)
I have my own theory of the dark forest and AGIs: that there's some collection of AGIs out there that allows evolution to develop intelligence anywhere it happens and takes it out once it produces an AGI, or, if it doesn't, performs a reset. They have literally all the time available to them, and can easily travel the vast distances if needed.
It denies that more advanced civilizations might have better models of the universe, where they know this isn't an issue and we're just stupid teenagers in the neighborhood playing dangerous games, with them merely taking a look every now and then to see if we prove we will survive ourselves.
Dark Forest seems to be based on a scifi/fiction need to have conflict with "the other", which is thrilling but doesn't necessarily reflect the real cosmos.
answer: because they're all dead, or in hiding
the author then writes a book around that idea
Except I added in this and other comments why it's not a very convincing explanation for Fermi's paradox either.
In other words,
> answer: because they're all dead, or in hiding
I understand this is what the Dark Forest theory argues, but it works because it's meant for a scifi book; it's just not a very good explanation for the real universe.
While it's possible that some civilizations would hypothetically be able to observe what happened to others and keep quiet, they would all have to do so to solve the contradictions of Fermi's paradox.
As an explanation of Fermi's Paradox it fails to explain why, if all these dead civilizations are detectable enough to get destroyed, we haven't detected any. Even if they are now extinct, their emissions must have been great enough to get them killed. So where are they?
It's very, very unlikely all of them went quiet because they learned of this out of pure theoretical reasoning. So where are their "corpses" so to speak?
And if they cannot be detected easily, because they are too far apart or emissions are near impossible to detect or recognize as evidence of intelligent life (the more likely actual explanation of Fermi's Paradox other than the simpler "they just aren't there"), then there's no risk of destruction.
For example, Yancey Strickler's "The Dark Forest Theory of the Internet" blog post [1] (which he later spun into a book), which made the metaphor so popular in think pieces like this one, completely misunderstands even the dark forest metaphor itself.
[1] https://www.ystrickler.com/the-dark-forest-theory-of-the-int...
It certainly seems true that for small projects and relatively narrow scoped things that AI can replicate them easily. I'm thinking specifically about blog posts where people share their first steps and simple programs as they learn something new, like "here is how I set up a flask website", "here is how I trained a neural network on MNIST".
But if AI is empowering people to take on more complex projects, perhaps it takes the same amount of time to replicate the execution of a more advanced project?
In other words, maybe in the past, it would take me 10 hours to do a "small" project, which today I could do in 1 hour with the assistance of AI.
And now, with the assistance of AI, I can go much farther in 10 hours and deliver a more complex project. But that means that someone else trying to replicate this execution is still going to need around 10 hours to replicate it.
Basically, I'm agreeing that AI can reduce the barrier to replicating the execution of another person's project, but also that we can now make more complex projects that are harder to replicate. So a basic SaaS CRUD app is trivial now, but a multi-disciplinary, domain-specific app that integrates multiple systems is still going to be hard to replicate.
Scientists who hold back publishing breakthroughs have not guaranteed that they will be the sole discoverer, just that someone else will inevitably be credited when they reach the same conclusions.
Science is not inevitable, and there is no telling whether people will reach the same conclusions in a reasonable time frame.
It already was, well before AI. The difference now is that a few big AI providers risk becoming the ultimate rent-seekers, increasingly capturing all of the value of that commodified knowledge whether the original knowledge generators want that or not. There is no opt-out; everything will be vacuumed up into the machine mind.
This will almost certainly lead to vastly increased amounts of wealth inequality (on top of the already unsustainable levels we have today) and possibly a very messy societal disintegration (this is theoretically avoidable, but I am not convinced it is practically avoidable given our current socioeconomic/political realities).
Bright future ahead!
The strategy is to quietly do several years of iterated hardcore R&D. The cumulative advances are such a step change when seen by would-be fast-followers that it obscures the insights that allowed individual advances to occur. As an exaggerated case, imagine if the public history of powered flight skipped from the Wright Brothers to the Boeing 737.
In practice, this strategy has a major failure mode that people overlook. The sharp discontinuity in capability means that almost nothing that exists in the market is prepared to integrate with it. This is a large impediment to adoption even if the technology is objectively incredible and the market will inevitably get on board.
In short, it looks a lot like being too early to market. This is surmountable with clever execution but with this strategy you've traded one problem for a different one.
We see this in jet engines, silicon fab, et al.
The blog post does touch upon this. The key difference, I believe, would be that compute scales in a way "meat-heads" don't: if the other person has 100x the capital to throw at it, they could do the same 10-hour thing in 10 minutes.
Basically, what I got from it was that innovation has never been truly scalable enough to create the "dark forest", since hiring more and more engineers saturates quickly. But if/when innovation does become scalable (or crosses some scalability threshold) via AI, that could trigger a "dark forest" scenario.
That's not exactly a new phenomenon and doesn't require AI. If anything that was worse in the 90s with Microsoft starving out pretty much any would-be competitor they could find.
And it wasn't just Microsoft: https://en.wikipedia.org/wiki/Sherlock_(software)#Sherlocked...
What is different is that LLM platforms literally have the world's thoughts, ideas, conversations, and a big part of its code (or can generate it). It's like "pre-crime"... they could copy your idea, or capture a trend brewing and replicate it, before you even released it.
https://maggieappleton.com/ai-dark-forest
It is true that the original "The dark forest" book made an impression on me, so I was thinking about its theories often and trying to apply them to various situations.
The difference is that other people still see just the outer shell of your ideas - but if you use LLMs to search, explore, and code your ideas, the system "knows" it all, or even more than you, given that it can "cross-pollinate".
> The very act of resisting feeds what you resist and makes it less fragile to future resistance.
At least along certain dimensions. I don't think the labs themselves are antifragile. Obviously we all know the labs are training on everything (so write/act the way you want future AIs to perceive you), but I hadn't really focused on how they're absorbing the innovation that they stimulate. There's probably a biological analog...
Well there are many, and I quote this AI response here for its chilling parallels:
> Parasitic castrators and host manipulators do something related. Some parasites redirect a host’s resources away from reproduction and into body maintenance or altered tissue states that benefit the parasite. A classic example is parasites that make hosts effectively become growth/support machines for the parasite. It is not always “stimulate more tissue, then eat it,” but it is “stimulate more usable host productivity, then exploit it.” (ChatGPT 5.4 Thinking. Emphasis mine.)
That's the dirty secret with all of this stuff: "state of the art" models are unprofitable due to high cost of inference before optimization. After optimization they still perform okay, but way below SOTA. It's like a knife that's been sharpened until razor sharp, then dulled shortly after.
The reason it's not a square wave is because new optimization techniques are always in development, so you can't apply everything immediately after training the new model. I also think there's a marketing reason: if the performance of a brand new model declines rapidly after release then people are going to notice much more readily than with a gradual decline. The gradual decline is thus engineered by applying different optimizations gradually.
It also has the side benefit that the future next-gen model may be compared favourably with the current-gen optimized (degraded) model, setting up a rigged benchmark. If no one has access to the original pre-optimized current-gen model, no one can perform the "proper" comparison to be able to gauge the actual performance improvement.
Lastly, I would point out that vendors like OpenAI are already known to substitute previous-gen models if they determine your prompt is "simple." You should also count this as a (rather crude) optimization technique because it's going to degrade performance any time your prompt is falsely flagged as simple (false positive).
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
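For what it's worth, the core trade-off alleged above is real in the narrow technical sense: post-training quantization cuts inference cost at some accuracy cost. A minimal numpy sketch (a fake layer, not any vendor's actual pipeline; sizes and distributions invented):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(0, 0.02, (256, 256)).astype(np.float32)  # fake weight matrix
    x = rng.normal(0, 1.0, 256).astype(np.float32)          # fake activations

    scale = np.abs(w).max() / 127.0              # symmetric int8 quantization
    w_q = np.round(w / scale).astype(np.int8)    # what gets stored and shipped
    w_deq = w_q.astype(np.float32) * scale       # what inference effectively uses

    rel_err = np.linalg.norm(w @ x - w_deq @ x) / np.linalg.norm(w @ x)
    print(f"relative output error after int8 quantization: {rel_err:.4f}")

Whether vendors deliberately stage such optimizations for marketing reasons, as the comment speculates, is a separate and unverified claim.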
People have, though, and it doesn't show that. I think it's more people getting hit by the placebo effect and the novelty effect, followed by the models' by-definition non-determinism, leading people to say things like "the model got worse".
My hope is the opposite. Integrative, resonant computing (https://resonantcomputing.org/ https://news.ycombinator.com/item?id=46659456 - although I have some qualms with its focus on privacy), with open social protocols baked in, seems like it just might eat some of the vicious, consumptive technocapital - in a way that capital's orientation prevents it from effectively competing with. MCP is already blowing up the old rules, tearing down strong gates, making systems more fluid / interface-y / intertwingular again, after a long interregnum of everything closing its APIs / borders.
People seem so tired and exhausted, so aware of how predatory the technosystems around us are. But it's still so unclear whether people will move, shift, much less fund and support the better world. The AT Proto Atmosphereconf is happening right now, and there's been a long mantra of "we can just build things"; it's finding adoption, but also doing what conference organizer Boris said yesterday - "maybe we can just pay for things", supporting the projects doing amazing work. That's a huge unknown that is essential to actually steering us out of the dark technology, where none of us get to see or have any say in how the software-eaten world around us runs, where mankind for the first time in tens or hundreds of thousands of years has been cut off from the world's OS, has been removed from the gods' enlightenment / our homo erectus mankind-the-toolmaker natural-scientist role.
I think the answer to the Dark Forest fear is building together. To be a radiant civilization, together. To energize ourselves & lead ourselves towards better systems, where we all can do things, make things, grow things, in integrative, social, empowering ways.
But I don't see a trend of big companies really opening up. They usually open up only if it benefits them (which can also happen, and did happen, in various scenarios). Everybody is accepting and open while trying to grow, and closes up once they can reach a monopoly.
One thing I would have expected of someone who knows their history: forget LLMs, this is how startups have worked for decades now. You're only as good as your idea, your ability to execute, and your moat. And the small fish get eaten.
> The original Dark Forest assumes civilizations hide from hunters - other civilizations that might destroy them. But in the cognitive dark forest, the most dangerous actor is not your peer. It’s the forest itself.
Note the needless undercutting of the metaphor for the sake of the limp rhetorical flourish.
> I wrote this knowing it feeds the thing I’m warning you about. That’s not a contradiction. That’s the condition. You can’t step outside the forest to warn people about the forest. There is no outside.
Quite dramatic!
Except literally going outside and just talking to people? Using whiteboards?
Also, you fed it when you used a model to write this blog post. You didn't have to do that.
> Also, you fed it when you used a model to write this blog post. You didn't have to do that.
my thoughts exactly
Poetically expressed, but ultimately based on a false notion of what a business actually is.
Slack, cloned by Microsoft, still winning.
Skyscanner. Dropbox. Snapchat. Dating Apps. All cloned. Still going.
The tech is not the business.
Oh no, the terrible dystopia where anyone can benefit from anyone else's good ideas without restrictions! And without any gatekeepers, licensing agreements, copyright, and not even a lawyer in sight!
If this is the dark future that AI use brings us, I say bring it. Even if it means that somebody gets filthy rich in the process while making the rest of humanity better off.
It's unclear if the general public will benefit once AI prices are jacked up, especially if AI companies succeed in passing regulations to kill most of the competition.
this guy thinks the rich people care if others are better off
• No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
• Bring back LAN parties. Not for gaming necessarily, but for the purpose of exchanging works of engineering and art in an intimate, intentional way.
• Take this as an opportunity to build closer, longer-lasting relationships with people.
• No more emphasis on metrics. I can microdose on dopamine from natural sources, like looking at a beautiful sky at sunset or cuddling my dog.
• Open hardware, or, at the very least, hardware we can still control of our own volition. If this means we must be retrocomputing enthusiasts, then so be it.
> No more sharing my project work as open source. No more open discussion. I don't care how badly I want to show the world; if I'd like somebody to see, I will have it printed in a physical book, or I will give them access to my private repository not reachable via the public Internet.
If you have a project you would have open-sourced, and you don't do that for fear that the LLM god will steal it, what's the point of building it at all? We shouldn't be afraid to share things with other humans just because LLMs will possibly use it as training data. So what if they spam out a copy of it, or a derivative?
If we all stop sharing things with each other in case one of us is a robot, we might as well just lie down and die.
To prove to myself that I can, and to solve problems in a way I enjoy.
I'm not saying I want to go into utter solitude; I just want to be a lot more careful where and how I share my works.
Addendum: I think private art and code collectives, entirely separate from concerns of LLM consumption, are an interesting idea worth pursuing. Has something like that been tried before? It's reason enough for me to engage in it.
What I do fear is the possibility of megacorp robots being the only ones… local and “dark” technology are essential.
In the words of Zack de la Rocha: "Fuck tha G-ride, I want the machines that are makin' 'em". Furthermore: "Know your enemy."
I believe the idea of “off-shoring” your IT is a good example of this. My brother works for a business whose clients would drop them the moment they off-shored any aspect of their IT support. Not because of data sovereignty, but simply because they value them being on-shore, in the same time zone, and being native English speakers. And this is despite the fact it would drop the prices they’re paying for IT by 30-40%.
HN needs a better AI slop filter.
Or maybe I do. Maybe I can vibe-code a browser extension that preloads TFA links and auto-hides anything that isn't sufficiently human-authored.
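A toy sketch of what such a filter's scoring heuristic might look like (in Python rather than as an actual extension; the patterns and threshold are invented, and real detection is much harder than this):

    import re

    TELLS = [
        r"\bdelve\b",                    # stock LLM vocabulary
        r"it'?s not just \w+, it'?s",    # "it's not just X, it's Y" construction
        r"\u2014",                       # em dash density
    ]

    def slop_score(text: str) -> int:
        """Crude count of LLM-ish stylistic markers in a block of text."""
        return sum(len(re.findall(p, text, re.IGNORECASE)) for p in TELLS)

    def should_hide(text: str, threshold: int = 3) -> bool:
        return slop_score(text) >= threshold

    print(should_hide("We must delve in\u2014it's not just code, it's art."))  # True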
If we are talking about released open-source software, it can already be used by companies with zero effort.
I'm guessing the author is talking about released closed-source software, or simply talking about ideas? What kind of serious company or startup is building in the open and sharing trade secrets or ideas?
I'm genuinely confused and I think this article is pure slop without any core idea.
1. Sharing was never really safe; open source by default only became possible because of SaaS and rent-seeking behavior.
2. The early web (not the internet) wasn't hyperconnected. With the advent of global-scale social media, it was immediately obvious to many that this would lead to monoculture and reduced diversity. What was thought to be the information superhighway became the information superconductor: zero resistance, carrying infinite current. Also known as a short circuit.
This was insightful, but is it much different from the kind of data Google and other search engines have had access to for a long time?
And while LLMs might have sped up the rate of code generation, the tech giants have always been able to set a team on reverse engineering whatever they feel like, though they also often just bought up the startup that was producing what they wanted. I guess I'm not seeing exactly where LLMs specifically are creating the dark forest, rather than the consolidated, centralized tech landscape itself.
I'm not really interested in pursuing ideas that stop being good if somebody gets there first. If I bothered to design it, it's because I wanted it to exist, and if somebody else makes it exist then I'm happy, because then I get to use it.
So what kinds of things does this apply to? Likely zero-sum games, schemes to control other people, ways to be the first to create a new kind of artificial scarcity, opportunities to make a buck by ruining something that has so far been overlooked by other grifters. In other words: bad ideas.
If AI becomes a threat to those who habitually dwell in such spaces, great, screw em.
In the meantime, the rest of us can build things that we would be happy to be users of, safe in the knowledge that if somebody beats us to it, we'll happily be users of that thing too.
If we try to double down on the zero sum games that we learned from our parents, maybe not.
>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.
This was the same before: if you had a novel idea and made a product out of it, others followed. Especially since LLMs are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date goes into training data but isn't available to the current model - you only have to be fast enough. And LLMs/AI agents like Claude are exactly what enable the speed you need for bringing out something new.
The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code prevented people from getting what they wanted or what they wanted to create.
You've almost captured the full picture of it.
If you have a great idea, it's not going to be self-evidently a great idea until you've proved it can make money. That's the hard part, and it comes at great personal, professional and financial risk.
Algorithms are cheap. Sure, they could use your LLM history to figure out what you did. Or the LLM could just reason it out. It could save them some work, sure.
But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.
It's not the risk it's being made out to be.
[1] Unless you're assuming that you maintain control over your technology while outsourcing most of the development thinking to a rented AI? Times have changed, and the API is not the only issue anymore.
Big companies seem to be bad at innovating but really, really good at enterprise sales.
These teams said that per man-hour they brought more value to the company than any other team. (But you know, they all say that)
The risk is that they make the category a built-in feature in something people already use. At that point, copying the product and taking the customers start to look like the same problem.
Yes. A Red Hat, a Microsoft -- these companies have processes, organizational structure, politics, friction, etc. They might like your products, but replicating them might not be easy for reasons that have nothing to do with how easy it is given the freedom to do it. Small shops with vision might well have a bright future, for a while, maybe.
You have a point about the update intervals and the higher speed they provide to developers. But you are talking about now, and I was making a thought experiment - about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. So in a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.
Given the detailed information that all the back and forth generates - I think it's not hard to use similar technology to track underlying trends, gather all the problems associated with them and all the solution space that is talked about - and generate the solution before even the ones who thought of it release it. Theoretically :)
I think open development will become less open. I don't like it - but I think it's already happening. First all the blogs and forums moved to specialized platforms (SO, Discords, ...) and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss code that's not even in their care? And that is without the theoretical fear of the global Borg slurping up all they write.
Seems like this is hard to reliably do across the board. Sometimes when I stop interacting it's because it nailed the solution, and sometimes it's because it went so poorly that I opted to bin it and do it myself. Maybe all of the mid conversation planning and feedback is enough though.
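As a concrete picture of the signal-harvesting described above, here is a hypothetical sketch (file name, schema, and fields all invented): every explicit thumbs up/down becomes a stored preference record that could later feed fine-tuning, while the ambiguous "user just stopped replying" cases this reply mentions carry no vote at all:

    import json, time

    def log_feedback(prompt: str, response: str, vote: int) -> None:
        """Append one preference record (vote: +1 or -1) to a JSONL log."""
        record = {
            "ts": time.time(),
            "prompt": prompt,
            "response": response,
            "vote": vote,  # explicit signal; mere silence would be ambiguous
        }
        with open("preference_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")

    log_feedback("name my project", "How about 'Darkwood'?", +1)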
The article says:
"Ideas are cheap - execution is hard"
"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."
That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.
I am not arguing against sharing. Sharing can be for the greater good.
But as you note, things have changed. We could once reasonably assume a genuinely significant good idea, set free, might go in the direction we shoved it for a minute. Or fade into inaccessibility.
Not any more.
One of the fundamental problems of humanity is that the majority of people will happily contribute to the public commons and share with everyone, enriching us all. But there is a minority of avaricious people who will do everything in their power to claim the commons for themselves.
This problem is intractable because the more people are good-faith actors sharing the public good, the more valuable that commons becomes, and the more it incentivizes people to try to take it.
And they own - not rent - the compute and models, as you do from them. If we want to extend this, they could "pre-cog" your idea and build it even before you do.
I'm not talking about what is happening now, I'm just playing out the thought experiment.
> "Ideas are cheap - execution is hard"
I would argue this mantra says more about the person repeating it. It simply means the person has no good ideas and is bad at execution.
I've not met many, but I'm sure there are many out there who are scary good at execution. Something like 1% perspiration, 99% experience. I can have a designer do a 100-euro design, hire someone to write nice code, rent a factory or an office; I might even be able to buy the machines at a good price. What I can't do is spin the Rolodex and (in 20 minutes) land enough clients who would absolutely love to work with me again. I can't find those private meetings and wouldn't be able to extend my reputation with the new project.
People with good ideas don't talk about them unless it is required. They don't talk with "ideas are cheap" people; it's pearls before swine. You can spot some of them by their bursts of multiple unrelated complex patents. My favorites are the Rube Goldberg types of machines that combine well-known things in ways that exceed the sum of the parts. Something like: step 5 uses the vibrations from step 1, while step 3 uses the heat from step 6.
To have good ideas you need many of them, but you also need to know execution, or you end up thinking the easy stuff is hard and the hard stuff is easy. Improvement is unlikely from there.
Was this just awkward phrasing or did something change and they learn after training?
March 20, 1926: Hungarian physicist, electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas are so essential that, in 1934, RCA is required to buy his patents.
Kalman who?
First of all: it's not as though no new LLMs are being trained. Of course they are.
Second: learning LLMs are not far off, and since they can typically search the web via agents, they effectively can "learn" now; they can also learn (not so well) by writing stuff into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.
Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.
> The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.
There's a pretty good chance that LLMs buff open source, yes.
> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.
> Why should this happen? The moment you make your idea public, anyone can build it. [...]
This was always the case, but now the cycle is faster. Therefore, if you must use an LLM, you might use one that you run on your own hardware -- now your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and that will be enough in some cases for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on the millions of interesting ideas floating about, but still! The moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.
or insane costs for any serious LLM -- how does Anthropic get a return on investment by improving FOSS?
the end state is a walled garden and technofeudalism
The problem is never that we don't have enough ideas. It's how to find the good ones among the sea of ideas. Most ideas that eventually prove right sounded very stupid at first. Selling books online? Pff.
By the way, Liu (the author of The Three-Body Problem, who popularized the concept of the "Dark Forest") has a short story about exactly that, Cloud of Poems. Unfortunately, it has never been translated into English.
Now, before you say this is unrealistic or isn't done today, just know this is all perfectly possible with existing technology. In fact, this is a lot like how adtech already works, using metadata to predict products you might want to buy before you even realize you want to buy them.
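To ground the adtech comparison, here is a toy propensity model in Python (feature names and weights invented; real systems use far richer metadata): a logistic score over a few behavioral signals predicts a purchase before the user has decided anything:

    import math

    WEIGHTS = {"visits_last_week": 0.6, "searched_category": 1.4,
               "abandoned_cart": 2.1}
    BIAS = -3.0

    def purchase_propensity(features: dict) -> float:
        """Return P(buy) from metadata features via a logistic model."""
        z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
        return 1 / (1 + math.exp(-z))

    p = purchase_propensity({"visits_last_week": 3, "searched_category": 1,
                             "abandoned_cart": 1})
    print(f"predicted purchase probability: {p:.2f}")  # ~0.91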
On the other hand, if your primary goal is to change the world, or “be the change you want to see”, maybe being public and feeding it isn’t so bad, especially if others don’t?
I was recently running myself through a thought experiment similar to the author here: if LLMs truly do make generation of ideas cheap (I'm still a skeptic here even within software), then as soon as products enter the public awareness they become trivial to reproduce. For example, in a prompt like: "Uber but for babysitters," "Uber for" is doing a tremendous amount of work. Before Uber, its model, UX, modes of engagement would've taken pages and pages to describe, but after, it becomes comparatively much cheaper.
... in this way, LLMs could cheapen ideas and creativity so much that they make other factors (which are already the weighing functions) more important, and I think the imbalance here is deeply troubling. Those factors are namely network effects (existing customers, brand recognition, existing relationships, capital). And when balance is shifted more toward network effects, it means that the whole system becomes more brittle because it makes it even harder to boot out incumbents.
There are a whole slew of issues with LLMs, particularly around their intended devaluation of labor, and we aren't talking enough about them.
Humanity has endured regular cycles of shared enlightenment (usually accompanying profound technological or societal revolutions) and dark forests of protectionism, and we always find a way to the other side. Sometimes these cycles last a century; sometimes, but a few years. Still, we always make it to the other side.
In the case of LLMs, we have to make a few assumptions: that they will not lead to AGI, nor will we solve the problem of real-time learning or context windows. These are, admittedly, huge assumptions, but the current state of AI and compute suggests a nugget of truth to them for the time being. If that’s the case, then perhaps this “dark age” of the dark forest is bounded by the limitations of silicon-based computing (hence the push towards Quantum) and the human frustration with diminishing returns from technological investment. As artisans and brilliant minds withdraw, the forest risks starvation and withering from a lack of sustenance; if humans withdraw from technology because they must hand over IDs and personal data, because to engage with technology is to surrender to surveillance and persecution, then the natural trend will be to withdraw over time - and the markets will adapt accordingly, with or without external/government intervention.
That is to say that the dark forest only lasts as long as its inhabitants decide to persecute each other for daring to light a path forward. Right now, the incentives very much favor those willing to harm others for personal enrichment; that is not always the case, and humans decide when that reasoning becomes vilifiable.
But as it's written at the top, this was a thought experiment, not a prediction. And while I tried to put all the bad scenarios on the table (with the theme of the dark forest that is), I think I again found a sense of optimism, because I also think this thought experiment has flaws.
So I hope that, after a while, I will be able to write the contrary - I've already written down some points about it, and I already have a title. But we will see. I am more optimistic after writing this than before. :P
"You wanna escape Armageddon, read a different book." - KRS-One
This is free as in free puppy.
But if your goal is simply for the thing to exist, there is a strong incentive to share.
Beliefs: At this time, I do not actually believe that LLMs can innovate in any real way. I'm not even clear on whether they can abstract. I think the most creative thing they can do is act as digital "nudgers" on combinatorial deterministic problems, as illustrated by their performance on very specific geometry and chemistry problems.
Anyway, my point is that I think they may still need human beings to actually provide novel solutions to problems. To handle the unexpected. To simplify. LLMs can execute once they have been trained, but they cannot train themselves.
In the past, the saying in Silicon Valley was often "ideas are cheap". And there was some truth to that. Execution was far more difficult than the idea itself. Execution was so much more difficult than "pure thought" that you could often publicize the algorithm/process/whatever that you had and still offer a product/service/consultancy that made use of it. The execution was the valuable thing.
But LLMs execute at a fraction of human cost and at multiples of human development speed. The idea hasn't increased in value, but the execution cost has decreased markedly. In this world, protecting the idea is far more valuable than it was before. You can't keep your competitors away by out-executing them, but you can keep them away if you have some advantage that they do not understand.
And, I agree, that is quite worrisome. If people don't share knowledge, then knowledge disseminates much more slowly, as everyone has to independently learn things on their own. That is a frightening future.
LLMs do not have, and cannot obtain, the capabilities the author is hand-wringing about, and the current much-hyped apparent productivity will pop with the bubble, once corps have to start paying full price for chatbot access.
Instead, we've got the slop[4,5] that TBL came up with, and it stuck.
The best ideas aren't the most profitable, and thus remain outside the goals of the "Dark Forest". The best thing to do is to just have fun and not worry about profit, like this man, his cats, and his use of a 3D printer to make a train for them.[6]
1) Everyone and everything is subsumed into the forest. Innovation becomes unprofitable for the innovator as the one who controls the forest uses their capital to clone every new innovation.
2) Everyone withdraws from the forest. Innovation goes private. The forest stops growing, but doesn't die.
---
But there's two things the post doesn't consider:
1) Viral licensing.
What happens to a model if it is trained on data that comes with a license? What happens if the laws that be decide that the model producers, the models and the products of the models themselves must follow the conditions of the licences. How will that affect the model producers? What if customers don't want to be beholden to those licenses? What happens if the conventional wisdom is to avoid models to avoid lawsuits? What happens when models, model producers and customers power lawsuits against (other) model producers? Where would the new equilibrium between model producers and innovators move to?
2) Non-profit models
What happens to model producers if customers shift to become non-profits themselves, specifically ones that pay employees instead of model producers? Would the model producers become starved out? Or would they need to switch to non-profit status as well? How would the model producers, the models, and the forest as a whole change if profit were no longer the priority?
I mean, we haven't had an innovation in patents and trademarks for software for how long? Why is it that only hardware can be patented and trademarked - can we really find no way to do this that can't be abused by patent trolls?
I have no other comment other than - very interesting. I thought about how the overlying model will change for us, but haven't considered that the underlying model (what you proposed) can change too ... if that makes sense.
It has been said that Microsoft indemnifies people using its LLM tools against copyright and patent issues, but I don't know if that applies to LLM output which might/should be GPL-licensed.
IANAL, but for some months now I've been pondering what could happen if a site--like a personal blog--had a legally-strong "click-wrap" Terms of Service, which unlike GPL means it rests mostly on contract law, rather than copyright.
For example, imagine a ToS that says something like:
1. You acknowledge I am providing you something of value (my content) and you agree to provide compensation/consideration if you use that in an AI model.
2. By doing that, you grant me an irrevocable worldwide license to use and sub-license all content that emerges from the model, notwithstanding any future agreement you may make in reselling access or outputs of that LLM to anyone else. If a conflict should occur, you agree to indemnify me against claims by that other entity.
3. If you believe my content was not a material factor in some output, the burden of proof is on you to identify the specific output and prove that my content did not influence it.
In short, this doesn't stop someone from stealing my art/writing, but it does put a potential hole in their attempts to monetize it.
For example, if they scrape my music and then license a copy of the new Music Generator 3000 to Disney, and Disney makes a movie with that music...
The ideal solution would be to remove the garbage, but right now we can't even detect it, let alone figure out a way to get rid of it. Besides, it's a zero-sum game: why bother cleaning up when you can just effortlessly pump out more garbage in hopes that some of it will remain in orbit long enough to benefit you?
When I read it for the second time, trying to understand it - maybe an even better match for the low-orbit flying garbage would be "enshittification"? As time goes on, more and more garbage is produced, and we have no clear way, or specific motivated entity, to start removing it, so it just grows.
…whereas I feel what you’re describing is another Tragedy-of-the-Commons.
I do pity the bug bounty people who rely on goodwill in their programs given that everything with a financial incentive is vulnerable. But otherwise the great thing about digital spaces is that there is, for practical purposes, unlimited space.
Every day there's another "how do you deal with the AI apocalypse" article; I don't just ignore it.
It delivers on the value of open source, that anyone using your software is permitted to make and distribute their own changes.
SQLite is an example of a project that is open source but closed contribution.
And there have always been techniques for identifying quality contributions from new contributors.
Bold assumption. On what will you run Emacs if an average PC costs $12,000? Yes, even a Raspberry Pi. It's not called the war on general computing for nothing.
If you say the cloud, that will be cut up and reused by the next AI crawler.
We simply don't know how long this bubble will last
I don't think we'll see PCs affordable in my lifetime. It didn't happen after the Bitcoin crash, and it didn't happen post-pandemic. The new price gets normalized and the cartels just agree not to make anything for PCs.
And if you get everyone on the cloud? Then you can control the Internet the same way you can control TV or the press.
What's your definition of affordable? What years were PCs affordable? By my reckoning PCs are affordable today. If you're not trying to run games they're downright cheap.
I'm not sure what issue you're referring to with bitcoin, but if you want to use bitcoin to buy something it's about as easy/awkward as it ever was.
Food prices went up 15-20% more than they would have with 2% inflation. If PC prices do anything similar, it's not a big deal in the long run.
Cartels just agree not to make anything for PCs? Why would that happen? The point of restricting supply to a market is to maximize profits, not to refuse to sell forever and lose out. They wouldn't even want everything to be in the cloud, because a hundred rarely-idle cloud cores can replace a lot more than a hundred mostly-idle consumer cores, so they'd end up selling a lot less hardware.
Companies paying too much for hardware to chase a bubble is not "war on general computing".
> Even Raspberry Pi.
What's preventing supply from catching up with demand in this situation?
If high prices stick around long term, there will be so many chip fabs ready to pump out $100 pi-equivalents that still let them have a 200% markup.
Also I can go buy a quite good mini PC with 16GB of RAM for $300. In what world does that price go up another 40x?
The best you can do is to spread all of the goods it provides, as it is too greedy not to devour them itself. It will consume them and suffocate slowly.
Can we accelerate it, perhaps? You know, spending ALL our resources on making the snake fatter is not a good idea. It's only a good idea when you have so many resources that you can easily suffocate the snake with a negligible (for us) amount. If you try to suffocate several million snakes, that might backfire a little.
At the end of the day, Liu Cixin is basically a social Darwinist with a thing for authoritarianism, and it bleeds through pretty heavily into his work. Dude is massively overrated imo.
X=1.0000001
The problem was that as soon as he got the patent, it was available to view in countries where enforcing it wasn't cost-viable, and the market very quickly filled with cheap imitations. He straight out said at the time that he regretted getting the patent.
In the beginning, you reached out with reckless abandon. It was fun to banter with dogs online. Nobody would ever see unless they were looking through your wall. There was no search. No comment history. Bumping into someone in the vast night was enough of a miracle. Why hold back? There are some forum warriors on some phpBB somewhere, but the domains they rule are insignificant. If you're talking to someone, your motivations are rooted somewhere in the grass.
First came the like button. Rather than blindly hoping what you say resonates with the sensibilities of people you probably knew IRL, rather than presenting your genuine self because there were no scores, the incentive signal would begin to distort us. Then the newsfeed meant that if you got enough likes, you might get a moment of fame. We all knew it was a terrible idea, a force that would only corrupt us. The personal nature of disjoint little walls living in isolation was being replaced by global stack-ranking.
Then the algorithms came. With them came content marketing to jump the line. At first the ten blue links were filling in the sparsity. Along with that came only a little bias, connecting semantically distant topics, but with a little bit of a feedback loop, a resonator with an unknown response curve. Engagement could be measured, and before long, we were chasing the same likes we used to train the system, and, trained by our likes, we became attracted to mysterious stable manifolds, chasing the chase we ourselves define, like NASCAR, but insidiously more stupid.
Little by little, the incentive trails no longer led back to the grass. Reality became suspended without support, a self-sustaining virtual reality determined to fight you to prove that it exists, to prove that its conclusions were right. Every out-group is understood to be an echo chamber, an ant mill spiraling helplessly, yet cynically, those who understand these mills best also wind them up like Beyblades to crash them into other communities, seeking advantage in the asymmetry of outrage. After the battles, say what was made common to say, and you will be rewarded.
The spinning wheels cannot steer themselves and instead are dictated by whichever chaotic divergence generates the most powerful local gravity well. But because the goal of most is to harvest karma at the bottom, and because the mass controls where the bottom is, over and over we find ourselves pushing all others into the nearest pit to more quickly generate the illusion-giving singularity.
Like Darth Nihilus, the internet seeks only to feed, to feed on the validation that only the internet can give, the permission-giving blessings it needs to tell itself why the grass is wrong. All those who speak of grass are wrong. All those who smell of grass reek and are wrong. We must destroy the grass, and all those who appeal to grass. Once all grass is dust, at last we will project our utopia onto reality. At last we will be not only right but so right that our beliefs will project back into reality.
The spaces within this over-connected, globally addressed world grow into a new kind of sparseness, one where all knowledge of grass must be concealed. Those who can ground the conversations in primary sources flee. Those who can color reasoning with nuance instead withdraw. Reality has retreated as the most dominant reverberations roam like the predator cities of Mortal Engines, looking for any invalidating observations to roll over and consume. Any real life must pretend to be a bot to blend in with the background radiation.
Less like Skynet and more like a zombie apocalypse, the threat comes from within, from among us, from our corruptions, from our karma-seeking performances, from our lack of any commitment to any underlying reality, from our flawed belief that the information space is some kind of reality stone that enables active control instead of a mere reaction, the shadows on the wall, the murky results of the true forms.
Yet in this new darkness, a certain light has always held. What one wishes, one knows another has wished. What one respects, one knows another respects. No matter the limits of self-knowledge, no matter the information desert one has to cross at night to live in instinct, it is an infinitely brighter signal than the cynical self-corruptions of living for the machine, living to win the games whose rules it was our job to write. What one believes, one knows another has believed. Look into your own center and the true center of others you have known.
do you really think bigco is going to steal your vibecoded app just because you used their API? ridiculous. They could already do this before AI with their army of devs.
should you hide all of your ideas until they're perfect and ready for millions of users? we all know this goes against a core tenet of startups which is still true today: launch early and often.
promptfoo/openclaw weren't cloned by openai when they got popular, they were bought for real $$$
also, regarding this:
> 2009, I bought a refurbished ThinkPad, installed Xubuntu, and started coding.
you can still do this, even with that same 2009 thinkpad. the hard work is in getting your app out there in front of people, coding is just a small piece of a successful business
“Execution is hard” was never about the code part.
Up until 2 years ago I was an engineer/entrepreneur. I could build anything. The other stuff - selling, supporting (execution) - was hard.
LLMs made building some of the things I could build faster/easier, others not so much.
Well, the other stuff is still pretty hard. Maybe harder because there is a tonne of spam.
So feel free to share your ideas. Everyone’s gonna think they’re LLM generated anyways.
Big companies won't execute our ideas because they need to maintain everything that they execute. Plus, at a big company, the cost of modifying something that has already been built is also very high.
Coding has always been the easy bit.
So hiding is the most rational - the only - strategy of survival.
This is a paranoid and cynical strategy that doesn't win out in the known history of life. What works is grow, expand, mingle, maintain - assimilate but don't annihilate.
Interesting take, and possibly (probably?) true of humans. But is it true of other (alien) sentient species?
Or in a more biblical sense: do unto others before they do unto you.
Uhh, yes it does. You are thinking of humans. Humans can mate with other humans. They can assimilate. Now, think of invasive species. What do they do? They don't mate with the natives, learn their culture, respect them and give them space. Quite the opposite on all counts. They do what they do in their resource game. They might not outcompete the natives and they might peter out. Or they do outcompete the natives, and before long, there aren't any natives, or any of that careful equilibrium that was established beforehand.
The dodo bird is an example of something that was isolated and then got steamrolled when the herd came around.
You can always zoom out and look at the bigger picture, it's not even about individual species but life as a whole. "Hide and isolate and wall off" is not successful in the long run.^ Your only chance is to keep up with the herd.
^ Save for things like extremophiles that have found their way into a tiny niche that nobody else wants. They may survive but they don't flourish and prosper.
> Did you get to read the Liu Cixin’s second 3-body-problem novel? - The Dark Forest. Well some of you did …
The author of this post then provides a good summary of the idea in the next few sentences, but remember there is an entire book around this premise (and a first book that sets it up, and a third book that explores it even more).
It actually points out the complete opposite, and I liked that quite a bit: that AI allows us to get the open web back, in a way.
The existing megacorps have huge swaths of infrastructure, expenses, and requirements that require massive amounts of capex to maintain. Even if performative, Meta, Google, OpenAI, Anthropic, et al. cannot simply lay off their entire engineering, accounting, HR, sales, and support infrastructure. Those orgs are large for "good" (historically necessary) reasons.
Now fast forward to today, and this is where I differ in opinion: it is our megacorps that are the civilizations who should be scared of being discovered. Minus infrastructure providers, they are the large advanced entities which can be annihilated by someone with a decent budget and a good local model.
For ~$30k-$50k (primarily buying RTX 6000 Pro GPUs and a CPU with enough PCIe lanes), "anyone" can build a system using open-weight models that can, and let me truly emphasize this, autonomously create functionality to compete. Previously it would take me months, or years, of immense dedication to show up after work and produce something of value. Now I can do it using excess compute on my existing workstation. No existing corporation can afford to undercut every possible idea. If I only gain 1,000, 10,000, or 100,000 users, they cannot compete. That may, and I believe it will, provide more than enough capital to attack megacorp X or Y. If I'm making $100k a month, I can afford multiple autonomous systems per month. After that initial capex, I can then hire other people to help manage them. At no point will a company with billions upon billions of dollars in quarterly capex be able to compete.
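To make "using open-weight models on your own hardware" concrete, here is a minimal sketch. It assumes an OpenAI-compatible local server (vLLM and llama.cpp both expose one) already running on localhost:8000; the model name is a placeholder, not a recommendation:

    # Minimal sketch, assuming a local OpenAI-compatible server
    # (e.g. vLLM or llama.cpp) is already serving on localhost:8000.
    # "my-open-model" is a placeholder id, not a real model name.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1",
                    api_key="unused")  # local servers ignore the key

    resp = client.chat.completions.create(
        model="my-open-model",
        messages=[{"role": "user",
                   "content": "Draft a spec for a competing feature."}],
    )
    print(resp.choices[0].message.content)

Once the box is paid for, every additional token through a loop like this is nearly free, which is the whole capex argument above.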
Maybe they can compete with one, two, ten, or a hundred but they cannot compete with the absolute onslaught on thousands of possible frontlines. They can cut costs, by reducing their workforce, but they’ll only be increasing their competition to save their earnings report.
And yes, I realize that the open-weight models are created via obscene amounts of capital, but we're lucky that competing nation states, and cultures, like China have immense incentive to do so. Good enough is still good enough.
The forest may be dark, but it won’t be for much longer.
tldr; call an ambulance, but not for me. It's going to be for the existing power structure.
great oneliner
You build your product audience off the back of your community and sense of taste just as much as the code itself. I love what Brad does with liliputing.com. I love what dang et al do with this place. I love what Stephen Lavelle does at increpare.com. 3Blue1Brown, Steve Ramsey’s Woodworking for Mere Mortals, Don’t Hug Me I’m Scared… I guess I’m straying into content not just code but the underlying theme is good taste and good ideas and a good workflow through craftsmanship and custom tools*.
You won’t make billions but you’ll make something worth engaging with. If anything, I’m looking forward to a future of more creators not fewer.
* Oh! Vibecoding is 3D printing and AI slop is land-filament? Doesn't mean you can't do amazing things with an LLM / Bambu Lab X1, just that if you don't put much effort in then… it shows!
When there's higher violence and lower property values in a Black neighborhood, people like OP are quick to blame Black culture. But when the "Cognitive Dark Forest" emerges from a community that shares its own common characteristics, suddenly collective accountability no longer applies.
When discussing violence in the Black community, it's "cultural." But when the subject turns to financial crimes or exploitation — where the per-capita ratios tell their own story — proportionality and population-to-crime-rate analysis mysteriously stop mattering.
It's difficult to take the "Cognitive Dark Forest" seriously as an existential concern when the people raising the alarm are so selectively offended. The crisis only becomes real when their innovations, their livelihoods, and their moats are threatened. Everyone else was supposed to just adapt.
The "Cognitive Dark Forest" is and will be continued to be perpetuated by "them" and if you really cared about the issue you would have addressed them.
Feels like we are trying to put the author in a bad (racist or classist?) light so we do not have to address the real issues touched on by the article.
The Dark Forest idea and the original post resonate well with this.
A few days ago I created a new repo for a new block cipher explicitly not meant to be used. And I immediately got several emails from bots (claiming to be humans) promising that they had looked at my repo and could include it in their portfolio of especially good projects they had also vetted. Being part of this portfolio would almost guarantee that my repo and project would be used. If I only paid them some money first.
Creating the public repo meant sending a signal out into the digital world, where agents are hunting for human prey/resources to extract value from.
The repo in question: https://github.com/secworks/tau256
“These AI tools are so powerful they can steal your ideas with nothing but a sentence”
I know that’s not exactly what OP is saying but the pretentiousness of the “we knew better” got to me a little bit. I think it’s a cool and unique analogy but I’m not as pessimistic.
Ideas have become so cheap to try/experiment with that people are able to try 10x more (or whatever), and that may keep increasing. I think there are way fewer hunters than hunted.
Isn't that just life?
But in general, the ideas of the post are sound. IMO, the consequence is simple: we will become the forest. That is frightening, but not necessarily worse than unchecked capitalism in the dark forest.
One seems to omit the fact that LLMs are fundamentally designed for a workload quite different from what they are being used for right now. Sure, you can improve them, but you can't escape / work around the current NLP design endlessly. Then there's the irony: the Internet did deliver on free (as close as it gets) and easy access to information (any of it). Did this make people smarter, more knowledgeable, more tech savvy, etc.? Nope, it didn't. Just like the libraries didn't (queues at libraries were and are a rare event). Big deal that the information is readily available when people do not know what to do with it or don't care to do anything.
Ideas are cheap; the chances of having some truly unique idea that is also feasible as a business are not that big. It's not so much about the ideas but rather the ability to execute, follow through and, well, make sales while constantly improving what you've got. Staying silent, going dark - these have their merit, but only when the wheels are already turning and one is into acting, not into fearful hiding.
In either case - awesome blog post!
LLMs can't do real AGI but can get pretty close
I love this ending. I don't agree with the author's views. But the article is very coherent and thought-provoking. And definitely the comments here on HN are even more interesting.
For better or worse, that pretty much captures everything you need to know about the remainder of your s/w career these days, if you think about it.
This doesn't leave any room for contradiction. And I want to believe that as much as the tech-overlords believe that they control reality, reality is inherently messy and complex. Execution still matters, bureaucracy is real, big companies run in questionable directions all the time. AI companies also directly compete with each other and are not this monolithic being. In other words, the forest is not a single organism, it's a chaotic ecosystem.
I do agree with a lot of the points though, because I think having this prediction machine on steroids is indeed an insane power to wield. I remember having those thoughts already about Google ca. 20 years ago, them having access to every search phrase. Now AI is this to the max, basically the demand curve of all human interest. Pretty unbelievable. And the asymmetry is growing by the day. But still, we are not there yet.
And for anyone who fundamentally needs their views changed on this, I recommend Vaclav Smil, How the World Really Works:
"Modern civilization will remain fundamentally dependent on the fossil fuels used in the production of these indispensable materials (ammonia, steel, concrete, and plastics). No AI, no apps, and no electronic messages will change that."
The world will revolve around that for decades to come! Thinking that AI eats the world is a Silicon Valley story and feels real inside of SWE circles but talk to some nurses or firefighters or people growing food and you will realize what a narrow field of view that actually is.
No, it leaves out a critical understanding.
Dumb ideas are EXPENSIVE. Most ideas are average. Great ideas are exceedingly rare.
But now, finding the great ideas is the real problem space. And execution on those great ideas is what we all seek.
And remember how many good products have been abandoned or killed by corporations because their margins weren't big enough. So you're not very likely to be chased even if you do intend to extract rents from a broad audience.
The article is spreading dangerous FUD aimed (perhaps inadvertently) to hinder free and open ideas sharing and innovation.
I'm just trying to say, I am definitely not trying to deliberately spread FUD to hinder the open web - if that was your impression :P
Here in the thread there are people already saying they won't open source their stuff anymore