262 points by sedev | Apr 2, 2026

20 Comments

dbt00 | Apr 2, 2026
2004, actually, with a minor update in 2008. Coincidentally, this was the same principle I used at the time to disbelieve the same thing.
nkurz | Apr 2, 2026
I think the standard is that the parenthesized date shows the last update, not the original. Is this not correct?
derrak | Apr 2, 2026
Makes me think of academic papers that overhype their contribution. Also makes me think about AI hype.
rawgabbit | Apr 2, 2026
For me the danger of AI is that it enables the surveillance state through facial recognition and the instantaneous aggregation of all my data. For "national security" reasons, I may be detained and denied my rights if Palantir hallucinates. Who do I sue if Palantir decides I am an illegal?
mememememememo | Apr 2, 2026
Or worse, because it didn't hallucinate, and they are coming for you as a free-thinking "radical". They can tell from a long-deleted blog post you made in 2005 about green energy.
asdff | Apr 2, 2026
Why bother with all that though? Just ask them to do their job for the party. If they don't, or you suspect they don't align with the party, you just execute them. Don't need tech for this. The tech is just for some people to get rich, not to really enable any new evil that can't already be achieved today with pen and paper and bullet (as modeled extensively in the last century).

Put it this way, if Hitler had grok, would it really get any worse for the Jews? I don't think so. I think they would be screwed no matter what.

mememememememo | Apr 2, 2026
Because you can't do the Nazi Germany thing these days. I mean... disgust aside, it kinda failed. But you can spy on people under "national security" while keeping them feeling happy enough. And that arrangement can last 1000 years.
asdff | Apr 2, 2026
Still not convinced that AI is offering anything new here, especially when the statistics you'd reach for are often 100 years old or more. Bayes' theorem is older than the United States. I think among lay people there is a lot of conflation between AI and statistics, and also a lack of understanding of the state of that field and how mature it is. Nazi Germany of course heavily used statistical modeling and even contracted with IBM to quantify Jewish populations.
pixl97 | Apr 2, 2026
This point of view is kind of silly when you think about it. They used the modeling going after Jews, but going after the people who were German but hid Jews was much more difficult. With modern AI/statistical modeling they'd take all those people too.
dragonwriter | Apr 2, 2026
> Because you can't do the Nazi Germany thing these days. I mean... disgust aside, it kinda failed.

It failed because Nazi Germany was not militarily superior to the combination of nations that got upset with it externally, not because of any internal failure of control. While it's nice to think that Nazi Germany "failing" somehow disproves the viability of the same broad kind of one-party, massacre-the-opposition totalitarianism, it isn't really justified.

bluGill | Apr 3, 2026
Most of what we believe about Nazi Germany is after-the-fact revision. They were popular around the world in the 1930s - for their plan to deal with the Jews. It was only after they went to war that we decided they were bad for that plan as well. (Some people were opposed to the plan all along, but there were plenty who were in favor of it.)
atq2119 | Apr 3, 2026
> if Hitler had grok, would it really get any worse for the Jews

Not grok specifically, but yes.

The holocaust in the Netherlands was remarkably bad in large part because the Dutch administration was so well-organized and had kept a registry of Jews.

Bad guys are going to use this technology to evil ends if given the chance.

BTW, there's a chilling alternate history novel called NSA by German author Andreas Eschbach about precisely that kind of idea. The premise is that computer science progressed a lot more quickly. The book opens with German data scientists in the 1930s combining census and financial transaction data (i.e., food purchases using electronic cash) to identify households that are hiding Jews and other "undesirables".

jazz9k | Apr 2, 2026
Another danger I see is job loss. Who wants a world where the wealthy control AI and completely destroy the middle class?
yunnpp | Apr 2, 2026
The wealthy.
asdff | Apr 2, 2026
The thing is a government never needed technology to be authoritarian. The government today already has all the tools to ruin your life. It had them in 1940. It had them in 1840 and it had them in the year 40 as well. And that tool is known as the monopoly on violence. It can be wielded in many ways good and bad.
bluefirebrand | Apr 2, 2026
This is all true, but surely you can see how automating the authoritarian bent of the government still makes things worse than before?
flextherulerApr 2, 2026
You're confusing autocratic with authoritarian. Total war reached its most recent zenith in the 20th century. If governments have always been able to control people to the same degree, why was not until Napoleon that we saw the beginnings of nationalism? I say this rhetorically, as it is quite obvious that it was technology and industrialization. When we look at ancient Empires and see their territory on a map it would be much more accurate to only highlight population centers not the entirety of the land. Illiterate farmers, who made up the majority of the world, resided in small towns and villages and their daily lives were largely unaffected by conquerors.
asdff | Apr 3, 2026
There was nationalism pre-Napoleon. Arguably East Asia is a better example than European history, IMO. I would say there is a strong sense of nationalism among Han Chinese both now and in history. Likewise for Japan and Korea, and pre-Islam Persia as well. I guess the source of this was consistent centralized authority over a large region rather than any technological change. You had that in East Asia. You didn't have that in Europe after Roman times. Even larger empires like the Kingdom of Spain were not really seen as "Spain" as we know it, but as a unified monarchy over the kingdoms of Castile, Leon, Aragon, Sicily, and Naples. Interestingly you didn't really have that in India either; no one controlled the subcontinent until Mughal times, and by then the religious and cultural regional differences were pretty set in stone.
lmm | Apr 3, 2026
> The thing is a government never needed technology to be authoritarian. The government today already has all the tools to ruin your life. It had them in 1940. It had them in 1840 and it had them in the year 40 as well. And that tool is known as the monopoly on violence. It can be wielded in many ways good and bad.

Not to the same extent. An army of humans is obedient up to a point, but there is a limit to what orders you can give them. When the officers are algorithms that limitation is a lot weaker.

asdff | Apr 3, 2026
> An army of humans is obedient up to a point, but there is a limit to what orders you can give them.

Whatever that limit might be is genuinely terrifying, given how far obedient soldiers have gone and not hit such a limit many times over the past.

thaumasiotes | Apr 3, 2026
> The government today already has all the tools to ruin your life. It had them in 1940. It had them in 1840 and it had them in the year 40 as well. And that tool is known as the monopoly on violence.

There are a couple of problems with this:

1. As a matter of raw empirical fact, a government around the year 40 wasn't too likely to possess a monopoly on violence.

2. A monopoly on violence isn't necessary to ruin your life. A simple nonexclusive license, which governments of the period did have, is sufficient.

themafia | Apr 3, 2026
> It had them in 1940. It had them in 1840

Yea, and they were way more successful at it in 1940 than 1840. Are you accounting for all the times they tried to enforce their authority but ultimately failed?

> And that tool is known as the monopoly on violence.

No one has a monopoly on violence. What they really have is called "qualified immunity."

In this particular instance, though, their violence is particularly enabled by cheap technology and computing power.

0x3f | Apr 2, 2026
Human law enforcement hallucinates all the time. This is a bit like a poor argument against self-driving.
achierius | Apr 2, 2026
The last line of GP's comment is key here: "Who do I sue if Palantir decides I am an illegal?"

This shouldn't make as much of a difference as it does, but due to how our legal system works, it's much harder to get meaningful legal satisfaction when an algorithm (or other inhuman distributed system) commits a crime against a person than when a person does so.

0x3f | Apr 3, 2026
I think you're confused about the mechanism involved. It's hard to get satisfaction due to e.g. qualified immunity. The fact they use technology is largely irrelevant. You couldn't sue the NSA for spying on you before AI either.
pixl97 | Apr 2, 2026
"Computer says no", look it up.

Cars measure success by not hitting things.

Cops measure success by number of people they arrest. Note, not the number of people found guilty, that's the prosecutor.

Cops will gladly use a hallucinating computer system to beat the absolute fuck out of you with qualified immunity.

0x3f | Apr 3, 2026
> Cops measure success by number of people they arrest.

Simply not true. Police KPIs tend to focus on crime rates.

Regardless, they can mistreat you with or without AI backing.

nostrademons | Apr 2, 2026
Interesting that this quote was initially about stock options at tech companies. It turned out that stock options did become nearly universal in tech compensation, and companies that granted them outcompeted companies that did not. So the management that was ostensibly “doing a massive blag at the expense of shareholders” wasn’t really, time vindicated their practices and things like option backdating and not treating them as an expense weren’t even really necessary, but it took a few years. It wasn’t obvious in 2002 that this is how it would play out.

And relevant to the title quote: maybe it should be amended to “good ideas do not need a lot of lies to gain public acceptance eventually”. The dynamic here is that a significant part of public opinion is simply “well, this is how things work now, and it seems to be working”, and any new and innovative idea by definition is not going to be how things work now. The lies are needed to spur action and disturb the equilibrium of today. But if you’re still telling lies a few years in, you’ve failed and it’s a bad idea to begin with.

peacebeard | Apr 2, 2026
So in your view, even a useful innovative idea cannot gain traction without being overhyped?
SpicyLemonZest | Apr 2, 2026
Almost any useful innovation is going to have a right tail of people who overhype it. They shouldn't, and I wish they wouldn't. But if your strategy for evaluating new ideas is to find the biggest sources of hype and fact check them, you're going to systematically undervalue innovation.
6510 | Apr 2, 2026
The problem is that many care more about presentation than substance. The irony becomes overwhelming when the boring option is usually the best solution and the least exciting.
indymike | Apr 2, 2026
There was a body of evidence well before 2002 that dealing employees in was a good move.
commandlinefan | Apr 2, 2026
> stock options did become nearly universal in tech compensation

Although I've noticed that options have been replaced more and more these days with RSUs (plain old grants) because options have a tendency to go "underwater", suggesting that they weren't all that great to begin with.

lotsofpulp | Apr 2, 2026
It’s been standard advice on this forum for at least 10 years to value options at $0, and only consider cash comp + RSUs.
0x3f | Apr 2, 2026
Options have some minor value in signalling that you're a true believer. You should in fact care only about base salary, but not telling the people doing the hiring that can be quite useful. Doing a fake come-down on base in exchange for options shows you are invested and surely worth hiring.
zozbot234 | Apr 2, 2026
Right, options go underwater precisely when the company is not doing well and you are at greatest risk of losing the job. That's not a great risk profile.
JumpCrisscross | Apr 2, 2026
> options have been replaced more and more these days with RSUs (plain old grants)

RSUs are also much less liquid and more tightly controllable by companies than actual stock. That has made them attractive to management and insiders.

bluGill | Apr 3, 2026
I learned long ago (when my company decided they couldn't give me options because we were too big, so they gave out these "I can't believe it isn't an option" grants, which expired worthless): until cash is in my bank account, it is just a promise waiting to be broken. If I want to invest, I want it to be my choice.

In any case, it is a bad idea to invest in the company you work for - unless you are high enough up in the company that you see the real books, or you have so much invested they have to show you as a large shareholder. (Nobody is the latter - large shareholders have a full-time job managing their money, not working for someone else.) There have been a number of cases where a company unexpectedly filed for bankruptcy and someone lost their job and their savings on the same day.

nostrademons | Apr 3, 2026
> In any case, it is a bad idea to invest in the company you work for

I'd question this conventional wisdom, simply because you have a lot more information about the company as an employee than a random investor does, even if you are not in possession of things like financials that the SEC considers "material non-public information". Things like culture, intelligence of your coworkers, whether or not you're actually delivering on your commitments, how many feature requests and bug reports you get from your customers, mood of management, perks offered, etc. are all intangibles, but they are usually better predictors of long-term company performance than the financials that the company gives investors.

If your company is not doing well enough or is not something that you would consider investing in, you should find a different company to work for. Bad things are going to happen in your future, regardless of whether you own shares or not.

raw_anon_1111 | Apr 3, 2026
No you don't. If you did, you would be subject to lockouts. The average rank-and-file employee at any BigTech company knows only a minuscule amount more than the general public.

Amazon for instance has over 1 million employees. You know nothing about most of your coworkers or whether other teams are delivering features.

youarentrightjr | Apr 3, 2026
> The average rank and file employee at any BigTech company knows only a minuscule amount more than the general public.

Huh? We're not talking about the custodial staff.

> Amazon for instance has over 1 million employees. You know nothing about most of your coworkers or whether other teams are delivering features

This is a hilarious example; especially at Amazon, "rank and file" employees are privy to $100M+ AWS deals, they have to implement them after all.

raw_anon_1111 | Apr 3, 2026
I worked for AWS in Professional Services (full time blue badge employee). Part of “sales”. Even when we talked internally asking for advice from the service teams (the people who worked on the various AWS services) or even internally within ProServe outside the project team, when we spoke on Slack, we didn’t mention the customers in Slack channels outside of a need to know basis and used the acronym “IHAC” (I have a customer) when referring to the customer.

I assure you the random developer on the EC2 service team for instance knew nothing about the sales deals.

Also, a “$100 million dollar sales deal” is a nothingburger for AWS, not enough to move the market.

Do you think someone on the Alexa team in the retail division (“CDO”) knew anything about what was going on within AWS?

youarentrightjr | Apr 3, 2026
> Do you think someone on the Alexa team in the retail division (“CDO”) knew anything about what was going on within AWS?

Hmm, no?

As a solutions architect at Amazon I was very much a "rank and file" employee, and privy to large deals, so I'm not sure what you're on about. I haven't heard of Professional Services, presumably you guys had different responsibilities.

raw_anon_1111 | Apr 3, 2026
So you worked at AWS as an SA and never tried to sell its own internal consulting services?

https://aws.amazon.com/professional-services/

But either way, it’s a monumentally weird statement to think that anyone besides “janitors” would know anything about the deals that would go through, or to think a “$100 million sales deal” would move the needle, especially as we see right now that AMZN is tanking because they reported they will spend more than all of their free cash flow on capex for AI. You couldn’t have predicted that.

youarentrightjr | Apr 3, 2026
> So you worked at AWS as an SA and never tried to sell its own internal consulting services?

Not sure I understand the value proposition here, but then again Amazon is known for having redundant teams every now and again.

raw_anon_1111 | Apr 3, 2026
SAs are not allowed to give the customer code or actually do anything. When a customer signs a contract (SOW) with ProServe, they are billable consultants who actually do implementations. Even they can’t touch production workloads; they basically do everything in non-production environments and teach the customer how to do the work and move it into production.
p_l | Apr 3, 2026
I used to be on a project that, IMHO, had possibly considerable impact on capabilities and even some specific financials in a publicly traded corporation.

After about third earnings call (which happened a tiny bit before the trading window for our stock grants opened), I (re)learned the hard lesson that even if we delivered and I had actual, material, move the needle impact on corporate financials, that would not translate in any way to stock price. Except maybe if I pushed it really, really, down by causing an avalanche of problems that resulted in some big name deal going down.

Stock prices are vibe-based. Once a company is publicly traded, your share value will be based on whatever vibes pushed the numbers in Excel around the earnings call, and it's a perfectly normal occurrence to beat expected earnings per share for three quarters straight and every quarter get a different vibed-off reason as to why the price should go down.

pjc50 | Apr 2, 2026
The specific lie discussed was the idea that granting options was not somehow an "expense" and could be excluded from the accounts.

(Google tells me this is a relevant summary of US GAAP https://carta.com/uk/en/learn/startups/equity-management/asc... )

nostrademons | Apr 3, 2026
That's not what the quote in the article is:

> Our lecturer, in summing up the debate, made the not unreasonable point that if stock options really were a fantastic tool which unleashed the creative power in every employee, everyone would want to expense as many of them as possible, the better to boast about how innovative, empowered and fantastic they were.

That's saying that it's stock options themselves which are the bad idea. The lie is in how they are expensed or not expensed. The point the accountant is making is that if stock options were a good idea, they could be expensed, thus not needing the lie.

But nowadays, stock options are expensed, right there in public, and they are still considered a good idea.

camgunz | Apr 3, 2026
Nah I think the advice generally is "ignore options."
zdc1 | Apr 3, 2026
People who sell lottery tickets, on average, do better than those who buy them. The same applies to stock options. Which is why "bonus" options are fine, but "buying" them by taking ESOP over potential salary can be a bad choice.
AnthonyMouse | Apr 3, 2026
> The specific lie discussed was the idea that granting options was not somehow an "expense" and could be excluded from the accounts.

Stock options for the company's own stock are kind of weird because the company can issue its own stock, which puts them in a much different position than someone selling uncovered calls.

An uncovered call is a potentially unbounded liability. If you issued someone options to buy 10,000 shares for $10 each and then the price went up to $1000, you could be on the hook to have to buy $10M in shares and then sell them for $100,000, i.e. you'd take a $9.9M future loss, and the risk of that is a significant liability.

Whereas if you have 10,000 shares and agree to sell them for $10 each and then the price goes up to $1000 before they pay you, you don't actually owe anyone that extra money; you just failed to make the $9.9M gain you otherwise would have. It's the same as if you'd sold (or issued new) shares for $10 immediately. But we don't generally book "opportunity cost of selling shares for the current market price" as an expense, do we?
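
To make the contrast concrete, here is a toy sketch of the arithmetic in the comment above (function names are my own; the numbers are the hypothetical ones from the comment). Both calculations produce the same $9.9M figure, but only the uncovered writer's is a real cash outflow:

```python
def uncovered_call_loss(shares: int, strike: int, market: int) -> int:
    """Cash loss to an option writer who must buy shares at the market
    price and deliver them at the (lower) strike price."""
    return shares * (market - strike)

def issuer_forgone_proceeds(shares: int, strike: int, market: int) -> int:
    """Proceeds forgone by a company issuing its own new shares at the
    strike price instead of selling them at market. Same figure as the
    uncovered writer's loss, but it is never owed as cash."""
    return shares * (market - strike)

# 10,000 options, $10 strike, price rises to $1,000
print(uncovered_call_loss(10_000, 10, 1_000))       # 9900000 (cash out the door)
print(issuer_forgone_proceeds(10_000, 10, 1_000))   # 9900000 (opportunity cost only)
```

The point of the identical formulas is exactly the comment's: the dollar amount is the same either way; what differs is whether it is a liability (uncovered writer) or merely forgone upside (issuer of its own stock).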

ekjhgkejhgk | Apr 2, 2026
> Interesting that this quote was initially about stock options at tech companies. It turned out that stock options did become nearly universal in tech compensation, and companies that granted them outcompeted companies that did not. So the management that was ostensibly “doing a massive blag at the expense of shareholders” wasn’t really, time vindicated their practices and things like option backdating and not treating them as an expense weren’t even really necessary, but it took a few years. It wasn’t obvious in 2002 that this is how it would play out.

I happen to have read probably everything that Warren Buffett wrote on this subject, and in my opinion your take is confused at best.

First, you say that “stock options did become nearly universal“. No, they were already nearly universal at the time this conversation was happening. I remember Warren Buffett quoting, going by memory, something like all but 3 out of 500 S&P companies doing it, or Nasdaq or whatever index he was talking about. The fact that almost all companies do it doesn't mean it's the right thing to do, and if almost no company did it, Buffett wouldn't be complaining about it.

Second, you say “companies that granted them outcompeted companies that did not“. I literally have no idea how you came to this conclusion since, like I said, at the time this conversation was going on almost all companies did it. Not because the companies that didn't do it died out, but because companies that didn't do it switched to doing it.

Third, and most important, I believe you misunderstand what the conversation is about. Expensing stock options is not a competitive advantage. Granting stock options might be; the rationale that paying management and staff more attracts the best people is an argument worth having. But the conversation isn't about whether it's a good idea to grant stock options; it's about under which entry you should put stock options when preparing financial statements. The author says clearly that this is about accounting, but you missed that. There's no competitive advantage in doing it one way or the other. The reason Buffett complains about them is that (a) it makes it harder to discern from financial statements how much staff is costing the shareholders, not that it's a competitive advantage or disadvantage, and (b) if there's a cost that you need to pay in order to run the business, that's called an expense; by your own argument you need stock options to run the business, therefore those are expenses, and that's how they should be labeled in the income statement. The argument of the companies doing it is that “earnings“ is bigger if there are things which are expenses but you don't call them that. It's literally saying that pnl = P - L, but you know what, it's bigger if I just report the P and hide the L.

asa400 | Apr 3, 2026
> and companies that granted them outcompeted companies that did not

What are you basing this claim on?

nostrademons | Apr 3, 2026
That of the top 10 companies in the S&P 500 [1], all but Broadcom and Berkshire Hathaway give generous stock options, and also that of the top 10 in 2000 [2], only one (Microsoft), maybe 2 (Cisco) did. If you look at change in index composition, or even total earnings by company, you'll see a very steady and dramatic replacement of companies that did not spread the wealth through stock options & RSUs with companies that did.

[1] https://www.slickcharts.com/sp500

[2] https://www.visualcapitalist.com/ranked-the-largest-sp-500-c...

raw_anon_1111 | Apr 3, 2026
Conflating stock options and RSUs?
sublinear | Apr 2, 2026
> My reasoning was that Powell, Bush, Straw, etc, were clearly making false claims and therefore ought to be discounted completely, and that there were actually very few people who knew a bit about Iraq but were not fatally compromised in this manner who were making the WMD claim

At the risk of missing the point, I have to say that knowing what we know now, this is a very poor heuristic. Predicting a lack of WMD was not only correct by mere coincidence, but also irrelevant to the decisions made about the war in Iraq.

What is this blog post even saying? When you can't distinguish a lie, trust the room vibes? Seeking comfort won't give you any answers or get you closer to the truth.

Not enough people ask "why". They instead argue about effectiveness or correctness. At some point you have to determine whether you're chasing the truth to make a decision or just for its own sake. In the vast majority of cases what you want is a decision that will produce the desired results. That's the real reason why lies happen and why merely knowing the truth doesn't get you anywhere and often nobody cares.

EDIT: for the sanity of any late replies. My bad. I replaced the part about AI with something I thought was more interesting.

awesome_dude | Apr 2, 2026
> Right now, we have a similar situation with AI. Not enough people are asking why AI is being pushed so hard. Instead they pointlessly bicker about its effectiveness.

We know why it's being pushed so hard - people need a return on all that money being burnt.

Its effectiveness is argued about because it's not clear one way or the other where things are, where they are heading, and where they will end up.

There has been a strong push for AI/AGI since before modern computing, so every time there's a breakthrough to the next level there's a hype wagon doing the rounds, followed by an "oh, actually it's not there yet" - and this time, like every other time, we go through an "is this the time? It's so tantalisingly close"

Are we actually there now? Emphatically no.

Are we at a point where it's usable and improving our lives - yes, with a PILE of caveats.

Edit: I wanted to add

There are always "true believers" whenever there is a fork in the road, and con artists looking to take advantage of them, but that happens whether there is a genuine breakthrough or not - the hype is never a guide to whether the breakthrough exists or not, so purely being a sceptic isn't worthwhile (IMO)

projektfu | Apr 2, 2026
It pretty clearly says, "Do not give liars the benefit of the doubt with respect to their current claims." If you want to believe there are WMDs in Iraq, do it because you have evidence, or at least the word of trustworthy people. Don't assume that there has to be a little fig leaf WMD in Iraq because the Emperor wouldn't really go out in public naked.

Was it immaterial to the fact that we were going to war, regardless of the effectiveness of the "sell"? Yes, that's true, but it gives a lot of cover to the Bush administration that so many people, including 110 Democratic congressmen, voted for the authorization to use military force.

Why is it being re-posted now? Who knows... AI, Iran, whatever.

pjc50 | Apr 2, 2026
> WMD was not only correct by mere coincidence, but also irrelevant to the decisions made about the war in Iraq.

This was the stated purpose of the war! If Bush and Blair had said "there are no WMD in Iraq", the war would not have happened.

mnmnmn | Apr 2, 2026
“Truth doesn’t get you anywhere” dumbest shit I ever heard. Are you 10?
wat10000 | Apr 3, 2026
How was predicting a lack of WMD correct by mere coincidence? He ignored the blatant liars, believed people with a good record on the subject, and got it right as a result. That's not coincidence, that's an excellent heuristic.

It is a bit of a weird article, though. Correctly predicting Iraq isn't some amazing feat. All it required was paying some vague attention to the available facts. The question is not how some people got it right. The question is how so many people did not.

appstorelottery | Apr 2, 2026
Having worked in public advocacy advertising, I’d frame it like this: “Good ideas don’t need lies” is a compelling ideal but in practice, public acceptance isn’t a reliable signal of truth or societal benefit. It depends on incentives, narratives, and how information is presented.

History shows that even harmful or suboptimal ideas (like coal power) can gain widespread support if presented persuasively, while genuinely beneficial ideas can struggle if they’re complex or unintuitive.

A useful heuristic is: if an idea relies on misleading claims to survive scrutiny, that’s a warning sign. But public acceptance itself is not proof of goodness or correctness.

In short: persuasion and truth are related—but far from identical.

amarant | Apr 2, 2026
I think you just reinforced the article's point. Coal power needs lots of lies to justify it, as per your own statement.

That is in fact because coal energy is a terrible idea. It has 0 upsides compared to renewable alternatives, and is on the whole worse than even other non-renewable alternatives.

If you have to lie to make it sound good, that's probably because it isn't actually good

appstorelottery | Apr 2, 2026
I think we’re still talking past each other. I’m not arguing that any specific idea is good.

My point is just that public acceptance itself isn’t reliable evidence either way: ideas can gain support (or fail) for reasons other than their actual merit.

amarant | Apr 2, 2026
That's true. The inversion of the quote is that good ideas require lies to lose public acceptance.

Well, there's a lot of lies flying around lately. So it happens.

rdiddly | Apr 2, 2026
You just raised another example of a bad idea that needed lies to gain public acceptance.
appstorelottery | Apr 2, 2026
I’m not making a claim about whether any particular idea is good or bad.

I’m pointing out that the process by which ideas gain acceptance is somewhat independent from their actual quality, so acceptance alone isn’t a strong signal.

mnmnmn | Apr 2, 2026
lol you don’t understand at all
appstorelottery | Apr 2, 2026
I might not have been clear. I’m separating two questions: (1) whether an idea is actually good or true, and (2) how easily it gains acceptance. My point is just that (2) doesn’t reliably answer (1).
aidenn0 | Apr 3, 2026
I think you are missing the point:

Some good ideas might need a whole lot of marketing to catch on. Some bad ideas might need very little. The quote merely argues that if you must deceive people for an idea to catch on, the idea is not good. A corollary is that if you are tempted to lie in your advocacy, you should probably reexamine what you are advocating for.

harikb | Apr 2, 2026
Somewhat counter quote...

"Don’t worry about people stealing your ideas. If your ideas are any good, you’ll have to ram them down people’s throats." -- Howard Aiken

...to mean that, usually, the good ideas are the crazy sounding ones...

jayd16 | Apr 2, 2026
Well there's a survivor bias that I think plays into the quote.

If it's a good idea that's obvious, it's already used widely. If it's not obvious, you'll still have to convince people. None of that requires lots of lies, though.

didgetmaster | Apr 2, 2026
This is what scares me the most about AI. You have a handful of really big companies trying to outdo each other as they race to implement it and deploy it as quickly as possible.

To try and justify their outrageous capital spending on data centers, they are incentivised to exaggerate its current capabilities and also what it will be capable of 'soon'.

There is no time to evaluate each step to make sure it is accurate and going in the right direction, before setting it loose on the public.

jcgrilloApr 2, 2026
I guess a counterpoint might be Apple's "strategy". Scare quotes because I truly don't know if it was deliberate or just a happy accident. But somehow they've managed not to get so heavily exposed to the downside risk: if the wild claims about AI don't pan out, they're not going to lose very much compared with the other megacorps.
GigachadApr 3, 2026
Apple's plan has been pretty obvious. They invested in small, locally running features that provide modest utility rather than massive hosted models that cost a fortune and aren't profitable.

There also doesn’t seem to be much risk in falling behind. If you wait longer you can skip buying the now obsolete GPUs and training the now obsolete models.

strongpigeonApr 3, 2026
They did invest a ton in their Private Cloud Compute, though, and are barely using its capacity.

https://9to5mac.com/2026/03/02/some-apple-ai-servers-are-rep...

mnmnmnApr 2, 2026
Burden of proof is on the cucks who ever believed a simp like Dubya in the first place. I’m more curious how could THEY get everything so WRONG. All those dumb marks who led to the murder of a million Iraqis should show us their pathetic reasoning; trusting an obvious fool is never defensible.
ForHackernewsApr 2, 2026
Good maxim with general applicability: cryptocurrency, wars in the middle east, online age verification checks.
roenxiApr 2, 2026
It is also a useful trick to keep in mind the opposite of critical thinking - following the herd. Just copying everyone around you is often a great strategy. So good that even if everyone around you is making mistakes, it can still be the dominant strategy (there is a reason a lot of people who don't like war are cowed into silence when war fever descends). Most people are using it.

That implies that it is ridiculously easy to be right when everyone else is wrong. People aren't trying to be right. Any sort of principle-based analysis easily outperforms the herd. When leaders in society start lying that is indeed one of those situations. Pretty much any situation where everyone knows something and the hard statistics are telling a different story is.

The more pressing problem is how to go from a lovable Cassandra to someone who can preempt major events and convince the herd to not hurt itself in its confusion. Coincidentally that is how markets work, people who have a habit of being right are given full powers to overrule the mob and just do what they want. Markets don't care if everyone believes something. They care if people who got the calls right last time believe something.

In this case, the US hasn't seen a good outcome to a war since something like WWII and even there they waited until the war was mostly over and the major participants in the European theatres were exhausted before getting involved. The record is pretty bad. Iraq was an easy call to anyone who cares about making accurate predictions.

pinkmuffinereApr 2, 2026
> That implies that it is ridiculously easy to be right when everyone else is wrong

I think this is true but misleading: conditioned on other people going with the herd in the wrong direction, it is easy to be right. However, often the herd is going in a right (or at least acceptable) direction. The continual effort to check whether the herd is going in the right direction _is not_ easy. If a magic eight ball could alert you "hey, the herd is wrong right now, take a closer look", that would be great! But we have no such magic eight ball.

wredcollApr 3, 2026
Very often the fact that the herd is going in that direction causes it to become the right direction, e.g. the stock market.
qseraApr 3, 2026
>magic ball

The comment mentioned it..

>principle-based analysis..

energy123Apr 3, 2026
> the US hasn't seen a good outcome to a war since something like WWII

Korean War, Panama, and Gulf War were rather successful, resulting in a (more) US-aligned and prosperous South Korea/Panama/Gulf. Without these wars, South Korea wouldn't exist, Panama would probably still be a dictatorship, Saddam Hussein would control Kuwait and the US would have significantly less influence among the GCC.

rc_mobApr 2, 2026
My opinion is that lies have no relevance to whether an idea is good or bad. Liars lie and honest people are honest. Ideas are not people and do not care who it was that thought them up.
whackernewsApr 2, 2026
> This site uses cookies from Google to deliver its services and to analyse traffic. Your IP address and user agent are shared with Google, together with performance and security metrics, to ensure quality of service, generate usage statistics and to detect and address abuse.

I’m kind of conflicted and confused faced with this message on this site, especially with the content of the post. Can someone explain why I’m feeling annoyed?

6510Apr 3, 2026
It's a generic blogger blogspot cookie banner. It's a free blogging platform but you can attach your own domain to it. (not sure about hosting)

For example: http://fototour.blogspot.com

convexlyApr 2, 2026
The flip side is that good ideas with honest framing often lose to bad ideas with better marketing. Being right isn't enough if you can't communicate it, and most people don't have the patience to evaluate the honest version.
KennyBlankenApr 3, 2026
Is "better marketing" a euphemism for "more willing to misrepresent the other side, turn a practical issue into an emotional one, and generally trot out every logical fallacy they can think of"?

Because that's what I see constantly on social media in response to progressive ideas the rest of the world has largely accepted but apparently just can't possibly work in the United States...

cm11Apr 3, 2026
I'd add (not saying you said otherwise) that marketing bad ideas well isn't quite the same as good communication. I guess a funny thing is that the more naive or blind or optimistic one is, the more one might wiggle their way out of some definitions of “liar.” If they're good at lying to themselves, maybe it doesn’t count as lying to others.
phpnodeApr 2, 2026
The author of this post has a book called Lying for Money, which I'd definitely recommend.
fn-moteApr 3, 2026
Date is actually (2004). The (2008) was just a single paragraph update (now immaterial) added at the top in response to making the news back then.
idontwantthisApr 3, 2026
Something interesting about my experience of the Iraq War was that, as a 9-year-old at a wealthy, liberal private school in DC, everyone around me knew from the beginning that it was all a lie. I only learned fairly recently that a vast majority of the country thought invading Iraq was a good idea.
mlazosApr 3, 2026
Idk, I think they kind of do. Tackling climate change is a good idea, and at this point I'm ready to lie if it will make people do something.
r14cApr 3, 2026
I think addressing climate change is more of a policy issue. Most people are on board with addressing it (modulo some cranks who are open to the idea that the earth is flat, and oil company CEOs, I guess). It just doesn't translate into policy due to corruption.
pmg101Apr 3, 2026
I'm not sure. Don't underestimate the power of inertia.

I bought an EV last year and it's definitely been a "good idea" for me. Luxurious ride, fuel costs a tenth, doesn't stink.

Seems such an obvious upgrade that it slightly confuses me that take-up hasn't been quicker among people who, like me, can charge at home.

IshKebabApr 3, 2026
> it slightly confuses me take-up hasn't been quicker among people who like me can charge at home

No big mystery: a) most people keep cars for 20 years or more, b) most people don't buy new cars - they're too expensive.

TepixApr 3, 2026
What's the source for the 20 years? Seems unlikely to me.

b) You can also buy used EVs. I bought one last year, 2 years old, for 56% of the original price.

n4r9Apr 3, 2026
Is there not a big image factor? Like, people don't want to be seen as the type of person who'd drive an electric.
pjc50Apr 3, 2026
The average car in the UK is ten years old: https://www.racfoundation.org/media-centre/average-car-in-th... ; while I have an 18 year old car, I'm an outlier on my street.

It's true that most people don't buy new, but there's still 20% of purchases new, and a steady flow of EVs into the used market.

ErroneousBoshApr 3, 2026
I looked into buying an EV but it turns out it would be a little over four times as expensive as running an old Landrover with a massive V8 engine.

So I still run an old Landrover with a massive V8 engine.

imtringuedApr 3, 2026
EVs got really good in the last two years. If you bought anything that wasn't a Tesla ten years ago, you were faced with some pretty bad compromises if you insisted on driving an EV.
pjc50Apr 3, 2026
The original Leaf looks like a Nokia 3210 compared to a current-gen iPhone. The tech has been moving at high speed and continues to do so.
thinkingemoteApr 3, 2026
What are the arguments being used to sell electric cars?

I would say that investing oneself and one's money in an idea changes how one feels about the choice that was made. Those who aren't invested might be quicker than those who are to suspect that some of the arguments contain untruths.

(For me inertia does play a role too! "Why fix what's not broken")

pjc50Apr 3, 2026
This feels irrelevant - it's not an idea in whose favor lots of lies are being told, is it? (Quite a lot of misapprehensions against it, though.)
austin-cheneyApr 3, 2026
I completely disagree with the title. Good ideas should be either self-evident or at least become clearly apparent with strong evidence. Typically that is not the case.

People are social critters that typically fear originality, and not by just a little bit either. Most people find originality repulsive no matter how well qualified a good idea may be. Instead, most people look towards social validation. A socially validated bad idea enjoys vastly greater acceptance among most people than any good idea does.

We all like to think we are rational actors, but most of us aren’t and never will be.

renewiltordApr 3, 2026
This explains why Fauci lying to the American people about masks so that they'd be "available for healthcare workers" landed so poorly.