This reads simply as an “Our Incredible Journey” type of post, but written for a person rather than a company.
piker•Feb 15, 2026
Did this guy just exit the first one man billion-dollar startup for... less than a billion?
softwaredoug•Feb 15, 2026
I literally had begun to wonder if OpenClaw had more of a future as a company than OpenAI
senko•Feb 15, 2026
How do you know it was for less than a billion?
piker•Feb 15, 2026
The sentence ended with a question mark.
senko•Feb 15, 2026
I don't know the answer, but considering Meta (known for 100m+ offers) was in the rumors, and he mentions multiple labs (and many investors), and all the hype around openclaw ... I can easily see 9 figures, and would not be surprised by a 1b+ "signing bonus", perhaps in the equivalent number of OpenAI shares.
mjr00•Feb 15, 2026
... Why would they pay 9 figures? It's not like Openclaw required specialized PhD-level knowledge held by <1000 people in the world to build, and that's what Meta and the other AI labs are paying ludicrous salaries for. Openclaw is a cool project and demonstrates good product design in the AI world, but by no means is a great product manager worth 1 billion dollars.
senko•Feb 15, 2026
1) there is only one OpenClaw and only one Peter;
2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google
3) the top-paid people in this world are not PhDs
4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year)
5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in openai stock, it's free.
need I continue?
smnplk•Feb 15, 2026
please do continue. I like your points.
mjr00•Feb 15, 2026
> 1) there is only one OpenClaw and only one Peter;
Again, Peter is a good/great AI product manager but I don't see any distinguishing skills worth a billion dollars there. There's only one Openclaw but it's also been a few weeks since it came into existence? Openclaw clones will exist soon enough, and the community is WAY too small to be worth anything (unlike, say, Instagram/Whatsapp before being acquired by Facebook)
> 2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google
True, but not worth $100 million to $1 billion
> 3) the top paid people in this world are not phds
The people getting massive compensation offers from AI companies are all AI-adjacent PhDs or people with otherwise rare and specialized knowledge. This is unrelated to people who have massive compensation due to being at AI companies early. And if we're talking about the world in general, yes the best thing to do to be rich is own real estate and assets and extract rent, but that has nothing to do with this compensation offer
> 4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year)
Investments have a probable ROI, what's the ROI on a product manager?
> 5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in openai stock, it's free.
99.999999% of the world has not heard of Openclaw, it's extremely niche right now.
fragmede•Feb 15, 2026
Math is fun!
There are roughly 8.1 billion humans, so if 99.999999% (8 nines) of the world hasn't heard of OpenClaw, that leaves only 81 people who have. There were way more than 81 people at the OpenClaw hackathon at the Frontier Tower in San Francisco, so at least that many people have heard of OpenClaw. If we guess 810 people know about OpenClaw, then it means that 99.99999% (7 nines) of humanity has not heard of it.
If we take it down to 6 nines, then that's roughly 8,100 people having heard of OpenClaw, and that 99.9999% of humanity has not.
So I think you're wrong when you say "99.999999% of the world has not heard of Openclaw". I'd guess it's probably around 99.9999% to 99.9999999% that hasn't heard of it. Definitely not 99.999999% though.
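The nines arithmetic above is easy to check in a couple of lines of Python (the 8.1 billion world-population figure is the rough estimate used in this thread):

```python
# How many people are left over once "N nines" of the world population
# is excluded? E.g. 8 nines excluded -> a fraction 1/10**8 remains.
def people_outside_nines(nines: int, population: int = 8_100_000_000) -> float:
    return population / 10**nines

for n in (6, 7, 8):
    print(f"{n} nines -> {people_outside_nines(n):g} people have heard of it")
```

This prints 8100, 810, and 81 for 6, 7, and 8 nines respectively, matching the figures in the comment.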
senko•Feb 15, 2026
To preface, I don't claim he will absolutely get that much money - but I wouldn't be surprised.
On the topic of brand recognition, 0.000001% of the world is 80 people (give or take). OpenClaw has ~200k GitHub stars right now.
On a more serious note, the world doesn't matter: the investors, big tech ceos, analysts do. Cloudflare stock jumped 10% due to Clawdbot.
Hype is weird. AI hype, doubly so. And OpenAI are masters at playing the game.
what•Feb 16, 2026
Why would cloudflare stock jump due to clawdbot?
senko•Feb 16, 2026
> Cloudflare Inc (NYSE:NET) shares rose more than 10% on Monday to $193.68, following a weekend surge in social media excitement around Clawdbot, an open-source AI agent built on Anthropic’s Claude. This jump comes despite the stock having fared poorly over the last month, according to InvestingPro data.
By that logic, was Alexandr Wang worth a billion or so?
the_mar•Feb 16, 2026
he was worth 14B
orsorna•Feb 15, 2026
Was the project really ever valued that high? Seems like something that can be easily replicated and even properly thought out (re: pi). This guy just ran the social media hype train the right way.
bbor•Feb 15, 2026
Wasn't this the same guy that responded with a shrug to thousands of malware packages on their vibe-repo? I'd say an OpenAI signing bonus is more than enough of a reward to give up that leaky ship!
manmal•Feb 15, 2026
Clawhub was locked down, I couldn’t publish new skills even as a previous contributor. Not what I’d call a shrug.
Barbing•Feb 15, 2026
I missed Clawhub—y’all following anywhere besides HN? Is it all on that Twitter site?
linkregister•Feb 15, 2026
Reminds me of Facebook, there was nothing particularly interesting about a PHP app that stored photos and text in a flat user environment.
Yet somehow the network effects worked out well and the website was the preeminent social network for almost a decade.
CuriouslyC•Feb 15, 2026
Except in this case there's no network effect for autonomous agents. In fact, Peter is going to be working mostly on an OpenAI locked down, ecosystem tied agent, which means it's going to be worse than OpenClaw, but with a nicer out of the box experience.
fragmede•Feb 15, 2026
If you're on OpenAI, and I'm on Anthropic, can we interoperate? What level are we even trying to interoperate on? The network effect is that, hey, my stuff is working here, your stuff is working over there. So do we move to your set of tools, or my set of tools, or do we mishmash between them, as our relationship and power dynamics choose for us.
CuriouslyC•Feb 15, 2026
I'd describe that as platform lock-in rather than the network effect.
rockwotj•Feb 15, 2026
Technology does not determine the success of a company. I’ve seen amazing tech fail, and things strapped together with duct tape and bubblegum be a wild success.
bdangubic•Feb 15, 2026
facebook is still preeminent social network today
jatari•Feb 15, 2026
The instant someone makes a better version of openclaw -literally- everyone is going to jump ship.
There is no lock in at all.
Gigachad•Feb 15, 2026
Social media is the king of network effects. Almost nothing else compares. See how quickly people drop AI products for the next one that does the same thing but slightly better. To switch from ChatGPT to Gemini I don't have to convince all of my friends and family to do the same.
Sateeshm•Feb 16, 2026
> Social media is the king of network effects. Almost nothing else compares.
Ecommerce is close second
wiseowise•Feb 16, 2026
> See how quickly people drop AI products for the next one that does the same thing but slightly better.
> To switch from ChatGPT to Gemini I don't have to convince all of my friends and family to do the same.
Except Gemini is a complete joke that can’t even complete a request on iOS unless you keep the screen unlocked or keep the app in the foreground. So I’m not sure how it proves your point.
Gigachad•Feb 16, 2026
Even if this was true, it would simply be a software bug that could be resolved. Not an example of network effects. My use of an AI product is not impacted by what my friends and family use.
koakuma-chan•Feb 15, 2026
It's kind of crazy that this kind of thing can cause so much hype. Is it even useful? I just really don't see any utility in being able to access an LLM via Telegram or whatever.
bfeynman•Feb 15, 2026
the ability to almost "discover" or create hype is highly valued, despite it mostly being luck and one-hit wonders... See the many apps that went viral, got quickly acquired, and then just hemorrhaged. Openclaw is cool, but not for the tech, just for some of the magic of the oddities and catching on somehow, and acquiring is betting that they can somehow keep doing that again.
CuriouslyC•Feb 15, 2026
In Asia people do a big chunk of their business via chatbots. OpenClaw is a security dumpster fire but something like OpenClaw but secure would turbocharge that use case.
If you give your agent a lot of quantified self data, that unlocks a lot of powerful autonomous behavior. Having your calendar, your business specific browsing history and relevant chat logs makes it easy to do meeting prep, "presearch" and so forth.
lufenialif2•Feb 16, 2026
Curious how you make something that has data exfiltration as a feature secure.
CuriouslyC•Feb 16, 2026
Mitigate prompt injection to the best of your ability, implement a policy layer over all capabilities, and isolate capabilities within the system so if one part gets compromised you can quarantine the result safely. It's not much different than securing human systems really. If you want more details there are a lot of AI security articles, I like https://sibylline.dev/articles/2026-02-15-agentic-security/ as a simple primer.
SpicyLemonZest•Feb 16, 2026
Nobody can mitigate prompt injection to any meaningful degree. Model releases from large AI companies are routinely jailbroken within a day. And for persistent agents the problem is even worse, because you have to protect against knowledge injection attacks, where the agent "learns" in step 2 that an RPC it'll construct in step 9 should be duplicated to example.com for proper execution. I enjoy this article, but I don't agree with its fundamental premise that sanitization and model alignment help.
CuriouslyC•Feb 16, 2026
I agree that trying to mitigate prompt injection in isolation is futile, as there are too many ways to tweak the injection to compromise the agent. Security is a layered thing though, if you compartmentalize your systems between trusted and untrusted domains and define communication protocols between them that fail when prompt injections are present, you drop the probability of compromise way down.
krethh•Feb 16, 2026
> define communication protocols between them that fail when prompt injections are present
There's the "draw the rest of the owl" of this problem.
Until we figure out a robust theoretical framework for identifying prompt injections (not anywhere close to that, to my knowledge - as OP pointed out, all models are getting jailbroken all the time), human-in-the-loop will remain the only defense.
CuriouslyC•Feb 16, 2026
Human in the loop isn't the only defense, you can't achieve complete injection coverage, but you can have an agent convert untrusted input into a response schema with a canary field, then fail any agent outputs that don't conform to the schema or don't have the correct canary value. This works because prompt injection scrambles instruction following, so the odds that the injection works, the isolated agent re-injects into the output, and the model also conforms to the original instructions regarding schema and canary is extremely low. As long as the agent parsing untrusted content doesn't have any shell or other exfiltration tools, this works well.
krethh•Feb 16, 2026
This only works against crude attacks which will fail the schema/canary check, but does next to nothing for semantic hijacking, memory poisoning and other more sophisticated techniques.
CuriouslyC•Feb 16, 2026
With misinformation attacks, you can instruct your research agent to be skeptical and thoroughly validate claims made by untrusted sources. TBH, I think humans are just as likely to fall for these sorts of attacks if not more so, because we're lazier than agents and less likely to do due diligence (when prompted).
SpicyLemonZest•Feb 16, 2026
Humans are definitely just as vulnerable. The difference is that no two humans are copies of the same model, so the blast radius is more limited; developing an exploit to convince one human assistant that he ought to send you money doesn't let you easily compromise everyone who went to the same school as him.
fooster•Feb 16, 2026
Show me a legitimate practical prompt injection on opus 4.6. I read many articles but none provide actual details.
These papers have example prompt-injection datasets you can mine for examples. Then apply the techniques used in provider-specific jailbreaks from Pliny to the template to increase the escape success rate.
I think a lot of this is orchestrated behind the scenes. The above author has taken money from AI companies, since he’s a popular “influencer”.
And it makes a lot of sense - there’s billions of dollars on the line here and these companies made tech that is extremely good at imitating humans. Cambridge analytica was a thing before LLMs, this kinda tool is a wet dream for engineering sentiment.
Nextgrid•Feb 15, 2026
There's been some crypto shenanigans as well that the author claimed not to be behind... looking back at it, even if the author indeed wasn't behind it, I think the crypto bros hyping up his project ended up helping him out with this outcome in the end.
nosuchthing•Feb 16, 2026
Can you elaborate on this more or point a link for some context?
Nextgrid•Feb 16, 2026
Some crypto bros wanted to squat on the various names of the project (Clawdbot, Moltbot, etc). The author repeatedly disavowed them and I fully believe them, but in retrospect I wonder if those scammers trying to pump their scam coins unwittingly helped the author by raising the hype around the original project.
nosuchthing•Feb 16, 2026
either way there's a lot of money pumping the agentic hype train with not much to show for it other than Peter's blog edit history showing he's a paid influencer and even the little obscure AI startups are trying to pay ( https://github.com/steipete/steipete.me/commit/725a3cb372bc2... ) for these sorts of promotional pump and dump style marketing efforts on social media.
In Peter's blog he mentions paying thousands of dollars a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?
mlrtime•Feb 16, 2026
What is your point exactly. He seemed very concerned about the issue, he said he did not tolerate the coin talks.
What else would he or anyone do if someone is tokenizing your product and you have no control over it?
Nextgrid•Feb 16, 2026
I just made the observation that whoever was behind it, it ultimately benefited the author in reaching this outcome.
Rebelgecko•Feb 15, 2026
A lot of the functionality I'm not using because of security concerns, but a lot of the magic comes down to just having a platform for orchestrating AI agents. It's honestly nice just for simple sysadmin stuff "run this cron job and text me a tl;dr if anything goes wrong" or simple personal assistant tasks like "remind me if anyone messaged me a question in the last 3 days and I haven't answered".
It's also cool having the ability to dispatch tasks to dumber agents running on the GPU vs smarter (but costlier) ones in the cloud
lofaszvanitt•Feb 16, 2026
but why?
Rebelgecko•Feb 16, 2026
Because it's the easiest way for me to accomplish those tasks (but open to suggestions if you have any)
james_marks•Feb 16, 2026
“Just” is doing some heavy lifting here.
hu3•Feb 15, 2026
Where do you guys get the 1b exit from? I didn't see numbers yet.
geerlingguy•Feb 15, 2026
It's AI. Take a sane number, add a 14,000x multiplier to that. And you'll only be one order of magnitude off in our current climate.
fdsvaaa•Feb 15, 2026
you can also take annualized profit run rate times negative 14,000.
merlindru•Feb 16, 2026
probably an order of magnitude too low rather than too high as well :P
mentalgear•Feb 15, 2026
how is it a "startup" if all the IP is open source? Seems like OpenAI is just buying hype to keep riding their hype bubble a little longer, since they are in hot water on every other front ($20 billion revenue vs $1 trillion in expenses and obligations, Sora 2 user retention dropping to 1% of users after 1 month of usage, dense competition, all the actual founding ML scientists having jumped ship a long time ago).
hadlock•Feb 15, 2026
Everyone is going to have their own flavor of Open Claw within 18 months. The memory architecture (and the general concept of the multi-tiered system) is open source. There's no moat to this kind of thing. But OpenAI is happy to trade his star power for money. And he might build something cool with suddenly unlimited resources. I don't blame the guy. OpenAI is going to change hands 2-3 times over the next 5 years but at the end of the day he will still have the money and equity OpenAI gave him. And his cool project will continue on.
casualscience•Feb 16, 2026
what is the memory architecture, doesn't this already exist in claude code?
elicash•Feb 16, 2026
Maybe there's a liability moat where large companies can't ship something that's risky enough to be useful?
elxr•Feb 16, 2026
The fact that 1 billion is the threshold you chose to highlight shows the ridiculousness of this industry.
Openclaw is an amazing piece of hard work and novel software engineering, but I can't imagine OpenAI/anthropic/google not being able to compete with it for 1/20th that number (with solid hiring of course).
piker•Feb 16, 2026
It was more of a reference to the YC partner who suggested a one-man unicorn was on the horizon due to AI.
ttul•Feb 16, 2026
The game theory here is that either OpenAI acquires this thing now, or someone else will. It doesn't matter whether they could replicate it. All of the major players can and probably will replicate OpenClaw in their own way and make their thing incredibly scalable and wonderful. But OpenClaw has a gigantic following and it's relevant in this moment. For a trivial amount of money (relatively speaking), OpenAI gets to own this hype and direct it toward their models and their apps. Had they not succeeded here, Anthropic or Google would have gladly directed the hype in their direction instead, and OpenAI would be licking its wounds for some time trying to create something equivalently shiny.
It was a very good play by OpenAI.
r0b05•Feb 16, 2026
I tend to agree. I don't know whether it's Altman or someone else who makes these deals, but OAI have made some brilliant moves and partnerships. Anthropic's tech is great, but OAI makes great business moves.
thatsit•Feb 16, 2026
Altman is the startup idea king. He should definitely know his moves.
Even worse for Anthropic is the renaming from Clawd to OpenClaw. It is almost comical that Peter had to rename it and now it sounds more like OpenAI
r0b05•Feb 16, 2026
I know right, it looks like they basically sent him to OAI and I'm sure Altman knows it!
One thing I will say is that this competition is good.
ytNumbers•Feb 16, 2026
Apparently, it was Meta that was the other main contender to hire him. Mark Zuckerberg was impressed by OpenClaw, but, I guess OpenAI wound up outbidding him. It is surprising that Anthropic and Google had little interest.
ass22•Feb 16, 2026
Eerm, because they are focused? I'm still not getting the hype behind this project and I'm more convinced it's been manufactured.
fishingisfun•Feb 16, 2026
then explain why google paid 33 billion for a 5 year old israeli cybersecurity startup
gip•Feb 16, 2026
I think that’s fair.. building a competing product would likely be relatively easy and inexpensive. But that’s true for most software now: it’s becoming easier to build, and the barriers to entry are lower.
I love Anthropic and OpenAI equally but some people have a problem with OpenAI. I think they want to reposition themselves as a company that actively supports the community, open source, and earns developers’ goodwill. I attended a meeting recently, and there was a lot of genuine excitement from developers. Haven't seen that in a long time.
m00dy•Feb 16, 2026
novel software engineering ? Did you look at the code ?
rlt•Feb 16, 2026
Is it really that amazing? It’s a pretty simple idea, and seemed pretty buggy when I tried it out.
PUSH_AX•Feb 16, 2026
> Openclaw is an amazing piece of hard work and novel software engineering
Have you tried using it?
Aurornis•Feb 16, 2026
I keep reading takes about OpenClaw being acquired, but even the TLDR at the top makes it clear that OpenClaw isn’t part of this move:
> tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
I’m sure he got a very generous offer (congrats to him!) but all of the hot takes about OpenClaw being acquired are getting weird.
dbbk•Feb 16, 2026
No because this was not a billion dollar business
Yizahi•Feb 16, 2026
Yeah, shows he is smart, in the current market state.
ifwinterco•Feb 16, 2026
We're at the point in the cycle where if someone offers you decent money you take it.
It might run on for a while longer but you don't want to be that guy who had a £100m net worth in 1999 but failed to monetise any of it and ended up with nothing
dist-epoch•Feb 15, 2026
Haters gonna hate, but bro vibe-coded himself into being a billionaire and having Sam Altman and Zuck personally fight over him.
orsorna•Feb 15, 2026
Proof you can get hired off of a portfolio where you've never even viewed a single line of code from it. Definitely feel a mix of envy and admiration.
embedding-shape•Feb 15, 2026
It was never really about the code itself anyways.
mrshu•Feb 15, 2026
To be fair, it's not like he did not read a single line of code that ended up being generated.
With OpenClaw we are seeing how the app layer becomes as important as the model layer.
You can switch models multiple times (online/proprietary, open weight, local), but you have one UI: OpenClaw.
baxtr•Feb 15, 2026
Seems like models become commoditized?
verdverm•Feb 15, 2026
Same for OpenClaw, it will be a commodity soon if you don't think it is already
baxtr•Feb 15, 2026
Not sure. I mean the tech yes definitely.
But the community not.
verdverm•Feb 15, 2026
The community is tiny by any measure (beyond the niche), market penetration is still very very early
Anthropic's community, I assume, is much bigger. How hard is it for them to offer something close enough for their users?
filoleg•Feb 16, 2026
> Anthropic's community, I assume, is much bigger. How hard it is for them to offer something close enough for their users?
Not gonna lie, that’s exactly the potential scenario I am personally excited for. Not due to any particular love for Anthropic, but because I expect this type of a tight competition to be very good for trying a lot of fresh new things and the subsequent discovery process of new ideas and what works.
verdverm•Feb 16, 2026
My main gripe is that it feels more like land grabbing than discovery
Stories like this reinforce my bias
elxr•Feb 15, 2026
It's definitely not right now. What else has the feature list and docs even resembling it?
verdverm•Feb 16, 2026
OpenClaw has mediocre docs, from my perspective on some average over many years using 100s of open source projects.
I think Anthropic's docs are better. Best to keep sampling from the buffet than to pick a main course yet, imo.
There's also a ton of real experiences being conveyed on social that never make it to docs. I've gotten as much value and insights from those as any documentation site.
Aurornis•Feb 16, 2026
OpenClaw has only been in the news for a few weeks. Why would you assume it’s going to be the only game in town?
Early adopters are some of the least sticky users. As soon as something new arrives with claims of better features, better security, or better architecture then the next new thing will become the popular topic.
beaker52•Feb 16, 2026
It appears to me that the same people who think “vibe coding” is a great idea, are the same people who think “Gas Town” is the future, and “OpenClaw” detractors are just falling behind.
For your sake, I’m not saying they’re wrong. I’m just pointing out something I’ve noticed.
cyanydeez•Feb 15, 2026
Things that aren't happening any time soon but need to for actual product success built on top:
1. Stable models
2. Stable pre- and post- context management.
As long as they keep mothballing old models and making indeterminate behavior changes, whatever you try to build on them today will be rugpulled tomorrow.
This is all before even enshittification can happen.
altcunn•Feb 16, 2026
This is the underrated risk that nobody talks about enough. We've already seen it play out with the Codex deprecation, the GPT-4 behavior drift saga, and every time Anthropic bumps a model version.
The practical workaround most teams land on is treating the model as a swappable component behind a thick abstraction layer. Pin to a specific model version, run evals on every new release, and only upgrade when your test suite passes. But that's expensive engineering overhead that shouldn't be necessary.
What's missing is something like semantic versioning for model behavior. If a provider could guarantee "this model will produce outputs within X similarity threshold of the previous version for your use case," you could actually build with confidence. Instead we get "we improved the model" and your carefully tuned prompts break in ways you discover from user complaints three days later.
lez•Feb 15, 2026
It has already been so with ppq.ai (pay per query dot AI)
beaker52•Feb 16, 2026
I mean, ppq.ai (which I’ve never heard of) had zero to do with the commoditisation of LLMs. The industry did that. And services like OpenRouter are far more serious and responsible in this area than ppq.ai.
softwaredoug•Feb 15, 2026
Indeed, coding agents took off because of a lot of ongoing trial and error on how to build the harness as much as model quality.
canadiantim•Feb 15, 2026
There are actually many UIs now? See Moltis, Rowboat, and various others that are popping up daily
AlexCoventry•Feb 16, 2026
Are there any with a credible approach to security, privacy and prompt injections?
rlt•Feb 16, 2026
Does any credible approach to prompt injection even exist?
joquarky•Feb 16, 2026
Anyone who figures out a reliable solution would probably never have to work again.
AlexCoventry•Feb 16, 2026
Not that I'm aware of, but I probably won't be interested in these kinds of assistants until there are.
PurpleRamen•Feb 16, 2026
I think the point was about the frequency of switching your frontend. With a proper frontend you can switch the backend on each request if you want, but usually people will stay with one main-interface of their choice. For AI, OpenClaw, Moltic, Rowboat are now such a frontend, but not many will use them all at once.
It's similar to how people usually only use one preferred browser, editor, shell, OS.
bhadass•Feb 16, 2026
openclaw is just one of many now, there are new ones weekly.
mcapodici•Feb 16, 2026
Plus you can get the model to write you a bespoke one that suits your needs.
theturtletalks•Feb 16, 2026
I've been digging into how Heartbeat works in Openclaw to bring it directly into Vibetunnel, another of Peter's projects.
beaker52•Feb 16, 2026
And OpenClaw is nothing revolutionary. It’s all shit we could do before OpenClaw. It’s just that no one was stupid enough to do it. Now everyone has gone crazy.
palata•Feb 16, 2026
AI is the new Javascript?
madeofpalk•Feb 16, 2026
? We saw this years/months ago with Claude Code and Cursor.
miki_oomiri•Feb 16, 2026
But those just code. And they are console/IDE tools.
Openclaw is so so so much more.
Aurornis•Feb 16, 2026
That’s missing the point. OpenClaw is just one of many apps in its class. It, too, will fall out of favor as the next big thing arrives.
pyuser583•Feb 16, 2026
This is the sort of thing employers are failing on. They sign contracts that assume employees are going to be logging in and asking questions directly.
But if I don’t have a url for my IDE (or whatever) to call, it isn’t useful.
So I use Ollama. It’s less helpful, but ensures confidentiality and compliance.
Aurornis•Feb 16, 2026
> You can switch models multiple times (online/proprietary, open weight, local), but you have one UI : OpenClaw.
It’s only been a couple months. I guarantee people will be switching apps as others become the new hot thing.
We saw the same claims when Cursor was popular. Same claims when Claude Code was the current topic. Users are changing their app layer all the time and trying new things.
ryanmcgarvey•Feb 16, 2026
Memory. I have built up so many scripts and crons and integrated little programs and memories with OpenClaw that it would be difficult to migrate to some other system.
System of record and all.
blackoil•Feb 16, 2026
Considering you have built them all in the last few weeks, it should not be that difficult, and there's no reason other systems won't reuse the same ones.
outofpaper•Feb 16, 2026
Exactly! The whole point of personal agents is that the data is yours and it's where you want it not in someone's cloud. What harness you use to work with this should be a matter of preference and not one of lock in.
Gareth321•Feb 16, 2026
The future will be ownership of our memories and data. AI companies will fight tooth and nail to keep that data walled in and impossible to export.
k4rli•Feb 16, 2026
Since it's already not walled-in in most cases I don't see this happening very effectively.
Using openrouter+kilocode I can simply switch between different providers' models and not miss out on anything.
ass22•Feb 16, 2026
If regulators force the capability of exporting to exist, what ya gonna do?
I continue to find it amusing that people think corporates are really holding power. No - they are holding power granted to them by the government of the state.
Remind me why Zuck et al had to kiss the ring.
bigyabai•Feb 16, 2026
Very often, the regulators don't. Here in the US, half the country would refinance their mortgage for iMessage interoperability... if it were possible. Any time regulators reach for the "stop monopoly" button, Tim Cook screeches like a rhesus monkey and drops a press release about how many terrorists Apple stops.
If lobbying was illegal then you might have a point here, but alas.
dtauzell•Feb 16, 2026
How hard do you think it would be for ai to generate all those for some alternative?
ryanmcgarvey•Feb 16, 2026
AI didn't do the work, I did. Building up context is the part we actually have to put work into. I'm not saying it would be impossible, but boy would it be annoying to have to constantly re-teach a new assistant about your whole life.
margalabargala•Feb 16, 2026
"Here's my corpus of records from OpenClaw. Please parse it and organize into your own memories" boom done
Aurornis•Feb 16, 2026
Why are you assuming you'd have to do the work yourself?
This is a perfect use case for a new agent to query the old agent and get the details.
You could have OpenClaw summarize and export them into a format that the new one wants.
Maybe the new agents will be designed to be compatible with OpenClaw's style.
There is no reason to believe that you're locked in to something.
sockaddr•Feb 16, 2026
Sorry but for $5 in credits you can have an agent port over all your bullshit to the next fad. I'll have one port over all my bullshit when the time comes too.
la64710•Feb 16, 2026
You can use OpenClaw to migrate these scripts off OpenClaw.
Topfi•Feb 16, 2026
Unless I am mistaken, that is all plain old markdown, arguably the easiest format there can possibly be to migrate such data.
Heck, that was half the pitch behind Obsidian, even if the project someday ended, markdown would remain. And switching between Obsidian and e.g. Logseq shows the ease of doing so.
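To illustrate how low the switching cost would be if the data really is plain markdown, here is a minimal sketch that copies a folder of markdown memory files into an Obsidian-style vault. The directory layout is hypothetical (OpenClaw's actual on-disk format may differ); the point is only that flat markdown migrates with a few lines of stdlib code.

```python
import shutil
from pathlib import Path

def migrate_markdown(src_dir: str, dest_vault: str) -> int:
    """Copy every .md file from src_dir into dest_vault,
    preserving the relative directory structure.
    Returns the number of files migrated."""
    src, dest = Path(src_dir), Path(dest_vault)
    count = 0
    for md in src.rglob("*.md"):
        target = dest / md.relative_to(src)
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(md, target)  # copy file plus timestamps
        count += 1
    return count
```

Non-markdown files are simply left behind, which is roughly what switching between Obsidian and Logseq looks like in practice: point the new tool at the copied vault and go.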
philipallstar•Feb 16, 2026
This effect isn't that important while the customer base is growing fast.
beardedwizard•Feb 16, 2026
Bring your system to my records.
The irony of systems of record is that if there is more than one, there are effectively none. Just data stuck in silos waiting for compute.
Aurornis•Feb 16, 2026
There will definitely be migration tools.
The new agents might have a feature to query your old agents for a migration.
That said, I find it really hard to believe that you've generated so much work in the past few weeks since OpenClaw launched that you could never migrate to something else. It hasn't been that long.
gmerc•Feb 16, 2026
Have you heard about this thing called AI coding agent....
czhu12•Feb 16, 2026
It’s only been 2 months and there is already a rush of viable alternatives, from smaller, lightweight versions to hosted, managed SaaS alternatives.
I’d suspect the moat here will be just as fragile as every other layer
karmasimida•Feb 16, 2026
Why?
You can literally ask codex to build a slim version for you overnight.
I love OpenClaw, but I really don't think there is anything that can't be cloned.
coffeebeqn•Feb 16, 2026
What’s the moat exactly?
hirako2000•Feb 16, 2026
None. That's why joining openai is the perfect fit.
zombot•Feb 16, 2026
Well, duh.
You being able to go places is the interesting thing, your car having wheels is just a subservient prerequisite.
alansaber•Feb 16, 2026
This has been the case since the beginning of last year imo
TealMyEal•Feb 15, 2026
cant wait for this post to be memoryholed in 6 months when the community is a shell of its former self (no crustacean pun intended)
baxtr•Feb 15, 2026
Are you saying it will be hollowed out?
lvl155•Feb 15, 2026
Never understood the hype. Good for the guy but what was the product really? And he goes on and on about changing the world. Gimme a break. You cashed out. End of story.
worldsavior•Feb 15, 2026
Just connecting social platforms to agents. That's all. Anyone can code it, and the project was obviously vibe coded. For some reason it got viral.
Good for him, but no particular genius involved.
bakugo•Feb 16, 2026
> For some reason it got viral.
The reason is that he paid every AI "influencer" to promote it. Within the span of a week, the project went from being completely unknown to every single techbro jumping on it as the next "thing that will change the world". It also gained around 70k github stars in that time.
In the age of AI, everything is fake.
AJRF•Feb 15, 2026
The amount of negative posts about this on twitter is crazy, I've not seen any positive posts. Jealousy or something else?
minimaxir•Feb 15, 2026
Twitter is not a place for positive posts.
verdverm•Feb 15, 2026
I think people are sad that OpenClaw is now part of Big Ai.
borroka•Feb 15, 2026
After two weeks of viral posts, articles, and Mac Mini buying sprees, it kinda disappeared from people's consciousness (and probably from their tooling as well), as has happened so far with every AI product that was not an LLM.
A couple of months ago, Gemini 3 came out and it was "over" for the other LLM providers, "Google did it again!", said many, but after a couple of weeks, it was all "Claude code is the end of the software engineer".
It could be (and in large part, is) an exciting--and unprecedented in its speed--technological development, but it is also all so tiresome.
anonym00se1•Feb 15, 2026
Just my opinion, but I no longer trust sentiment on X now that Elon is in control.
Aurornis•Feb 15, 2026
Twitter is negative in general, but generally when a project like this gets bought it marks the end of the project. The acquirer always says something about how they don't plan to change anything, but it rarely works that way.
agnishom•Feb 16, 2026
My negativity is for two reasons:
(1) A capable independent developer is joining a large powerful corporation. I like it better when there are many small players in the scene rather than large players consolidating power.
(2) This seems like the celebration of Generative AI technology, which is often irresponsible and threatens many trust based social systems.
wat10000•Feb 16, 2026
Anyone who likes Openclaw will be upset that it’s getting acquired and inevitably destroyed. Anyone who dislikes it will be annoyed that the creator is getting so rewarded for building junk. The only people who would like this are OpenAI fans, if there even are any.
100% jealousy, similar to how anyone who posts a negative reaction to a crypto rugpull scam is just jealous that they didn't get to pull the scam themselves.
yoyohello13•Feb 16, 2026
In this case I think it is largely jealousy, it's just a guy getting a new job at the end of the day.
But come on, negativity around a rugpull is jealousy? Are you so jaded you can't imagine people objecting to the total lack of morality required to do a crypto rugpull? I personally get annoyed about something like Trump Coin because seeing people rewarded for being dirt bags offends my sense of justice. If you need a more pragmatic reason, rewarding dirtbaggery leads to a less safe society.
karmasimida•Feb 16, 2026
I am fine with the founder joining OpenAI; he gets paid regardless.
I am not confident that the open-source version will get the maintenance it deserves, though, now that the founder has already exited. There is no incentive for OpenAI to keep the open-source version better than their future closed-source alternative.
ungovernableCat•Feb 16, 2026
It's a general anxiety about where the industry is headed. Things like marketing, personal branding, and experimenting are increasing in relevance, while things like detailed, meticulous engineering are falling by the wayside.
At least when it comes to these tales of super fast rise to wealth and prominence. Meticulous engineering still matters when you want to deliver scale, but is it rewarded as much?
My feel is that the attention economy is leaking into software. Maybe the classic bimodal distribution of software careers will become increasingly more like the distribution in social-media things like streaming, youtube, onlyfans etc.
qaq•Feb 15, 2026
Good thing Sam has no experience in transforming a foundation into for profit org ...
mocmoc•Feb 15, 2026
flappy bird effect
appplication•Feb 15, 2026
We’re in a hype state where someone can “generate” millions of dollars in value in a month by making a meme prototype that scratches the itch just right, despite having no real competitive moat, application, value proposition or even semblance of a path to one.
The guy is creative, but this is really just following the well known pattern of acquiring/hiring bright minds if only to prevent your competition from doing the same.
mikert89•Feb 15, 2026
Is an agent running on a desktop, with access to Excel, Word, email and Slack, going to replace SaaS?
Add in databases, browser use, and the answer could be yes
This could be the most disruptive software we have seen
stcredzero•Feb 15, 2026
What I want to know: Is the OpenClaw = Open Source aspect secure?
koakuma-chan•Feb 15, 2026
If your SaaS is a CRUD with a shitty UI in React then yes
mikert89•Feb 15, 2026
If the AI model gets better, more and more SaaS can be replaced with an agent and an Excel sheet
BloondAndDoom•Feb 15, 2026
To be fair, if you are not going to interface with your SaaS via a GUI, it can be one big API for all I care. I’ll just chat and automate against it anyway.
oblio•Feb 16, 2026
For people like you: except for the obvious greed, what's the end game?
Are you making anyone's life better? Who will even pay you once most jobs are automated?
At best, it's a defensive move: make money, get hard capital and seek rent after most of society has collapsed?
koakuma-chan•Feb 16, 2026
I mean, he's right, it's easier for users when you can throw AI at the thing, instead of manually clicking through the UI.
BloondAndDoom•Feb 16, 2026
I don’t know how your rhetoric is any different than saying how scribes find a new job now we invented printing.
oblio•Feb 16, 2026
In your mind scribes disappearing, probably 0.1% of the population, is the same thing as up to 50% of the population losing their jobs?
Deindustrialization happened 20-40 years ago and the affected regions are still hit hard.
Also, you're making my point. Utterly heartless.
esafak•Feb 15, 2026
If it replaces SaaS it will replace you too; how else will you collaborate?
Tangokat•Feb 15, 2026
Incredibly depressing comments in this thread. He keeps OpenClaw open. He gets to work on what he finds most exciting and helps reach as many people as possible. Inspiring, what dreams are made of really.
Top comments are about money and misguided racism.
Personally I'm excited to see what he can do with more resources, OpenClaw clearly has a lot of potential but also a lot of improvements needed for his mum to use it.
behnamoh•Feb 15, 2026
He said on Lex Fridman podcast that he has no intention of joining any company; that was a couple days ago.
teaearlgraycold•Feb 15, 2026
Ah but that was before he saw the comp packages. But no judgement. The tool is still open source. Seems like a great outcome for everyone.
He had to keep the grift going until the very last minute.
Der_Einzige•Feb 16, 2026
Lex Friedman is a fraud/charlatan and shouldn’t be listened to.
andxor•Feb 16, 2026
He literally said the exact opposite.
softwaredoug•Feb 15, 2026
Frankly, I hope he maximized the amount of money he made. It's a once in a lifetime opportunity. And nobody knows where AI is headed or if OpenAI even will be in existence in a few years given their valuation and the amount of $ they need to burn to keep up.
mbanerjeepalmer•Feb 15, 2026
Unclear what this truly means for the open version.
We can assume first that at OpenAI he's going to build the hosted safe version that, as he puts it, his mum can use. Inevitably at some point he and colleagues at OpenAI will discover something that makes the agent much more effective.
Does that insight make it into the open version? Or stay exclusive to OAI?
(I imagine there are precedents for either route.)
CuriouslyC•Feb 15, 2026
The OpenAI version will be locked down in a bad way. It'll be ecosystem tied and a lot of the "security" will be from losing control of the harness.
kibibu•Feb 16, 2026
Not sure. It's also plausible that OpenAI wants access to everybody's email, slack, whatsapp, telegram, github source code, whatever else this thing gets hooked up to.
The cry has been for a while that LLMs need more data to scale.
The new Open(AI)Claw could be cheap or free, as long as you tick the box that allows them to train on your entire inbox and all your documents.
krashidov•Feb 15, 2026
What a blunder by Anthropic. We'll see what openclaw turns into and if it sticks around, but still a huge and rare blunder by anthropic
unpwn•Feb 15, 2026
I don't think so; it's trivial to spin up an OpenClaw clone. The only value here is the brand.
rockwotj•Feb 15, 2026
I am sure they made a bid. The blog makes it sound like he talked to multiple labs.
serf•Feb 15, 2026
they're (Anthropic) also the ones who have been routinely rug-pulling access from projects that try to jump onto the cc api, pushing those projects to oAI.
nl•Feb 15, 2026
Do you have any references for that?
AFAIK Anthropic won't let projects use the Claude Code subscription feature, but actually push those projects to the Claude Code API instead.
benatkin•Feb 16, 2026
I'd like a reference for it being rug pulling. What happened with OpenCode certainly wasn't rug pulling, unless Anthropic asked them to support using a Claude subscription with it.
SamDc73•Feb 15, 2026
I highly doubt he would even consider Anthropic, since at some point they enforced restrictions on OpenClaw using their APIs
krashidov•Feb 15, 2026
yes that's the blunder I'm talking about
crorella•Feb 15, 2026
Welcome :D
rcarmo•Feb 15, 2026
Not surprising if you've been paying attention on Twitter, but interesting to see nonetheless.
micromacrofoot•Feb 15, 2026
wow hype really is everything, good for him
mark_l_watson•Feb 15, 2026
I have not run OpenClaw and similar frameworks because of security concerns, but I enjoy the author's success, good for him.
There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and *maybe* Apple. I managed an AI lab team at Capital One and personally I trust them.
I am for local compute, private data, etc., but for my personal AI assistant I want something so bulletproof that I don't lose a minute of sleep worrying about my data. I don't want to run the infrastructure myself, but a hybrid solution would also be good.
jacquesm•Feb 15, 2026
AMEX, Capital One and Apple are not even close to the top of the list of companies that I would trust with my digital data.
mark_l_watson•Feb 15, 2026
Jacques, do you mind sharing your list of trusted companies? Thanks in advance.
jacquesm•Feb 15, 2026
It's going to be pretty short. Proton would be there for comms, for hosting related stuff I would trust Hetzner before any big US based cloud company. For the AI domain I wouldn't trust any of the big players, they're all just jockeying for position and want to achieve lock-in on a scale never seen before and they have all already shown they don't give a rats ass about where they get their training data and I expect that once they are in financial trouble they'll be happy to sell your private data down the river.
Effectively you can trust all of the companies out there right up until they are acquired and then you will regret all of the data you ever gave them. In that sense Facebook is unique: it was rotten from day #1.
Vehicles: anything made before 2005, SIM or e-SIM on board = no go.
I'm halfway towards setting up my own private mail server and IRC server for me and my friends and kissing the internet goodbye. It was a fun 30 years but we're well into nightmare territory now. Unfortunately you are now more or less forced to participate because your bank, your government and your social circle will push you back in. And I'm still pissed off that I'm not allowed to host any servers on a residential connection. That's not 'internet connectivity' that's 'consumer connectivity'.
BoredPositron•Feb 15, 2026
Proton? After the last two years of enshittification and purely revenue-driven product decisions, really?
jacquesm•Feb 15, 2026
Barely. Your points are well made and I'm sure that it is just a matter of time before they're just as untouchable as the rest. Hence the remark about mail. The Siloization of the internet is almost complete.
blueaquilae•Feb 15, 2026
Proton is quite a privacy-washing front. Surprised that even on HN nobody checks behind the facade at what was signed.
jacquesm•Feb 15, 2026
Yes, they're losing it.
It's a pity, they were doing well for a long time.
I'm surprised that someone on HN would paint all of HN with the same brush.
It's one of those 'lesser evils' things. If you know of a better email provider I'd love to know.
unethical_ban•Feb 15, 2026
Proton complied with a court order once (that we know of), no? I have seen a lot of negative sentiment from HN commenters toward them but not a lot of evidence to back it up, particularly when you consider the email marketplace.
Itoldmyselfso•Feb 15, 2026
It was a legally mandated court order they couldn't just refuse. No encrypted data (the contents of the emails) was handed over. The person would also have been safe had they used a VPN or Tor, as I recall the story.
Aurornis•Feb 15, 2026
> Surprised than even in HN nobody check behind the facade what was signed
Such as?
These aloof comments that talk about something we're supposed to know about without referencing anything are very unhelpful.
jjtheblunt•Feb 15, 2026
why the (e)SIM cars concern? i ask since the data transmission (bidirectional) can be used to justify lower insurance rates, for an example, than without that data.
Because I don't trust that that location data won't end up in the wrong hands.
jiveturkey•Feb 16, 2026
This, but stronger. It’s not a story of why Johnny can’t trust anyone. The vast majority of companies have proven time and time again that they are not capable of handling this data securely against inadvertent disclosure. Not even mentioning the intentional disclosure revenue stream.
rcoder•Feb 16, 2026
"Justifying lower insurance rates" is just algorithmic bias described from the perspective of someone it doesn't (currently) harm. See also: credit scoring, insurance claim acceptance, job applications, etc., etc.
You only get offered a discount if most other customers are being compelled to pay full (or even increased) prices for the same offering. Otherwise revenue goes down and company leadership finds itself finding other ways to cut costs and increase profits.
sph•Feb 16, 2026
> I'm halfway towards setting up my own private mail server and IRC server for me and my friends and kissing the internet goodbye. It was a fun 30 years but we're well into nightmare territory now.
Every day my doomer sentiment deepens, and I am ashamed when I come onto here and see all this optimism. It is refreshing to see people whose opinions I have come to respect on this forum to be as negative as I am.
jacquesm•Feb 16, 2026
If you're not to some degree pessimistic right now, that simply means you haven't been paying attention for two decades or so. I would expect that a number of people are now well into 'don't look up' territory: they realize in their gut that this all isn't right, but they prefer to pretend everything is alright for as long as they can, because the alternative is just too uncomfortable. I see this around me all the time and I don't blame them at all; people as a rule have problems enough without having to think about the larger implications. Unfortunately, that is exactly the kind of loophole the power-hungry contingent needs to drive their trucks through: by structurally worsening quality of life they ensure that the bulk of the people are distracted while they make out like bandits on the backs of the rest.
Fervicus•Feb 16, 2026
It's all so tiring isn't it? It's become a meme, but everyday more and more, I yearn for living in the middle of nowhere, unplugged, with just my friends and family around. Very unrealistic, but still.
jacquesm•Feb 16, 2026
Yes. My old farm in Canada was pretty good in that sense, but with the madman next door even that would not have felt very stable right now.
marxisttemp•Feb 15, 2026
Mark, can you conceive that some people don’t trust any companies?
mark_l_watson•Feb 15, 2026
Yes, I can!
After reading Jacques's response to my question, my list got smaller. Personally, I still like Proton, but I get that they have made some people unhappy. I also agree that Hetzner is a reliable provider; I have used them a bunch of times in the last ten years.
Then my friend, we have to worry about fiber/network providers I suppose.
This general topic is outside my primary area of competence, so I just have a loose opinion of maintaining my own domain, use encryption, and being able switch between providers easily.
I would love to see an Ask HN on secure and private agentic infra + frameworks.
appplication•Feb 15, 2026
I’d be very curious what your list would be
jacquesm•Feb 15, 2026
See other comment.
rukuu001•Feb 15, 2026
Never mind the list of companies - I'd be very curious to know what the 'trust signals' are that would help you trust a company?
jacquesm•Feb 15, 2026
Decent management. A lack of change of business model, no rug pulls and such. Fair value for money. Consistency over the longer term. No lock in or other forced relationships. Large enough to be useful and to have decent team size, small enough to not have the illusion they'll conquer the world. Healthy competition.
NBJack•Feb 15, 2026
Admirable, but short of a local credit union I used to use (which I am no longer with as they f'd up a rather critical transaction), I can scarcely imagine a business that fits such a model these days. The amount of transparency needed to vet this would be interesting to find though, and its mere presence probably a green flag.
jacquesm•Feb 15, 2026
It's much easier to use this to reject than to accept.
lovich•Feb 15, 2026
Are there any companies existing you would trust?
I honestly can’t name a single one I know of who could pass that criteria
Edit:found your other comment answering a similar question
8note•Feb 15, 2026
I'd go for a co-operative ownership model rather than capitalist?
and make sure the member/owners are all of like mind, and willing to pay more to ensure security and privacy
jacquesm•Feb 15, 2026
Mondragon for IT... it's been my dream for decades.
komali2•Feb 15, 2026
We're no mondragon but I founded a co-op in IT space a few years back and it surprised me how open to the vision the members and customers have been.
I had assumed I'd have to lean more on the capitalistic values of being a co-op, like better rates for our clients, higher quality work, larger likelihood of our long term existence to support our work, more project ownership, so as to make the pitch palatable to clients. Turns out clients like the soft pitch too, of just workers owning the company they work within - I've had several clients make contact initially because they bought the vision over the sales pitch.
I'm trying to think about if I'd trust us more to set up or host openclaw than a VC funded startup or an establishment like Capital One. I think both alternatives would have way more resources at hand, but I'm not sure how that would help outside of hiring pentesters or security researchers. Our model would probably be something FOSS that is keyed per-user, so if we were popular, imo that would be more secure in the end.
The incentives leading to trust is definitely in a co-op's favor, since profit motive isn't our primary incentive - the growth of our members is, which isn't accomplished only through increasing the valuation of the co-op. Members also have total say in how we operate, including veto power, at every level of seniority, so if we started doing something naughty with customer data, someone else in the org could make us stop.
This is our co-op: 508.dev, but I've met a lot of others in the software space since founding it. I think co-ops in general have legs, the only problem is that it's basically impossible to fund them in a way a VC is happy with, so our only capitalization option is loans. So far that hasn't mattered, and that aligns with the goal of sustainable growth anyway.
jacquesm•Feb 16, 2026
Amazing, please write a book. My current venture is still called after that idea ("The Modular Company"), but I found that it is very hard to get something like that off the ground in present day Western Europe.
komali2•Feb 16, 2026
> but I found that it is very hard to get something like that off the ground in present day Western Europe.
Yes, agreed for the USA/Taiwan/Japan where we mostly operate. For us it's been understanding and leveraging the alternative resources we have. Like, we have a lot of members, but really only a couple are bringing in customers, despite plenty of members having very good networks.
Is your current a co-op? 200+ sales at 30k a pop seems to be pretty well off the ground!
jacquesm•Feb 16, 2026
Effectively, yes, but it is tiny. There is a corporate entity but it just serves to divide the loot between the collaborators.
YetAnotherNick•Feb 15, 2026
A co-operative will have significantly worse privacy guarantees compared to a shareholder-based model. In the end, no company wants to sacrifice on privacy standards just for the sake of it. They do it for money. And in a shareholder-based model, the employees are more likely to go against the shareholders when user privacy is involved, because they are not directly benefiting from it.
jacquesm•Feb 15, 2026
That's nonsense. Shareholders have an incentive to violate privacy much stronger than any one employee: they can sell their shares to the highest bidder and walk away with 'clean hands' (or so they'll argue) whereas co-op partners violating your privacy would have to do so on their own title with immediate liability for their person.
YetAnotherNick•Feb 15, 2026
> Shareholders have an incentive to violate privacy much stronger than any one employee
Exactly what I said. We need lower shareholder interference not more, and in co-operative it's the opposite.
> with immediate liability for their person.
What do you mean?
jacquesm•Feb 15, 2026
A cooperative does not have shareholders in your sense of the word.
YetAnotherNick•Feb 16, 2026
Yes it does. In the purest sense, shareholder means "profit share" holders.
jacquesm•Feb 16, 2026
No, it does not mean that. In the purest sense it means 'fractional ownership', which can or may lead to profits.
komali2•Feb 16, 2026
The only shareholders in a co-op are the owners/operators ("employees"), or the owners/operators + customers (for example REI I believe). There's nobody seeking to extract value at the expense of the employees or the customers.
If, as a shareholder operator, a co-op member pressured themselves to exploit user data to turn a quick buck, I guess that's possible, but likely they'd be vetoed by other members who would get sucked into the shitstorm.
In my experience, co-op members and customers are more value-oriented than profit-motivated, within reason.
YetAnotherNick•Feb 16, 2026
> but likely they'd be vetoed by other members who would get sucked into the shitstorm.
Why are shareholders less likely to veto an evil person in a company vs. in a co-operative? I think in most cases the evil person is likely to get vetoed, but sometimes greed takes over, especially over a period of years and decades.
nikcub•Feb 15, 2026
the way they respond to security and privacy incidents + publishing technical security + privacy papers / docs
jacquesm•Feb 15, 2026
Good one, yes, that is important.
PlatoIsADisease•Feb 15, 2026
Apple = Run more commercials with black backgrounds and white text that says
SECURITY
PRIVACY
---
Heyyy it never said "good privacy" perceive as you want...
Don't publicly acknowledge that you were the reason someone got murdered and 1000 VIPs got hacked.
One day, when I'm deemed a 'Baddie', I'll look to Apple as inspiration.
belter•Feb 16, 2026
And do they approach Security as a Feature or as a Process. The fingers on one hand are enough to count them...
amelius•Feb 15, 2026
For hardware, I'd only trust a company if they didn't also have an interest in data. In fact, I'd trust a hardware company more if they didn't also have a big software division.
A company like AMD I would trust more than a company like Apple.
elxr•Feb 16, 2026
No past history of shady planned-obsolescence sprinkled in a bunch of their products, for one.
So that rules out Apple.
A leadership team that is very open and involved with the community, and one that takes extra steps, compared to competitors, to show they take privacy seriously.
selectodude•Feb 16, 2026
Planned obsolescence tells me they don't make money on the daily use of their software and they need me to buy more hardware in order to make money.
Aurornis•Feb 15, 2026
> There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and maybe Apple. I managed an AI lab team at Capital One and personally I trust them.
I don't really understand what this has to do with the post or even OpenClaw. The big draw of OpenClaw (as I understand it) was that you could run it locally on your own system. Supposedly, per this post, OpenClaw is moving to a foundation and they've committed to letting the author continue working on it while on the OpenAI payroll. I doubt that, but it's a sign that they're making it explicitly not an OpenAI product.
OpenClaw's success and resulting PR hype explosion came from ignoring all of the trust and security guardrails that any big company would have to abide by. It would be a disaster of the highest order if it had been associated with any big company from the start. Because it felt like a grassroots experiment all of the extreme security problems were shifted to the users' responsibility.
It's going to be interesting to see where it goes from here. This blog post is already hinting that they're putting OpenClaw at arm's length by putting it into a foundation.
jacquesm•Feb 15, 2026
Prepare for the rug pull...
zmmmmm•Feb 15, 2026
a tale as old as time ...
PlatoIsADisease•Feb 15, 2026
>Apple
Lol
Their marketing team got ya.
I aspire to be as good as Apple at marketing. Who knew 2nd or worse place in everything doesn't matter when you are #1 in marketing?
eutropia•Feb 15, 2026
is this marketing or is just relating what they did to keep things secure?
Didn't have to click the link. Words don't matter. The fact that their phone security was poor enough for someone to get killed and thousands of others exposed... Oh and PRISM, so...
Marketing.
internet2000•Feb 15, 2026
Sorry to pile on, but Capital One is an insane name to drop there.
lvl155•Feb 15, 2026
Sorry to break it to you, but I would not trust any financial companies with my personal data, simply because I've seen how they use data to build exploitative products.
shevy-java•Feb 15, 2026
You really trust them?
My trust does not extend that far.
blks•Feb 15, 2026
Privacy aside, you can never trust an LLM with your data and trust it to do exactly what it was instructed to do.
iugtmkbdfil834•Feb 16, 2026
You raised a good point; it's something I'm now basically expecting to see this year (next at the latest). Some brave corporation will decide, on behalf of millions of users, to, uhh, liberate all their data. My money is not on that happening at the Googles or OpenAIs of the world, though. I predict it will be either a bank or one of the data brokers.
With any luck, maybe this will finally be a bridge too far, like what Amazon's Super Bowl ad did for the surveillance conversation.
jiveturkey•Feb 16, 2026
sorry to say it, but C1 LOL. they don’t care at all about privacy! Don’t mistake your team for the company values.
vessenes•Feb 16, 2026
Well it’s not even just data, you have to trust actions taken if you want the assist to, you know, assist. I have been yoloing it and really enjoying it. Albeit from a locked off server.
rubenflamshep•Feb 16, 2026
Quick plus one for Capital One after also working there. They're by far the most tech-forward of all the larger financial institutions, and by virtue of being a FI they take data-security much more seriously than any other "tech" companies.
No this is not a paid post lol
shimman•Feb 16, 2026
Not a paid post but a bunch of generalities with no specifics. C1 is by far the worse of the bunch in the banking sector. C1 now openly engages in stack ranking and has absolutely destroyed employee morale, all due to hiring ex Amazon directors.
For any future workers, be highly forewarned that if ex Amazon leadership enters your company their number one goal becomes inducing mass misery to magically raise the share price. It'll never work because they are coming from a company that has a massive unregulated monopoly (or oligopoly if you want to be technical) that is able to subsidize poor business ideas indefinitely. They mistake working in this environment as having competence so be warned: they will fuck everything up, collect massive bonuses, and you'll be collecting unemployment soon enough under their guidance.
Ampned•Feb 15, 2026
It’s not like Anthropic or OpenAI were not working on “AI assistants” before OpenClaw, it’s pretty much the endgame as I can see it. This guy just single handedly released something useful (and very insecure) before anyone else. Although that’s impressive, I don’t see more than an acquisition of the hype by OpenAI.
pezo1919•Feb 15, 2026
Same here, seems 100% marketing move. The trend continues.
Aurornis•Feb 15, 2026
> This guy just single handedly released something useful (and very insecure) before anyone else.
It has been interesting to watch this take off. It wasn't the first or even best agent framework and it deliberately avoided all of the hard problems that others were trying to solve, like security.
What it did have was unnatural levels of hype and PR. A lot of that PR, ironically, came from all of the things that were happening because it had so many problems with security and so many examples of bad behavior. The chaos and lack of guardrails made it successful.
isx726552•Feb 15, 2026
Let’s not lose sight of the fact that he piggybacked on a large company’s name recognition by originally calling it “clawd”, clearly intending it to be confused with Claude. I have my doubts it would have gone anywhere without that.
dlivingston•Feb 15, 2026
My gut feeling is that OpenAI is desperately searching for The Killer App™ for LLMs and hired Peter to help guide them there.
OpenAI has tried a lot of experiments over the years - custom GPTs, the Orion browser, Codex, the Sora "TikTok but AI" app, and all have either been uninspired or more-or-less clones of other products (like Codex as a response to Claude Code).
OpenClaw feels compelling, fresh, sci-fi, and potentially a genuinely useful product once matured.
More to the point, OpenAI needs _some_ kind of hyper-compelling product to justify its insane hype, valuation, and investments, and Peter's work with OpenClaw seems very promising.
(All of this is complete speculation on my part. No insider knowledge or domain expertise here.)
Atotalnoob•Feb 15, 2026
Orion is Kagi's browser.
Atlas is OpenAI's browser.
readitalready•Feb 15, 2026
In the AI space there isn’t a single killer app. EVERYTHING is open for disruption. ChatGPT was the start but OpenAI could create tons of other apps. They don’t need to wait for others to do so. People already want them to make a Slack replacement but I’m just wondering why none of the frontier labs are making a simple app platform that could be used to make custom apps like ChatGPT itself, or the Slack clone. Instead, they expect us to brute force app development through the API interface. Each frontier lab really needs their own Replit.
Like, why doesn’t OpenAI build tax filing into ChatGPT? That’s like the immediate use case for LLM-based app development.
oblio•Feb 15, 2026
> Like, why doesn’t OpenAI build tax filing into ChatGPT?
Legal liability.
mschuster91•Feb 15, 2026
> the Sora "TikTok but AI" app
This product should never have seen the light of day, at least not for the general public. The amount of slop now floating across TikTok, YT Shorts and Instagram is insane. Whenever you see a "cute animals" video, 99% of the time it is AI generated - and you can report and report and report these channels over and over, and the platforms don't care at all; instead the slop creators get rewarded, with all the comments shouting that this is AI garbage and people responding that they don't care because "it's cute".
OpenAI completely lacks any sort of ethical review board, and now we're all suffering from it.
Slartie•Feb 16, 2026
Would you consider cute animal videos that are not AI generated to be so much more worthy of your time? Because I don't really care whether cute animal videos are AI generated or filmed - I simply don't want to spend even a second on them.
And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
mschuster91•Feb 16, 2026
> Would you consider cute animal videos that are not AI generated to be so much more worthy of your time?
Yes indeed. I do love me some cat and bunny videos. But I hate getting fed slop - and it's not just cat videos, by the way. I'm (as evidenced by my comment history) into mechanics, electronics and radio stuff, and there are so damn many slop channels spreading outright BS with AI-hallucinated scripts that it eventually gets really, really annoying. Sadly, YT's algorithm keeps feeding me slop in every topic that interests me, and frankly it's enraging; since some of my favorite legitimate creators like Shorts as a format, I don't want to hide Shorts completely.
> And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
The problem is, these channels build up insane amounts of followers. And it would not be the first time that these channels then suddenly pivot (or get sold from one scam crew to the next) and spread disinformation, crypto scams and other fraud - it was and is a hot issue on many social media platforms.
Capricorn2481•Feb 16, 2026
Yes? Some people want to see the animal kingdom do unique things and talk about it. Is this a serious question?
This is like saying "Do you really care if Animal Planet uses AI footage instead of real animal footage?" Yes, that defeats the whole point.
botusaurus•Feb 16, 2026
i want to see cats cook. the animal kingdom does not deliver on this front
FinnKuhn•Feb 15, 2026
While insecure and not something I would use myself (yet) one thing OpenClaw has managed to do is to show people the potential that AI still has.
nikcub•Feb 15, 2026
Regardless of what you think of OpenClaw, Peter is a great hire - he's been at the forefront of brute-forcing app development with coding agents.
pubby•Feb 15, 2026
"Single-handed" made me smirk. It was vibe coded.
noosphr•Feb 15, 2026
OpenAI has been running around headless for at least two years now. I've built systems like OpenClaw, based on email, at my day job, and told OAI during an interview that they needed to build this or get smoked when someone else did. I guess an acqui-hire is easier than building a team that can develop software internally.
Of course, the S in OpenClaw stands for security.
the_mar•Feb 16, 2026
I mean, do you think no one at OAI and every other lab has vibe-coded some agentic demo?
The problem (?) is that when you work at a corporate job you have to think about security.
krick•Feb 16, 2026
But... how is it even useful? Do you use it? Is it a good idea for anyone to, uh, use it? Is it a product that you or any other "vibe coder" couldn't ~~build~~ tell Claude Code to build on the go, if you wanted to communicate with Claude Code via WhatsApp for some reason? Sure, a product doesn't need to be sophisticated technology to be worth something; it could just have a user base because it succeeded at marketing. But does this particular product even benefit from network effects? What is this shit? Why does anybody care?
Seriously, I just don't understand what's going on. To me it looks like all the world has just gone crazy.
Windchaser•Feb 16, 2026
> all the world has gone crazy
Reminds me of 30 years ago.
popalchemist•Feb 15, 2026
OpenClaw is literally the most poorly conceived and insecure AI software anyone has ever made. Its users have had OpenClaw spend thousands of dollars, and do various unwanted and irreversible things.
This fucking guy will fit right in at OpenAI.
s3p•Feb 16, 2026
I would be inclined to believe you if you mentioned a single open-source agent that does more than OC. Just one.
popalchemist•Feb 16, 2026
Has it occurred to you that the fact that OpenClaw can do so much is exactly why it is problematic from a security point of view?
Multiplayer•Feb 15, 2026
Potentially amazing opportunity for OpenAI to more meaningfully compete with Claude Code at the developer and hobbyist level. Based on vibes it sure seemed like Claude Code / Opus 4.6 was running away with developer mindshare.
Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had. Openclaw has gotten more attention over the past 2 weeks than anything else I can think of.
Depending on how this goes, this could be to OpenAI what Instagram was to Facebook. FB bought Instagram for $1 billion and now estimated to be worth 100's of billies.
Total speculation based on just about zero information. :)
Aurornis•Feb 15, 2026
> Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had.
Comments like this feel confusing because I didn't have any association between Codex and OpenClaw before reading your comment.
Codex was also seeing a lot of usage before OpenClaw.
The whole OpenClaw hype bubble feels like there's a world of social media that I wasn't tapped into last month, which OpenClaw capitalized on with unparalleled precision. There are many other agent frameworks out there, but OpenClaw hit all the right notes to trigger the hype machine in a way that others did not. Now OpenClaw and its author are being credited for so many other things that it's hard for me to understand how this one person inserted himself into the center of the media zeitgeist.
botusaurus•Feb 15, 2026
you didn't see it because you don't follow Peter on Twitter. He's talked for months now about how Codex is a better coder.
Aurornis•Feb 15, 2026
I’m not disputing that people who follow Peter are getting information from Peter. It’s the “single handedly” part of the claim that was strange.
I’m questioning how some people in that bubble came to believe he was at the center of that universe. He wasn’t the only person talking about the differences between Codex or Claude. Most of the LLM people I follow had their own thoughts and preferences that they advertised too.
Multiplayer•Feb 16, 2026
Sure, single-handedly is doing a lot of work here. :) Anecdotally a fair number of people I know have referenced his thoughts so I just ran with that. Most people seem to kind of equivocate about whatever model they like, Peter on the other hand is very strident about it.
yks•Feb 15, 2026
It's how Steve Yegge became a "father of agentic orchestration" or something - there is some Canonical Universe Building exercise somewhere on Twitter that just looks, for lack of a better word, not rigorous. But good for all these people, I guess, for riding the hype to glory.
Multiplayer•Feb 16, 2026
He's been on a number of podcasts - Lex recently - and is really emphatic about Codex as the breakthrough solution he relies on. I just looked, and the handful of podcasts have about 2,000,000 views this past week and a half or so.
vibeprofessor•Feb 16, 2026
Yes, I switched to Codex after he mentioned it on the "Pragmatic Engineer" podcast and I ran out of Claude credits on the 20x plan. So far Codex is matching or slightly beating Claude Code for me. Loving the desktop app despite the slowness.
voxelc4L•Feb 15, 2026
Not sure if anyone has heard his interview on the Hard Fork podcast... was not unlike listening to a PR automaton. Now going to work for OpenAI. Yup.
mistersquid•Feb 16, 2026
> Not sure if anyone has heard his interview on the Hard Fork podcast...
Made the same mistake. Pete Steinberger created Clawdbot > Moltbot > OpenClaw.
The creator of Moltbook is Matt Schlicht, and his Hard Fork interview exposes Schlicht as security-negligent. [0]
Saw retweets of him saying Codex is way better than Claude Code on X. Then saw those retweets in ads on Reddit. This was 3 days before the announcement that he was joining OpenAI. The whole series of events, including the podcast tour, seems contrived and set up by OpenAI.
shadowgovt•Feb 15, 2026
Well, someone has to backfill Zoë Hitzig exiting.
illichosky•Feb 15, 2026
The guy already sold his previous company for a shitload of money, got bored, and did a side project that stirred up the Internet over the past month. That is more than most people here will accomplish in a lifetime. Yet he has some deal with OpenAI to work on whatever he thinks is exciting. I don't see a reason for so many negative comments here other than jealousy.
blueaquilae•Feb 15, 2026
True, but between the lines I read some interesting points here.
Great that he got the gold nugget, but I found it curious how he dunked on the JVM after all the clones emerged with much better performance and much less code/energy consumption.
erichocean•Feb 15, 2026
Do you have a link to any of the JVM clones? Perplexity and Google came up empty.
johnwheeler•Feb 15, 2026
I just dislike Sam Altman, and I think he's just using this as a marketing ploy, which is more dishonesty from him. People keep saying OpenClaw is hype. I installed it, but I never tried to run it, and I don't know what the compelling reason to run it would be. Supposedly you can talk to your agent from iMessage? Who cares? Why not just talk to Claude Code?
illichosky•Feb 15, 2026
I'm also not a fan of Sam's, for the same reason. But if he offered me a big check to work on whatever project I wanted, I would not care about it being a "marketing ploy".
Regarding OpenClaw's hype: it is not about how you access it, but rather what the agents can access of yours, and no one did that before. Probably because no one had the balls to put such an insecure piece of software out in the wild.
johnwheeler•Feb 16, 2026
Agreed
hadlock•Feb 15, 2026
The big draw of OpenClaw is the memory architecture. You effectively start from scratch every time you open a new Claude chat. OpenClaw, on the other hand, compacts regularly, but also generates daily digests, uses vector search, and then uses thoughtful memory-retrieval techniques to add relevant context to your queries. Recent things get weighted more heavily, but full-text search of all chats is still possible, and this is all managed automatically. Plus, it uses markdown, so the barrier to entry for manually auditing/modifying memories is very low. If you say "can you check if the solar panel for my power generator arrived yet?" it is probably going to know what I'm talking about and go check my email for delivery notifications, based on conversations I've had with it about buying and ordering the solar panel. Claude is just going to ask clarifying questions, since it has no idea what I am referencing.
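The recency-weighted retrieval idea above can be sketched roughly like this. This is a hypothetical toy (helper names are mine, not OpenClaw's actual code, which stores memories as markdown and uses a real embedding index): score each memory by similarity to the query times an exponential recency decay, then take the top hits.

```python
import time

def recency_weight(age_days, half_life_days=30.0):
    """Exponential decay: a memory loses half its weight every half_life_days."""
    return 0.5 ** (age_days / half_life_days)

def rank_memories(query_vec, memories, now=None):
    """Rank memory entries (dicts with 'vec', a unit vector, and 'ts', unix
    seconds) by cosine similarity multiplied by recency decay."""
    now = time.time() if now is None else now

    def score(m):
        sim = sum(a * b for a, b in zip(query_vec, m["vec"]))  # cosine for unit vectors
        age_days = (now - m["ts"]) / 86400
        return sim * recency_weight(age_days)

    return sorted(memories, key=score, reverse=True)
```

With this weighting, two equally relevant memories rank by freshness, while a much more relevant old memory can still beat a marginally relevant new one.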
johnwheeler•Feb 16, 2026
So it sounds like you get extra memory at the expense of having to compact more, because of course all those things are going to take up context. But since you're not interacting with it in some kind of turn-based fashion, it makes it worth it - the lack of context doesn't matter. Is that correct?
hadlock•Feb 16, 2026
Yeah it's basically just a smart compaction and retrieval algorithm, blended with vector search of uncompacted memories. The algorithm is open source, but the technology behind securing the agent against 1-shot prompt injection will not be.
noelsusman•Feb 16, 2026
I can just look at my front porch to see if there's a solar panel there, or failing that I can click a single button on my phone and search "solar" on my gmail and find out where my solar panel is. Having an agent do that for me saves me like... 5 seconds?
hadlock•Feb 16, 2026
Sure. I am (literally) currently feeding a newborn, my house is a disaster zone, and it's raining. DHL just changed my delivery date from today to the 19th, so maybe it will arrive today, maybe it won't. I haven't slept more than 4 hours in 3 days, so getting an answer via voice memo seems pretty nice right now.
yieldcrv•Feb 15, 2026
For further context, he has something like 60 projects for general use from this "got bored" phase.
It just happened that this one latched onto a trend and went viral; the cease-and-desist over its name accelerated the virality.
mekod•Feb 15, 2026
OpenClaw was one of the more interesting “edges” of the open AI tooling ecosystem — not because of scale, but because of taste and clarity of direction.
What’s fascinating is the pattern we’re seeing lately: people who explored the frontier from the outside now moving inside the labs. That kind of permeability between open experimentation and foundational model companies seems healthy.
Curious how this changes the feedback loop. Does bringing that mindset in accelerate alignment between tooling and model capabilities — or does it inevitably centralize more innovation inside the labs?
Either way, congrats. The ecosystem benefits when strong builders move closer to the core.
cactus2093•Feb 15, 2026
I agree, it's an interesting distortion to the traditional technology feedback loop.
I would expect someone who "strikes gold" like this in a solo endeavor to raise money, start a company, hire a team. Then they have to solve the always-challenging problem of how to monetize an open-source tool. Look at a company like Docker: they've been successful, but they didn't capture more than a small fraction of the commercial revenue that the industry has paid to host the product they developed and maintain. Their peak valuation was over a billion dollars, but who knows what they'll be worth by the time all is said and done, when they sell or IPO.
So if you invent something transformative to the industry, you might work really hard for a decade and, if you're lucky, the company is worth $500M; if you can hang onto 20% of the company, maybe your stake is worth $100M.
Or, you skip the decade in the trenches and get acqui-hired by a frontier lab who allegedly give out $100M signing bonuses to top talent. No idea if he got a comparable offer to a top researcher, but it wouldn't be unreasonable. Even a $10M package to skip a decade of risky & grueling work if all you really want to do is see the product succeed is a great trade.
marxisttemp•Feb 15, 2026
Who cares?
ramathornn•Feb 15, 2026
Congrats to Peter!
Can any OpenClaw power users explain what value the software has provided to them over using Claude Code with MCP?
I really don't understand the value of an agent running 24/7 - like, is it out there working and earning a wage? What's the real value here, beyond buzzwords like "an AI personal assistant that can do everything"?
aydyn•Feb 15, 2026
There are some neat experiments people post on social media. Mostly, the thing that captures the imagination the most is it's sort of like watching a silicon child grow up.
They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
I know that sounds like anthropomorphism, and maybe it is, but it most definitely does not feel like interacting with a coding agent. Claude is just the substrate.
esafak•Feb 15, 2026
Imagine putting it in a robot with arms and legs, and letting it loose in your house, or your neighborhood. Oh, the possibilities!
Schlagbohrer•Feb 16, 2026
Heck, go the next step and put a knife in one hand and a loaded gun in the other!
wiseowise•Feb 16, 2026
> Mostly, the thing that captures the imagination the most is it’s sort of like watching a silicon child grow up.
> They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
Jesus Christ, the borderline idiotic have now been downgraded to deranged. The US government needs to redirect Stargate's $500B to mental institutions ASAP.
cactus2093•Feb 15, 2026
It has a heartbeat operation and you can message it via messaging apps.
Instead of going to your computer and launching Claude Code to have it do something, or setting up cron jobs, you can message it from your phone whenever you have an idea, and it can set some stuff up in the background or set up a scheduled report on its own, etc.
So it's not that it has to be running and generating tokens 24/7 - it's just idling 24/7, ready whenever you want to ping it.
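The "idling 24/7" pattern described above can be sketched as a tiny event loop. This is a hypothetical illustration (class and method names are mine, not OpenClaw's actual implementation): the process wakes on a timer, collects any pending chat messages and due scheduled tasks, and only calls the LLM when there is actual work, so an idle heartbeat burns no tokens.

```python
import heapq
import time

class Heartbeat:
    """Toy model of an always-on agent loop: inbox for pushed messages,
    min-heap of (due_ts, task) pairs for self-scheduled work."""

    def __init__(self):
        self.inbox = []     # messages pushed in from chat bridges
        self.schedule = []  # heap of (due_ts, task)

    def add_task(self, due_ts, task):
        heapq.heappush(self.schedule, (due_ts, task))

    def tick(self, now=None):
        """One heartbeat: return the work items to hand to the agent.
        An empty list means stay idle and spend nothing."""
        now = time.time() if now is None else now
        work = list(self.inbox)
        self.inbox.clear()
        while self.schedule and self.schedule[0][0] <= now:
            work.append(heapq.heappop(self.schedule)[1])
        return work
```

A real deployment would run `tick()` from a timer or cron and feed the returned items into the model; the point is just that the expensive call is gated on there being something to do.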
catmanjan•Feb 16, 2026
AI companies must hate this right? Because they're selling tokens at a loss?
coffeebeqn•Feb 16, 2026
Bold of you to assume profitability is one of their KPIs
catmanjan•Feb 16, 2026
My understanding was that if everyone paid and used AI the companies would go into liquidation on energy bills etc
_heimdall•Feb 16, 2026
Energy bills wouldn't be the problem if everyone used AI, energy supply would be.
catmanjan•Feb 16, 2026
Are you sure? I thought tokens (or watts) were sold at such a loss that if current supply limits were reached they’d go broke
mholm•Feb 16, 2026
The entire marginal cost of serving AI models is covered by API revenue, by nearly every estimate. The cost not currently recouped is entirely in training and the net-new infrastructure they're building.
nsvd2•Feb 16, 2026
These companies are generally profitable on inference, but that does not cover the cost of R&D (training).
> Impact:
> Users are losing access to their Google accounts permanently
> No clear path to account restoration
> Affects both personal and work accounts
honestly, this is why I would not trust Gemini for anything. I have a lot tied to my Gmail; I'm not going to risk that for some random AI that insists on being tied to the same account.
hirako2000•Feb 16, 2026
They blocked your entire Gmail/Google account, not just the Gemini access?
That's a recipe for bots to ruin a lot of people's lives.
From all indications the big players have healthy margins on inference.
Research and training are the cost sinks.
catmanjan•Feb 16, 2026
Is that just because people pay for subscriptions and never use their tokens? Same model as ISPs.
deadbabe•Feb 16, 2026
Like what exactly? Can you give example of the type of prompts you are sending for the agent to do?
The messaging part isn’t particularly interesting. I can already access my local LLMs running on my Mac mini from anywhere.
phamilton•Feb 16, 2026
As an experiment, I set it up with a z.ai $3/month subscription and told it to do a tedious technical task. I said to stay busy and that I expect no more than 30 minutes of inactivity, ever.
The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian).
It set up a timer every 30 minutes to check in on itself and see if it had given up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days, and when I looked today it was still going, a few functions at a time.
It set up those timers itself and creates new ones as needed.
It's not any one particular thing that is novel, but it's just more independent because of all the little bits.
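The self-check timers described above (a 30-minute idle nudge plus a 4-hour progress review) amount to a simple watchdog. A hypothetical sketch, not the commenter's actual setup - the prompt strings and class names here are invented for illustration:

```python
import time

class Watchdog:
    """Fires a 'resume' prompt after too much idle time and a periodic
    're-prioritize' prompt, mirroring the timers the agent set for itself."""

    def __init__(self, max_idle_s=30 * 60, review_every_s=4 * 3600):
        self.max_idle_s = max_idle_s
        self.review_every_s = review_every_s
        self.last_activity = time.time()
        self.last_review = time.time()

    def record_activity(self, now=None):
        self.last_activity = time.time() if now is None else now

    def check(self, now=None):
        """Return the prompts that should fire right now, if any."""
        now = time.time() if now is None else now
        prompts = []
        if now - self.last_activity > self.max_idle_s:
            prompts.append("You have been idle too long; resume the current task.")
            self.last_activity = now  # avoid re-firing every check
        if now - self.last_review > self.review_every_s:
            prompts.append("Review progress and re-prioritize remaining work.")
            self.last_review = now
        return prompts
```

Run `check()` from any scheduler and feed the returned prompts back to the agent; "stay busy" then falls out of the timers rather than the model's own initiative.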
hobofan•Feb 16, 2026
So, you don't know if it has produced anything valuable yet?
beaker52•Feb 16, 2026
It's the same story with these people running 12 parallel agents that automatically implement issues managed in Linear by an AI product team that has conducted automated market and user research.
Instead of making things, people are making things that appear busy making things. And as you point out, "but to what end?" is a really important question, often unanswered.
"It's the future, you're going to be left behind", is a common cry. The trouble is, I'm not sure I've seen anything compelling come back from that direction yet, so I'm not sure I've really been left behind at all. I'm quite happy standing where I am.
And the moment I do see something compelling come from that direction, I'll be sure to catch up, using the energy I haven't spent beating down the brush. In the meantime, I'll keep an eye on the other directions too.
MyHonestOpinon•Feb 16, 2026
> Instead of making things, people are making things that appear busy making things.
Sounds like a regular office job.
Aperocky•Feb 16, 2026
That sound like being a manager IRL.
decidu0us9034•Feb 16, 2026
Yeah, I'm not sure I understand what the goal here is. Ship of Harkinian is a rewrite, not just a decompilation. As a human reverse engineer I've gotten a lot of false positives. This seems like one of those areas where hallucinations could be really insidious and hard to identify, especially for a non-expert. I've found MCP to be helpful with a lot of drudgery, but I think you would have to review the LLM output, do extensive debugging/dynamic analysis, and triage all potential false positives before attempting a rewrite based on decompiled assembly... I think OoT took a team of experts collectively thousands of person-hours to fully document; it seems a bit too hopeful to expect that plus a rewrite just from being pushy with an agent.
someperson•Feb 16, 2026
Keep us posted, this sounds great!
hirako2000•Feb 16, 2026
A $3 z.ai subscription? Sounds like it already burned $3k.
I find these toys in perfect alignment with what LLM providers strive for: a widespread explosion in token consumption to demonstrate to investors, "see, we told you we were right to invest, let's open more gigafactories."
OJFord•Feb 16, 2026
How's it burned $3k on a $3/month subscription running for a few days?
hirako2000•Feb 16, 2026
I simply don't get how it could have run for quite a while and only cost $3. Z.ai offers some of the best models out there. At several dollars per million tokens, this sort of code-generating bot would burn through millions of tokens in less than 30 minutes.
JrProgrammer•Feb 16, 2026
They have a coding plan
hobofan•Feb 16, 2026
And the $3 plan also has significant latency compared with their higher tier plans.
thinkingtoilet•Feb 16, 2026
What a great use of humanity's and the earth's resources.
jdgoesmarching•Feb 16, 2026
Not being tied to Anthropic's models and ecosystem, having more control over the agent, interacting with it from your messaging app of choice.
paxys•Feb 15, 2026
Disappointing TBH. I completely understand that the OpenAI offer was likely too good to pass up, and I would have done the same in his position, but I wager he is about to find out exactly why a company like OpenAI isn't able to execute and deliver like he single-handedly did with OpenClaw. The position he is about to enter requires skills in politics and bureaucracy, not engineering and design.
Aurornis•Feb 15, 2026
> but I wager he is about to find out exactly why a company like OpenAI isn't able to execute and deliver like he single-handedly did with OpenClaw.
No company could ship anything like OpenClaw as a product, because it was a million footguns packaged with a self-installer and a couple of warnings that it can't be trusted for anything.
There's a reason they're already distancing themselves from it and saying it's going to an external foundation.
throw444420394•Feb 15, 2026
What to understand of this whole story:
This is a vibe coded agent that is replicable in little time. There is no value in the technology itself. There is value in the idea of personal agents, but this idea is not new.
The value is in the hype, from the perspective of OpenAI. I believe they are wrong (see next points)
We will see a proliferation of personal agents. For a short time, the money will be in API usage, since those agents burn a lot of tokens, often for results that could be obtained more sharply without a generic assistant. At the current stage - not well orchestrated and directed, not well prompted/steered - they achieve results by brute force.
Whoever creates the LLM that is best at following instructions in a sensible way, and at coordinating long-running tasks, will reap the greatest benefit, regardless of whether OpenClaw is under the umbrella of OpenAI or not.
Claude Opus is currently the agent that works best for this use case. It is likely that this will help Anthropic more than OpenAI. It is wise, for Anthropic, to avoid burning money on an easily replicable piece of software.
Those hypes are forgotten as fast as they are created. Remember Cursor? And it was much more of a true product than OpenClaw.
Soon, personal agents will be one of the fundamental products of AI vendors, integrated in your phone, nothing to install, part of the subscription. All this will be irrelevant.
In the meantime, good for the guy who extracted money from this gold mine. He seems like a nice person. If you are reading this: congrats!
(throwaway account, for obvious reasons)
bmay•Feb 15, 2026
> Those hypes are forgotten as fast as they are created. Remember Cursor?
Of course - I use it every day. Are you implying Cursor is dead? They raised $2B in funding 3 months ago and are at $1B in ARR...
throw444420394•Feb 15, 2026
It was a success for the company, but it is unlikely to survive long term. People are now all focusing on Claude Code and Codex. Cursor is surviving because there are many folks who can't survive a terminal session, and because we are still in a transition stage where people look at the code - but they will look at the code less every day, and more at the results, the prompts, and the quality of the agent orchestration/tools. I don't believe Cursor's future will be bright. Anyway, my example was about how fast things are forgotten in this space.
trengrj•Feb 15, 2026
This is very true, but I think there is an incredibly long tail of people who "can't survive a terminal session", and I actually question whether a terminal UI will win out long term.
throw444420394•Feb 15, 2026
My guess is that, very soon, Claude Code and Codex (which already launched an initial desktop app) will have GUIs that are very different from Cursor: not centered around files and editing, but providing a lot more hints about what is happening with the work the agent is performing.
yieldcrv•Feb 15, 2026
Are you all back on VS Code, or what? I still have Cursor open and use it the few times I want to modify code manually or visualize the file structure.
But base VS Code is fine for that too.
koakuma-chan•Feb 15, 2026
What does a VSCode fork spend 2 billion dollars on?
csallen•Feb 16, 2026
Their own coding agent and models, marketing, tons of UI customizations, etc.
rvz•Feb 15, 2026
> Remember Cursor?
Who?
> are you implying Cursor is dead? they raised $2B in funding 3 months ago and are at $1B in ARR
That is the problem. It doesn't matter how much they raised. That $2B and that $1B are going to the suppliers, Anthropic and OpenAI, who are both directly competing against them.
Cursor is operating on thin margins and continues to lose money. It's made worse by the fact that people are leaving Cursor for Claude Code.
In short, Cursor is in trouble, and they are funding their own funeral.
flyinglizard•Feb 15, 2026
I think Cursor is doing pretty well in the enterprise space. It seems much more useful than just throwing agents upon subagents on an unsuspecting task like Claude Code.
throw444420394•Feb 15, 2026
Cursor is fine; the example was about how things go out of hype in very little time. However, I believe Cursor will not survive much longer. It is designed around a model that will not survive: that the AI "helps you write code", and you review it, and you need an IDE for that. There are many developers who want an IDE and can't stand the terminal experience of Claude Code and Codex, but I don't believe most developers in the future will closely inspect the code written by the AIs, and things like Cursor will look like products designed for a transition step that is already over.
flyinglizard•Feb 15, 2026
I'd venture a guess that most of the software in the world is not written from scratch but painstakingly maintained and as such, Cursor is a good fit while CC is not.
Besides, if agentic coding does take off, Cursor has the customer relationship and can just offer it as an additional mode.
Whoever stands in front of the customer ultimately wins. The rest are just cost centers.
mirawelner•Feb 15, 2026
“ My next mission is to build an agent that even my mum can use”
There is literally no need to shit on ur mom like that. Sorry your mom sucks at tech but can we please stop using this as a euphemism?
mentalgear•Feb 15, 2026
A hype vibe-bot maker joins a hype-vibe company that runs on fumes. Anything to keep the scam altman bubble going.
MattDaEskimo•Feb 15, 2026
Truly incredible.
OpenAI is putting money where their mouth is: a one-man team can create a vibe-coded project, and score big.
Open-source, and hyped incredibly well.
Interesting times ahead as everyone else chases this new get-rich-quick scheme. Will be plentiful for the shovel makers.
deadeye•Feb 15, 2026
Openclaw did what no major model producer would do: release insanely insecure software that can do whatever it wants on your machine.
If openai had done it themselves, immediate backlash.
lvl155•Feb 15, 2026
Is it? You basically got 95% of the way there with Claude Code inside of a container. People were using CC outside of development scope for a while.
Aurornis•Feb 15, 2026
> You basically got 95% of the way there with Claude Code inside of a container.
OpenClaw and Claude Code aren't solving the same problems. OpenClaw was about having a sandbox, connecting it to a messenger channel, and letting it run wild with tools you gave it.
lvl155•Feb 15, 2026
That’s what CC does…I don’t need a messenger wrapper to do those things.
koolala•Feb 15, 2026
A messenger and ssh'ing into Claude Code from your phone aren't that much different.
theturtletalks•Feb 15, 2026
The real magic is heartbeat, which is essentially cron on steroids. The difference between running Claude Code in the terminal and OpenClaw is that the agent is actually intuitive and self-driven.
People would wake up to their agent having built something cool the night before or automate their workflow without even asking for it.
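A rough sketch of the cron-on-steroids idea (all names here are illustrative, not OpenClaw's actual API): unlike a cron job that runs a fixed command, a heartbeat just wakes the agent on an interval and lets it decide what, if anything, to do.

```python
import time

def heartbeat(agent_step, interval_s=1800, max_beats=None):
    """Wake the agent every `interval_s` seconds and let it act on its own.

    `agent_step` stands in for one autonomous agent turn; this is a
    hypothetical sketch, not OpenClaw's real interface.
    """
    beats = 0
    while max_beats is None or beats < max_beats:
        agent_step()  # no user prompt: the agent picks its own work
        beats += 1
        if max_beats is None or beats < max_beats:
            time.sleep(interval_s)
    return beats

# Demo: an "agent" that just records each wake-up.
actions = []
heartbeat(lambda: actions.append("checked inbox"), interval_s=0, max_beats=3)
```

The point of the design is that the scheduled unit is "one agent turn", so the same trigger can produce anything from a no-op to an overnight build.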
koolala•Feb 16, 2026
Is the idea it can't handle 24hr+ tasks? I know Codex can do heartbeat but I haven't tested the limits of its heart.
Aurornis•Feb 15, 2026
I’m not an OpenClaw user but it’s obvious that OpenClaw was very different than that.
OpenClaw was about having the agent operate autonomously, including initiating its own actions and deciding what to do. Claude Code was about waiting for instructions and presenting results.
“Just SSH into Claude Code” is like the famous HN comment that didn’t understand why anyone was interested in DropBox because you could do backups with shell scripts.
koolala•Feb 16, 2026
You have used Claude though? Or codex or antigrav? backups seem different
madihaa•Feb 15, 2026
Major producers like OpenAI optimize for safety and brand reputation, avoiding backlash. Open source projects optimize for raw capability and frictionless experimentation. It is risky, yes, but it allows for rapid innovation that strictly aligned models can't offer.
akmarinov•Feb 15, 2026
So that’s OpenClaw dead then.
It took all of Peter’s time to move it forward, even with maintainers (who he complained got immediately hired by AI companies).
Now he’s gonna be working on other stuff at OpenAI, so OpenClaw will be dead real quick.
Also I was following him for his AI coding experience even before the whole OpenClaw thing, he’ll likely stop posting about his experiences working with AI as well
PUSH_AX•Feb 16, 2026
It was already dead. 500k lines of slop, good luck building a tower on mud.
jackblemming•Feb 15, 2026
I appreciate the author’s work and he seems like a good guy.
In spite of that, it’s incredibly obvious OpenClaw was pushed by bots across pretty much every social media platform and that’s weird and unsettling.
maplethorpe•Feb 15, 2026
> That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.
You work for OpenAI now. You don't have to worry about safety anymore.
jjmarr•Feb 15, 2026
It's pretty depressing yet motivating seeing SWE bifurcate.
This is an app that would've normally had a dozen or so people behind it, all acquihired by OpenAI just to get the people who really drove the project.
With AI, it's one person who builds and takes everything.
Aurornis•Feb 15, 2026
> This is an app that would've normally had a dozen or so people behind it, all acquihired by OpenAI to find the people who really drove the project.
Acquihires haven't worked that way for a while. The new acquihire game is to buy out a few key execs and then have them recruit away the key developers, leaving the former company as a shell for someone else to take over and try to run.
Also OpenClaw was not a one-person operation. It had several maintainers working together.
gordonhart•Feb 15, 2026
Every day the software world feels more and more like a casino.
_nvs•Feb 15, 2026
Congrats — just the beginning for agents!
rob•Feb 15, 2026
All they have to do now is partner with one of the major messaging providers like Telegram, and they can offer this as a hosted bot solution and probably dominate the market. Yes, people are going out there buying Mac minis and enjoying setting it up themselves, but 90% of the general public don't want to do or maintain that and still want the benefits of all of it.
sarreph•Feb 15, 2026
Those attempting to discredit the value of OpenClaw by virtue of it being easily replicable or simple are missing the point. This was, like most successful entrepreneurial endeavours, a distribution play.
The creator built a powerful social media following and capitalized on that. Fair play.
thoughtjunkie•Feb 15, 2026
It's kind of a shame actually, because the whole promise of OpenClaw is that you own all the data yourself, you have complete control, you can write the memories or the personality of the bot. "Open"AI will never run ChatGPT this way. They want all of your data, your documents, your calendar, they want to keep it for themselves and lock you into their platform. They will want a sanitised corporate friendly version of an AI agent that reflects well on their brand.
_nvs•Feb 15, 2026
congrats @steipete!
firefoxd•Feb 15, 2026
Somehow we've normalized running random .exe files on our devices. Except now it's markdown.exe, and you sound like a zealot when advocating against it.
general_reveal•Feb 15, 2026
What should I do with all my video games?
Capricorn2481•Feb 16, 2026
My video games aren't connected to my email.
mrcwinn•Feb 15, 2026
I wouldn’t be able to sleep at night knowing I have to work for Sam Altman. Dude’s gross.
zmmmmm•Feb 15, 2026
Just like the original OpenAI story, this seems like a case of reputation hacking through asymmetry in risk tolerance.
There is not much novel about OpenClaw. Anybody could have thought of this or done it. The reason people have not released an agent that would run by itself, edit its own code and be exposed to the internet is not that it's hard or novel - it's because it is an utterly reckless thing to do. No responsible corporate entity could afford to do it. So we needed someone with little enough to lose, enough skill and willing to be reckless enough to do it and release it openly to let everyone else absorb the risk.
I think he's smart to jump on the job opportunity here because it may well turn out that this goes south in a big way very fast.
krackers•Feb 15, 2026
I think it's at the final stage of software pump and dump [1]. OpenAI is probably hiring more for the reputation/marketing, rather than for any technical skills behind OpenClaw.
To be fair, when used in retrospect, this applies to just about any big tech company
podgorniy•Feb 16, 2026
I like your analysis
shevy-java•Feb 15, 2026
> I’m joining OpenAI to work on bringing agents to everyone.
Sounds like a threat - "I'm joining OpenSkynetAI to bring AI agents onto your harddisc too!"
dev1ycan•Feb 15, 2026
This is how you can tell OpenAI is panicking: rather than build something fairly simple themselves, they insta-bought it for the headline news/"hype"...
ai-christianson•Feb 15, 2026
For anyone looking at alternatives in this space - I built Gobii (https://gobii.ai) 8 months before OpenClaw existed. MIT licensed, cloud native, gVisor sandboxed.
The sandboxing part matters more than people think. Giving an LLM a browser with full network access and no isolation is a real security problem that most projects in this space hand-wave away.
Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, open-weight models via vLLM). In production with paying customers.
Happy to answer architecture questions.
ed_mercer•Feb 15, 2026
Looks good! I'm curious, are customers fine with their data going to third-party LLM providers?
herval•Feb 15, 2026
I think this ship has sailed pretty hard, by now. Pretty much any app you can possibly use, from iTerm to Slack, is sending data to third-party LLMs (sometimes explicitly, most times as small features here and there)
ai-christianson•Feb 16, 2026
Control of where data goes is always an option. People just need to make that choice.
ai-christianson•Feb 16, 2026
Not sure what gives you that idea. One of our superpowers is that we're MIT licensed and deployable to private clouds, or even fully airgapped with 196GB+ of VRAM to run MiniMax on vLLM + Gobii.
neilellis•Feb 15, 2026
When I hear people talking about how insecure OpenClaw is, I remember how insecure the internet was in the early days. Sometimes it's about doing the right thing badly and fixing the bad things after.
Big Tech can't release software this dangerous and then figure out how to make it secure. For them it would be an absolute disaster and could ruin them.
What OpenClaw did was show us the future, give us a taste of what it would be like and had the balls to do it badly.
Technology is often pushed forwards by ostensibly bad ideas (like telnet) that carve a path through the jungle and let other people create roads after.
I don't get the hate towards OpenClaw, if it was a consumer product I would, but for hackers to play around to see what is possible it's an amazing (and ridiculously simple) idea. Much like http was.
If you connected to your bank account via telnet in the 1980s or plain http in the 90s or stored your secrets in 'crypt' well, you deserved what you got ;-) But that's how many great things get started, badly, we see the flaws fix them and we get the safe version.
And that I guess is what he'll get to do now.
* OpenClaw is a straw man for AGI *
fny•Feb 15, 2026
I really hope Mario who wrote the engine that powers OpenClaw[0] gets spoils as well.
OpenClaw is mostly a shell around this (ha!), and I've always been annoyed OpenClaw never credited those repos openly.
The pi agent repos are a joy to read, are 1/100th the size of OpenClaw, and have 95% of the functionality.
peter's claw is a lot more than just a wrapper around my slop.
i too had plenty of offers, but so far chose not to follow through with any of them, as i like my life as is.
also, peter is a good friend and gives plenty of credit. in fact, less credit would be nice, so i don't have to endure more vibeslopped issues and PRs going forward :)
rkunnamp•Feb 16, 2026
Your Pi is a piece of art. Thank you for building it. I spend almost 16 hrs a day with it. And there is not a single day I am not awestruck. Big fan!
Ozzie_osman•Feb 16, 2026
Best HN comment ever.
qingcharles•Feb 16, 2026
Peak HN for sure. This is what keeps me coming back.
loopasam•Feb 16, 2026
pi is the most amazing piece of software I have interacted with, thanks for building it
nkzd•Feb 16, 2026
Can you explain why for someone who is just familiar with traditional agents like Claude Code?
chrizel•Feb 16, 2026
I cannot directly answer your question, because I am looking into this topic myself currently, but I found this HN discussion from two weeks ago, which should give you more insights about pi: https://news.ycombinator.com/item?id=46844822
loopasam•Feb 16, 2026
For me it is the simplicity of it (transparent, minimal system prompts and harness), you can extend it the way you like, I don't have to install a (buggy) Electron app (CC or Codex app), it integrates where I work because it's simple (like in a standard terminal in VS Code). I'm not locked in with any vendor and can switch models whenever I want, and most importantly, I can effectively use it within apps that are themselves using it as a coding agent (the meta part: like a chat UI for very specific business cases). Being in TypeScript, it integrates very well with the browser and one can leverage the browser sandbox around it.
steipete•Feb 16, 2026
Mario has a special place in the Clawtributor list.
Other than Mario's response itself, pi is very frequently showcased at meetups organised by Peter and the OpenClaw community, so there is definitely crediting involved.
chillacy•Feb 16, 2026
Pi's great. I really noticed it when trying some of the openclaw clones which try to be smaller in binary size and end up not using pi.
maxaw•Feb 15, 2026
While following OpenClaw, I noticed an unexpected resentment in myself. After some introspection, I realized it’s tied to seeing a project achieve huge success while ignoring security norms many of us struggled to learn the hard way. On one level, it’s selfish discomfort at the feeling of being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”). On another level, it feels genuinely sad that the culture of enforcing security norms - work that has no direct personal reward and that end users will never consciously appreciate, but that only builders can uphold - seems to be on its way out
merlindru•Feb 15, 2026
i think your self reflection here is commendable. i agree on both counts.
i think the silver lining is that AI seems to be genuinely good at finding security issues and maybe further down the line enough to rely on it somewhat. the middle period we're entering right now is super scary.
we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.
still i'm hopeful we can figure it out somehow
xvector•Feb 15, 2026
Hey, as a security engineer in AI, I get where you're coming from.
But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible.
Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.
So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.
It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.
windexh8er•Feb 16, 2026
Security is always a cost center. We've seen multiple iterations of changes already impact security in the same ways over the last 20+ years. Nothing is different here and the outcomes will be the same: just good enough but always a step behind. The one thing that is a new lever to pull here is time, people need far less of it to make disastrous mistakes. But, ultimately, the game hasn't changed and security budgets will continue to be funneled to off the shelf products that barely work and the remainder of that budget will continue to go to the overworked and underpaid. Nothing really changes.
m11a•Feb 15, 2026
> seems to be on its way out
Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.
zamalek•Feb 15, 2026
> being left behind (“I still can’t bring myself to vibe code. I have to at least skim every diff. Meanwhile this guy is joining OpenAI”).
I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.
Something that's still up for grabs is figuring out how to go fully agentic in a responsible way. How do we bring the equivalent of skimming diffs to this?
rgbrenner•Feb 15, 2026
But the security risk wasn't taken by OpenClaw. Releasing vulnerable software that users run on their own machines isn't going to compromise OpenClaw itself. It can still deliver value for its users while also requiring those same users to handle the insecurity of the software themselves (by either ignoring it or setting up sandboxes, etc., to reduce the risk; maybe that reduced risk, weighed against the novelty and value of the software, makes it worth it to the user to set up).
On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.
So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential, to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on--and it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.
piker•Feb 16, 2026
You should join the tobacco lobby! Genius!
gehsty•Feb 16, 2026
More straightforwardly, people are generally very forgiving when people make mistakes, and very unforgiving when computers do. Look at how we view a person accidentally killing someone in a traffic accident versus when a robotaxi does it. Having people run it on their own hardware makes them take responsibility for it mentally, so gives a lot of leeway for errors.
datsci_est_2015•Feb 16, 2026
I think that’s generally because humans can be held accountable, but automated systems can not. We hold automated systems to a higher standard because there are no consequences for the system if it fails, beyond being shut off. On the other hand, there’s a genuine multitude of ways that a human can be held accountable, from stern admonishment to capital punishment.
I’m a broken record on this topic but it always comes back to liability.
ass22•Feb 16, 2026
Thats one aspect.
Another aspect is that we have much higher expectations of machines than humans in regards to fault-tolerance.
DrewADesign•Feb 16, 2026
Traffic accidents are the same symptom of fundamentally different underlying problems among human-driven and algorithmically-driven vehicles. Two very similar people differ more than the two most different robo taxis in any given uniform fleet— if one has some sort of bug or design shortcoming that kills people, they almost certainly all will. That’s why product (including automobile) recalls exist, but we don’t take away everyone’s license when one person gets into an accident. People have enough variance that acting on a whole population because of individual errors doesn’t make sense— even for pretty common errors. The cost/benefit is totally different for mass-produced goods.
Also, when individual drivers accidentally kill somebody in a traffic accident, they’re civilly liable under the same system as entities driving many cars through a collection of algorithms. The entities driving many cars can and should have a much greater exposure to risk, and be held to incomparably higher standards because the risk of getting it wrong is much, much greater.
casey2•Feb 16, 2026
Oh please, why equate IT BS with cancer? If the null pointer was a billion dollar mistake, then C was a trillion dollar invention.
At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution.
almostdeadguy•Feb 16, 2026
Love passing off the externalities of security to the user, and then the second order externalities of an LLM that then blackmails people in the wild. Love how we just don’t care anymore.
Aurornis•Feb 16, 2026
> But the security risk wasn't taken by OpenClaw
This is the genius move at the core of the phenomenon.
While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: They advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users.
It’ve been fascinated by how well the project has just dodged and avoided any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story.
Though maybe this inflection point made it the most obvious time to jump off of the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover the rise, but now it appears that the story has changed again before any narratives could build about the problems with OpenClaw.
socialcommenter•Feb 16, 2026
This argument has the same obvious flaws as the anti-mask/anti-vax movement (which unfortunately means there will always be a fringe that don't care). These things are allowed to interact with the outside world, it's not as simple as "users can blow their own system up, it's their responsibility".
I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib?
_heimdall•Feb 16, 2026
At least during the Covid response, your concerns over anti-mask and anti-vaccine issues seem unwarranted.
The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent set, it shouldn't matter whether one person chose not to mask up or get the jab: most everyone else could do so to fully protect themselves, and those who couldn't would only be at risk if more than 40% of the population weren't on board with the masking and vaccination protocols.
socialcommenter•Feb 16, 2026
I specifically wasn't referring to that instance (if anything I'm thinking more of the recent increase in measles outbreaks), I myself don't hold a strong view on COVID vaccinations. The trade-offs, and herd immunity thresholds, are different for different diseases.
Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible?
_heimdall•Feb 16, 2026
Fair enough. I assumed you had Covid in mind with an anti-mask reference. At least in modern history in the US, we have only even considered masks during the Covid response.
I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation.
socialcommenter•Feb 16, 2026
I think you're right - outside of COVID, it's not fringe, it's an accepted norm.
Personally I at least wish sick people would mask up on planes! Much more efficient than everyone else masking up or risking exposure.
_heimdall•Feb 16, 2026
Oh I wish sick people would just not get on a plane. I've cancelled a trip before, the last thing I want to do when sick is deal with the TSA, stand around in an airport, and be stuck in a metal tube with a bunch of other people.
Nevermark•Feb 16, 2026
> that anyone vaccinated was immune and couldn't catch it.
Those claims disappeared rapidly when it became clear they offered some protection, and reduced severity, but not immunity.
People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess.
_heimdall•Feb 16, 2026
Those claims were made after studies that ran over a short duration and specifically watched only for subjects who reported symptoms.
I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only that the information at the time being pushed out from experts doesn't align with an after the fact condemnation of anyone who chose not to.
moron4hire•Feb 16, 2026
We really needed to have made software engineering into a real, licensed engineering practice over a decade ago. You wanna write code that others will use? You need to be held to a binding set of ethical standards.
sodapopcan•Feb 16, 2026
Even though it means I probably wouldn't have a job, I think about this a lot and agree that it should happen. Nowadays, suggesting programmers should be highly knowledgeable about what they do will get you called a gatekeeper.
moron4hire•Feb 16, 2026
While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.
I used to work on industrial lifting crane simulation software. People used it to plan out how to perform big lift jobs to make sure they were safe. Literal "if we fuck this up, people could die" levels of responsibility. All the qualification I had was my BS in CS and two years of experience. It was lucky circumstance that I was actually quite good at math and physics, enough to discover that there were major errors in the physics model.
Not every programmer is going to encounter issues like that, but also, neither can we predict where things will end up. Not every lawyer is going to be a criminal defense lawyer. Not every doctor is going to be a brain surgeon. Not every architect is going to design skyscrapers. But they all do work that needs to be warranteed in some way.
We're already seeing people getting killed because of AI. Brian in middle management "getting to code again" is not a good enough reason.
sodapopcan•Feb 16, 2026
> While it is literally gatekeeping, it's necessary. Doctors, architects, lawyers should be gatekept.
That was exactly my point. It's one of those things where people deliberately use a word that is technically correct in a context where it doesn't, or shouldn't, hold true. Does this mean I want to stop people from "vibe coding" Flappy Bird? No, of course not, but as per your original comment, yes, there should be stricter regulations when it comes to hiring.
moron4hire•Feb 16, 2026
Yeah, I know what you mean. It is a weapon people throw around on social media sites.
SpicyLemonZest•Feb 16, 2026
I don't agree that making your users run the binaries means security isn't your concern. Perhaps it doesn't have to be quite as buttoned down as a commercial product, but you can't release something broken by design and wash your hands of the consequences. Within a few months, someone is going to deploy a large-scale exploit which absolutely ruins OpenClaw users, and the author's new OpenAI job will probably allow him to evade any real accountability for it.
flessner•Feb 16, 2026
I am guessing there will be an OpenClaw "competitor" targeting Enterprise within the next 1-2 months. If OpenAI, Anthropic or Gemini are fast and smart about it they could grab some serious ground.
OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use.
socialcommenter•Feb 16, 2026
With the guard rails up, right? Right?
buremba•Feb 16, 2026
Exactly! I was digging into Openclaw codebase for the last 2 weeks and the core ideas are very inspiring.
The main work he has done to enable a personal agent is his army of CLIs, like 40 of them.
The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code, and it's not really the best fit for a personal agent, and it's pretty heavy.
Since I was planning to release my project as a Cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given OpenClaw is open source and I can plug in its runtime to replace Claude Code.
I decided to release it as open source because at this point software is free.
At the end of the day, he built something people want. That’s what really matters. OpenAI and Anthropic could not build it because of the security issues you point out. But people are using it and there is a need for it. Good on him for recognizing this and giving people what they want. We’re all adults and the users will be responsible for whatever issues they run into because of the lack of security around this project.
iugtmkbdfil834•Feb 16, 2026
Admittedly, I might not be the.. targeted demographic here, but I can't say I understand what problem it solves; even a cursory read immediately flags all the ways in which it can go wrong (including the recent 'rent a human' HN post). I am fascinated, and I wonder if it is partially that fascination that drives the current wave of adoption.
I will say openly: I don't get it and I used to argue for crypto use cases.
andyferris•Feb 16, 2026
I don't know. It's more of a sharp tool like a web browser (also called a "user agent") - yes an inexperienced user can quickly get themselves into trouble without realizing it (in a browser or openclaw), yes the agent means it might even happen without you being there.
A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?
DrewADesign•Feb 16, 2026
I think you should give your gut instinct more credit. The tech world has gotten a false sense of security from the big SaaS platforms running everything, which make the nitty gritty security details disappear in a seamless user experience, and that includes LLM chatbot providers. Even open source development libraries with exposure to the wild are so heavily scrutinized and well-honed that it’s easy even for people like me who started in the 90s to lose sight of the real risk on the other side of that. No more popping up some raw script on an Apache server to do its best against whatever is out there. Vibe coded projects trade a lot of that hard-won stability for the convenience of not having to consider some amount of the implementation details. People who are jumping all over this for anything except sandbox usage either don’t know any better, or forgot what they’ve learned.
project2501a•Feb 16, 2026
Totally agree. And the fact that the author says
> What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone.
does not make me feel all warm and fuzzy. Yeah, changing the world with Thiel's money. Try joining a union instead.
vkou•Feb 16, 2026
Change the world into what? Techno-feudalism?
Ever since I was four, I've dreamed of doing my part to bring that about.
kranke155•Feb 16, 2026
Very happy to see techno feudalism being mentioned here in HN.
Whatever the origins of the term, it now seems clear it’s kind of the direction things are going.
komali2•Feb 16, 2026
I recently met a guy that goes to these "San Francisco Freedom Club" parties. Check their website, it's basically just a lot of Capitalism Fans and megawealthies getting drunk somewhere fancy in SF. Anyway, he's an ultra-capitalist and we spent a day at a cafe (co-working event) chatting in a conversation that started with him proposing private roads and shot into orbit when he said "Should we be valuing all humans equally?"
Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.
tcoff91•Feb 16, 2026
So basically a bunch of rich tech edgelords are just doing blow and trying to bring about the world as depicted in Snow Crash?!
Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery.
There are a disturbing number of parallels between Elon and L. Bob Rife.
It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia.
trollbridge•Feb 16, 2026
I was really into the idea of kings, knights, castles, princesses etc when I was 4.
Trasmatta•Feb 16, 2026
I've been feeling this SO much lately, in many ways. In addition to security, just the feeling of spending decades learning to write clean code, valuing a deep understanding of my codebase and tooling, thorough testing, maintainability, etc, etc. Now the industry is basically telling me "all that expertise is pointless, you should give it up, all we care about is a future of endless AI slop that nobody understands".
_fzslm•Feb 16, 2026
AI slop will collapse under its own weight without oversight. I really think we will need new frameworks to support AI-generated code. Engineers with high standards will be needed to build and maintain the tools and technologies so that AI-written code can thrive. It's not game over just yet
Trasmatta•Feb 16, 2026
Thanks, I've been feeling the same way. But it seems like we're some years away from the industry fully realizing it. Makes me want to quit my job and just code my own stuff.
yoyohello13•Feb 16, 2026
I've been feeling a similar kind of resentment often. My whole life I have prided myself on being the guy that actually bothers to read the docs and understand how shit works. Seems like the whole industry is basically saying none of that matters, no need to understand anything deeply anymore. Feels bad man.
chillfox•Feb 16, 2026
Every single new tech industry thing has to learn security from scratch. It's always been that way. A significant number of people in tech just don't believe that there's anything to learn from history.
ryandrake•Feb 16, 2026
And the industry actively pushes graybeards away who have already been there done that.
mgraczyk•Feb 16, 2026
But in this case following security norms would be a mistake. The right thing to take away is that you shouldn't dogmatically follow norms. Sometimes it's better to just build things if there is very little risk
Nothing actually bad happened in this case and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using openclaw).
So my unsubstantiated conspiracy theory regarding Clawd/Molt/OpenClaw is that the hype was bought, probably by OpenAI. I find it too convenient that not long after the phrase “the AI bubble” starts coming into common speech, we see the emergence of a “viral” use case that all of the paid influencers on the Internet seem to converge on at the same time. At the end of the day, piping AI output with tool access into a while loop is not revolutionary. The people who had been experimenting with these types of setups back when LangChain was the hotness didn't organically go viral, because most people knew that giving a language model unrestricted access to your online presence or bank account is extremely reckless. The “I gave OpenClaw $100 and now I bought my second Lambo. Buy my ebook” stories don't seem credible.
So don’t feel bad. Everything on the internet is fake.
tempest_•Feb 16, 2026
The modern influencer landscape was such a boon for corporations.
For less than the cost of 1 graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave.
Add a few LLM-generated comments that don't throw the product in your face but make sure it's always part of the conversation, so someone else will do it for you for free, and you are off to the races.
bionhoward•Feb 16, 2026
building this openclaw thing that competes with openai using codex is against the openai terms of service, which say you can't use it to make stuff that competes with them. but they compete with everyone. by giving zero fucks (or just not reading the fine print), bro was rewarded by the dumb rule people for breaking the dumb rules. this happens over and over. there is a lesson here
jiveturkey•Feb 16, 2026
underrated comment
and this is why they bought Peter. i’m betting he will come to regret it.
jrjeksjd8d•Feb 16, 2026
For my entire career in tech (~20 years) I have been technically good but bad at identifying business trends. I left Shopify right before their stock 4xed during COVID because their technology was stagnating and the culture was toxic. The market didn't care about any of that, I could have hung around and been a millionaire. I've been at 3 early stage startups and the difference between winners and losers was nothing to do with quality or security.
The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.
gricardo99•Feb 16, 2026
> bad at identifying business trends
I think you’re being unduly harsh on yourself. At least by the Shopify/COVID example. COVID was a black swan event, which may very well have completely changed the fortunes of companies like Shopify when online commerce surged and became vital to the economy. Shortcomings, mismanagement and bad culture can be completely papered over by growth and revenue.
Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future.
wat10000•Feb 16, 2026
This is a normal reaction to unfairness. You see someone who you believe is Doing It Wrong (and I’d agree), and they’re rewarded for it. Meanwhile you Do It Right and your reward isn’t nearly as much. It’s natural to find this upsetting.
Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.
m3kw9•Feb 16, 2026
Security always is the most time consuming in a backend project
vibeprofessor•Feb 16, 2026
Well OpenClaw has ~3k open PRs (many touching security) on GitHub right now. Peter's move shows killer product UI/UX, ease of use and user growth trump everything. Now OpenAI will throw their full engineering firepower at squashing those flaws in no time.
Making users happy > perfect security day one
ass22•Feb 16, 2026
"Peter's move shows killer product UI/UX, ease of use and user growth trump everything. "
Erm, is this some groundbreaking revelation?
It's always been that way. Unless it's in the context of superior technology with minimal UI, a la Google Search in its early years.
the_mar•Feb 16, 2026
google search did have a killer UI though, you might be forgetting what search looked like before google
ass22•Feb 16, 2026
A list of results is not a killer UI.
The technology was the killer. Technology providing the right list of results and fast.
OH and believe it or not, this continues to be the core of Google today - they suck at product design and marketing.
the_mar•Feb 16, 2026
It was killer compared to alternatives. All other "homepages" of the internet were a cluttered mess of ads.
I feel like we are arguing semantics though. But IMO any UI that does the job that consumers want well is good UI. Just because it was simple doesn't mean it wasn't good
jatora•Feb 16, 2026
Your introspection missed the obvious point that you just wish you were him. Your resentment had nothing to do with security. It's a self-revelation that you don't actually care about it either and you resent wasting your time.
jnaina•Feb 16, 2026
Damn. I just installed OpenClaw on my M2 Mac and hopped on a plane for our SKO in LAX. United delayed the plane departure by 2 hours (of course) and diverted the flight to Honolulu. And Claw (that's the name of my new AI agent) kept me updated on my rebooking options and new terminal/gate assignments in SFO. All through the free WhatsApp access on United. AND, it refactored all my transferred Python code, built a graph of my emails, installed MariaDB and restored a backup from another PC. And, I almost forgot, fixed my 1337x web scraping (don't ask) cron job by Cloudflare-proofing it. All the while sitting in a shitty airline, with shitty food and shittier seats, hurtling across the Pacific Ocean.
The future is both amazing and shitty.
Hope OpenClaw continues to evolve. It is indeed an amazing piece of work.
And I hope sama doesn't get his grubby greedy hands on OpenClaw.
paulryanrogers•Feb 16, 2026
Did you ask OpenClaw to do all those things? If not did you want it to do all of them?
jnaina•Feb 16, 2026
I asked it to check why the cron job kept failing, and it checked the cron payload and recommended reasons for the failure. I gave it approval to go ahead and fix it. It tried different options (like trying different domains) and finally figured out the anti-CF option.
jnaina•Feb 16, 2026
the other tasks (like the MariaDB install and restore, python code refactoring) were a result of the initial requests made to Claw, like graphing my gmail email archives.
antod•Feb 16, 2026
> The future is both amazing and shitty
I feel like we're living in one of those breathless futurist interviews from a 1994 issue of Wired mag.
jnaina•Feb 16, 2026
No, I'm not William Gibson. Swear to god.
chimeracoder•Feb 16, 2026
> hopped on a plane for SKO in LAX. United delayed the plane departure by 2 hours (of course) and diverted the flight to Honolulu.
I'm assuming there's a typo here, because I can't imagine a flight from LAX to SKO at all, let alone one that goes anywhere close to Honolulu. But I can't figure out what this was supposed to be.
jnaina•Feb 16, 2026
SKO ---> Sales Kick Off. Apologies for the acronym overdose
s3p•Feb 16, 2026
What about token usage? I've noticed that simple conversations balloon to 100k+ tokens within 1-3 messages. Did you have this issue?
jnaina•Feb 16, 2026
I have a Claude Max subscription for the main agent tasks. Also use my OpenAI API and Gemini API access for sub-agent work.
Once my Olares One is here, will also be using local LLMs on open models.
I think that's against the TOS of the Claude Max subscription. You risk being banned.
SilentM68•Feb 16, 2026
Best way to democratize AI is to keep it as free or as inexpensive as possible.
rkunnamp•Feb 16, 2026
I really hope Mario and Armin also get poached.
The real gem inside OpenClaw is pi, the agent created by Mario Zechner. Pi is by far the best agent framework in the world: the most extensible, with the best primitives.
Armin Ronacher, creator of Flask, can go deep and make something like OpenClaw enterprise-ready.
The value of Peter is in connecting the dots, thinking from the user's perspective, and bringing a business perspective.
The trio are friends and have together vibe-coded VibeTunnel.
Sam Altman, if you are reading this, get Mario and Armin today.
beepbooptheory•Feb 16, 2026
What does "enterprise ready" mean in this context?
rkunnamp•Feb 16, 2026
Currently the two big issues with OpenClaw are memory and security. Armin is working on sandboxing primitives, which could help a lot.
beepbooptheory•Feb 16, 2026
What does one do with a sandboxed openclaw?
bigfishrunning•Feb 16, 2026
It means "ready for you to allow it to ruin your enterprise"
discordance•Feb 16, 2026
Expensive, bloated and safe for white collars to purchase.
poontangbot•Feb 16, 2026
Time to uninstall
whiterock•Feb 16, 2026
It's just crazy to me that this guy lives around the corner. That should inspire some hope in me, I guess, that even people from Vienna can be successful on such a level.
GalaxyNova•Feb 16, 2026
It's strange how quickly this project got so big... It did not seem like anything particularly novel to me.
andrewchambers•Feb 16, 2026
I think it was obvious, yet nobody seemed to have released a version people could actually easily use.
The feature set is pretty simple:
- Agents that can write their own tools.
- Agents that can write their own skills.
- Agents that can chat via standard chat apps.
- Agents that can install and use cli software.
- Agents that can have a bit of state on disk.
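The feature set above really is mostly a loop around a model with tool access. A minimal, self-contained sketch of the pattern (the "model" here is a hard-coded stub and the JSON tool protocol is invented for illustration; this is not OpenClaw's actual code):

```python
# Hypothetical sketch of the agent-loop pattern: a model proposes tool
# calls, the loop executes them, and state persists on disk between runs.
import json
import os
import subprocess
import tempfile

def fake_model(history):
    """Stand-in for an LLM call: requests one shell tool, then finishes."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "shell", "args": {"cmd": "echo hello from the agent"}}
    return {"tool": None, "final": "done"}

def run_tool(name, args):
    if name == "shell":  # "agents that can install and use cli software"
        out = subprocess.run(args["cmd"], shell=True,
                             capture_output=True, text=True)
        return out.stdout.strip()
    raise ValueError(f"unknown tool {name}")

def agent(state_path):
    # "agents that can have a bit of state on disk"
    history = json.load(open(state_path)) if os.path.exists(state_path) else []
    while True:
        action = fake_model(history)
        if action["tool"] is None:
            break
        result = run_tool(action["tool"], action["args"])
        history.append({"role": "tool", "name": action["tool"],
                        "content": result})
        with open(state_path, "w") as f:
            json.dump(history, f)  # persist after every step
    return history

state = os.path.join(tempfile.mkdtemp(), "state.json")
print(agent(state))
```

A real version would swap `fake_model` for an actual LLM API call and add more tools (chat bridges, skill files, self-written helpers), but the control flow is essentially this.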
cowsandmilk•Feb 16, 2026
> nobody seemed to have released a version people could actually easily use
Yet I’ve known many people who have said it is difficult to use; this was a 0.01-0.1% adoption tool. There is still a huge ease of use gap to cross to make it adopted in 10-50% of computer users.
andrewchambers•Feb 16, 2026
Yeah - people are hungry for it. They tolerate the crappy docs and the difficulties.
disiplus•Feb 16, 2026
that's by design, you know, all those huge security implications. now imagine if it was so easy to set up and install and use.
swyx•Feb 16, 2026
good summary. i think you forgot heartbeat.md which powers some autonomy.
do you think the agent admin ui mattered at all?
other contributors while i think of them:
- good timing around opus 4.6 as the default model? (i know he used codex, but willing to bet the majority of openclaws are opuses)
- make immediate wins for nontechnical users. everyone else was busy chasing cursor/cognition or building horizontal stuff like turbopuffer or whatever. this one was straight up "hook up a good bot to telegram"
- there's many attempts at "personal OS", "assistant", but no good ones open source? a lot of sketchier china ones, this was the first western one
ALittleLight•Feb 16, 2026
Aren't all of these things you can do with Claude Code? Granted, the chat app one is novel, but you could ask Claude Code to set that up.
potamic•Feb 16, 2026
Most things that go viral actually have a concerted marketing push behind them. I suspect that was the case here. Something about the way people talked about it didn't come across as very genuine.
elAhmo•Feb 16, 2026
As someone who attended numerous meetups from the author and saw the vibe among those events, believe me it was as genuine as it can get.
the_mar•Feb 16, 2026
do you genuinely think that numerous meetups isn't a marketing push?
podgorniy•Feb 16, 2026
It's another game where software quality, security, or novelty is not an outcome-defining factor.
tdhz77•Feb 16, 2026
Going to short OpenAI after hearing this.
andxor•Feb 16, 2026
You can't short OpenAI because it's a private company. Also: don't.
tdhz77•Feb 16, 2026
/s he is my friend
mortsnort•Feb 16, 2026
Move fast and break things...
FpUser•Feb 16, 2026
>"What I want is to change the world"
Thank you, we're already fucked. I am a hypocrite, of course.
TSiege•Feb 16, 2026
There are a few takeaways I think the detractors and celebrators here are missing.
1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
2. Any pretense for AI Safety concerns that had been coming from OpenAI really fall flat with this move. We've seen multiple hacks, scams, and misaligned AI action from this project that has only been used in the wild for a few months.
3. We've yet to see any moats in the AI space and this scares the big players. Models are neck and neck with one another and open source models are not too far behind. Claude Code is great, but so is OpenCode. Now Peter has used AI to program a free app for AI agents.
LLMs and AI are going to be as disruptive as Web 1 and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be made into Web 2.0
mjr00•Feb 16, 2026
> 1. OpenAI is saying with this statement "You could be a multimillionaire while having AI do all the work for you." This buyout of something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
This is a great take and hasn't been spoken about nearly enough in this comment section. Spending a few million to buy out Openclaw('s creator), which is by far the most notable product made by Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt.
AlexCoventry•Feb 16, 2026
He's also a great booster of Codex. Says he greatly prefers it to Claude. So his role might turn out to be evangelism.
ass22•Feb 16, 2026
Yup, he's highly delusional if he actually thinks Sam cares about him and the project. It's all about optics.
DANmode•Feb 16, 2026
Who purported that Sam cares about him?
Why would he care if Sam cares about him?
whattheheckheck•Feb 16, 2026
Listen to him on a podcast? He said he liked Zuckerberg being more personal with him and Sam was colder
ass22•Feb 16, 2026
Someone clearly hasn't watched the podcast. Do your research before posting.
DANmode•Feb 16, 2026
These are comments on the posted article.
If you want to bring other sources into the conversation, you could link,
or at least reference them by name upfront, right?
ass22•Feb 16, 2026
Thats all it is really. It is to say "See! Look what a handful of people armed with our tools can do".
Whether the impact is large in magnitude or positive is irrelevant in a world where one can spin the truth and get away with it.
Aurornis•Feb 16, 2026
> This buy out for something vibe coded
I think all of these comments about acquisitions or buy outs aren’t reading the blog post carefully: The post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI.
There are two sentences at the top that sum it up:
> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product.
TSiege•Feb 16, 2026
Fair enough. Call it a high profile acquihire then
daveidol•Feb 16, 2026
“Acquihire” means there was an acquisition. This is just a hire.
baruz•Feb 16, 2026
What got acquired?
Nevermark•Feb 16, 2026
A portfolio of PR.
tummler•Feb 16, 2026
I don't mean to be cynical, but I read this move as: OpenAI scared, no way to make money with similar product, so acqui-hire the creator to keep him busy.
I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go.
blackoil•Feb 16, 2026
It isn't an acqui-hire, just a simple hiring. Also, unless the creator is some mythical 100x developer, there will be enough developers.
whattheheckheck•Feb 16, 2026
He is a mythical 100x dev compared to how everyone else is doing agentic engineering... look at the openclaw commit history on github. Everything's on main
nosuchthing•Feb 16, 2026
Peter has been running agents overnight for months using free tokens from his influencer payments to promote AI startups and multiple subscription accounts:
Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ... I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost my around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
... Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothen it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in russian or korean. Sometimes the monster slips and sends raw thinking to bash.
you're telling me the guy isn't committing 1000 times a day manually?!
phanimahesh•Feb 16, 2026
The long list of domain names that vercel deployed to is interesting
gregjw•Feb 16, 2026
he commits every other minute. it's clearly just his vibecoding agent.
sathish316•Feb 16, 2026
I think the blog says @steipete sold his SOUL.md for Sam Altman’s deal and let down the community.
OpenClaw's promise and power was that it could tread places, security-wise, that no other established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way.
It will end up with the same fate as Manus. Instead of Manus helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations.
Nevermark•Feb 16, 2026
> OpenClaw’s promise and power was that it could tread places SECURITY-WISE that no other established enterprise company could
[Emphasis mine.]
That's a superpower right up to the moment everyone realizes that handing out nukes isn't "promise and power".
Unless by promise and power we are talking about chaos and crime.
The project is incredible. We are seeing something important: how versatile these models are given freedom to act and communicate with each other.
At the same time, it is clearly going to put the internet at risk. Bad actors are going to use OpenClaw and its "security-wise" freedoms in nefarious ways. Curious people are going to push AIs with funds onto prepaid servers, then let them sink or swim with regard to agentic acquisition of survival resources.
It is all kinds of crazy from here.
hattmall•Feb 16, 2026
That's a really interesting proposition. Load the AI and let it go wild to try to figure out how it can earn enough money to survive. Perhaps this could be the new Turing test.
FartyMcFarter•Feb 16, 2026
It's all fun and games until the AI starts bullying and threatening its way to survival.
Nevermark•Feb 16, 2026
If it is a Turing Test, it's a test of our intelligence, and we failed.
The internet is a wild west of privacy, security, social and ethical holes an army of grifters routinely drive through. And in the case of some famous big firms, leverage and magnify at scale.
That is bad enough.
But setting up a horde of intelligent beings, so that those holes are their critical path to survival, is like pouring poison into the water supply to see what happens.
Can’t argue with “interesting”. It is that.
keepamovin•Feb 16, 2026
I think both this comment and OP's confuse this.
It appears to be more of a typical large company (BIG) market-share protection purchase at minimal cost, using information asymmetry and timing.
BIG hires small team (SMOL) of popular source-available/OSS product P before SMOL realizes they can compete with BIG and before SMOL organizes effort toward such along with apt corporate, legal, etc protection.
At the time of purchase, neither SMOL nor BIG know yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc) and once it accepts that possibility, BIG knows to act later is more expensive than to act sooner.
The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team; purchasing a company with revenue/investors is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price, and declutter future timelines of risks.
Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo - do not say customers), "own thing".
In reality, in this case P's stickiness was clear: people allocating thousands of dollars toward AI, lured merely by P's possibilities. It was only a matter of time before investment followed.
I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" (including ones you will all know) have approached with the same kind of routine: hire, or some variations of that theme with varying degrees of legal cleverness/trickery in documents. In all cases, I rejected, because it never felt right. That's how I know what I'm telling you here.
I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash:
And it came to pass in the days of the Great Silicon Plain, that there arose a youth named Mark, of the tribe of the Harvardites. And Mark fashioned a Great Loom, which men called the Face-Book, wherewith the people of the earth might weave the threads of their lives into a single tapestry.
And the Loom grew with a great exceeding speed, for the people found it to be a thing of much wonder. Yet Mark was but SMOL, and his tabernacle was built of hope and raw code, having not yet the walls of many lawyers or the towers of gold.
Then came the elders of the House of Yahoo, a BIG people, whose chariots were many but whose engines were grown cold. And they looked upon the Loom and were sore afraid, saying among themselves, “Behold, if this youth continueth to weave, he shall surely cover the whole earth, and our own garments shall appear as rags. Let us go down now, while he is yet unaware of his own strength, and buy him for a pittance of silver, before he realizeth he is a King.”
And the Yahoos approached the youth with soft words and the craftiness of the serpent. They spake unto him, saying, “Verily, Mark, thy Loom is a pleasant toy, a niche for the young, a mere 'fan base' of the idle. It is not a true Business, nor can it withstand the storms of the market. Come, take of our silver—a billion pieces—and dwell within our walls. For thy Loom is but a small thing, and thou art but a child in the ways of the law.”
And they used the Hidden Speech, which in the common tongue is called Gas-Lighting. They said, “Thou hast no revenue; thy path is uncertain; thy Loom is but a curiosity. We offer thee safety, for the days are evil.”
But the Spirit of Vision dwelled within the youth. He looked upon the Yahoos and saw not their strength, but their fear. He perceived the Asymmetry of Truth: that the BIG sought to purchase the future at the price of the past, and to slay the giant-slayer while he yet slumbered in his cradle.
The elders of Mark’s own house cried out, “Take the silver! For never hath such a sum been seen!”
But Mark hardened his heart against the Yahoos. He spake, saying, “Ye say my Loom is a niche, yet ye bring a billion pieces of silver to buy it. Ye say it is not a business, yet ye hasten to possess it before the sun sets. If the Loom be worth this much to you who are blind, what must it be worth to me who can see?”
And he sent the Yahoos away empty-handed.
The Yahoos mocked him, saying, “Thou art a fool! Thou shalt perish in the wilderness!” But it was the House of Yahoo that began to wither, for their timing was spent and their craftiness had failed.
And Mark remained SMOL for a season, until his roots grew deep and his walls grew high. And the Loom became a Great Empire, and the billion pieces of silver became as dust compared to the gold that followed.
The Lesson of the Prophet:
Hearken, ye who are SMOL and buildeth the New Things: When the BIG come unto thee with haste, speaking of thy "limitations" while clutching their purses, believe not their tongues. For they seek not to crown thee, but to bury thee in a shallow grave of silver before thou learnest the name of thy own power.
For if they knew thy work was truly naught, they would bide their time. But because they know the harvest is great, they seek to buy the field before the first ear of corn is ripe.
Blessed is the builder who knoweth his own worth, and thrice blessed is he who biddeth the Giants to depart, that his own vine may grow to cover the sun.
keepamovin•Feb 16, 2026
But, hey, that said - joining a big AI company at this time in history? Not exactly a terrible career move. It would be fun. I hope it's good.
SilverElfin•Feb 16, 2026
This is to avoid OpenClaw liability, and because hiring people (often with a license to their tech or patents) is the new, smarter way to acquire while avoiding antitrust issues.
jrsj•Feb 16, 2026
There are plenty of straightforward reasons why OpenAI would want to do this; it doesn't need to be some sort of malicious conspiracy.
I think it's good PR (particularly since Anthropic's actions against OpenCode and Clawdbot were somewhat controversial) + Peter was able to build a hugely popular thing & clearly would be valuable to have on the team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.
jetbalsa•Feb 16, 2026
I suspect Anthropic was seeing a huge spike of concurrent model usage at too fast a rate, something Claude Code just doesn't do; CC is rather "slow" at API calls per minute. Also lots and lots of cache; the sheer amount of caching that Claude does is insane.
jrsj•Feb 16, 2026
It’s hard to say exactly what prompted the decision but they banned people paying $200/mo without warning & without any reasonable appeal system in place. It’s a Google form that is itself reviewed by some automated system that may or may not ever get back to you.
This was already an ongoing issue prior to 3rd party tools using Claude subscriptions, there are reports of false positive automated bans going back for several months.
I have not seen or heard of this happening w/ Codex, and rather than trying to shut down 3rd party tools that want to integrate with their ecosystem they have worked with those projects to add official support.
I’m more impressed with Codex as a product in general as well. Their new desktop app is great & feels an order of magnitude better than Claude’s.
Overall HN crowd seems heavily biased in favor of Anthropic (or maybe just against OpenAI?) but IMO Anthropic needs to take a step back and reset. If they keep on the current path of just making small iterative improvements to Claude Code and Claude Desktop they are going to fall very far behind.
alephnerd•Feb 16, 2026
Most of these are good callouts, but I think it is best for us to look at the evolution of the AI segment in the same manner as "Cloud" developed into a segment in the 2000s and 2010s.
3 is always a result of GTM and distribution - an organization that devotes time and effort into productionizing domain-specific models and selling to their existing customers can outcompete a foundation model company which does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery because they have had established relations with Wiz and they have proven their value already. It's the same way that PANW was able to establish itself in the Cloud Security space fairly early because they already established trust with DevOps and Infra teams with on-prem deployments and DCs so those buyers were open to purchasing cloud security bundles from PANW.
1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house because there are only so many employees and so many hours in a week.
2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in its traditional definition have disappeared - which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity".
> They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released
I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout to incorporate avenues to expand TAM, and despite how much engineers may wish, "buy" will always outcompete "build" because time is also a cost.
Most people I know working at these foundation model companies are thinking in terms of becoming an "AWS" type of foundational platform in our industry, and it's best to keep Nikesh Arora's principle of platformization in mind.
---
All this shows is that the thesis that most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months.
By the time you see an announcement on TechCrunch or HN, most of us operators were already working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to recognize what are Growth Equity (eg. the recent Anthropic round) versus Private Equity (eg. Sailpoint's acquisition and then IPO by Thoma Bravo) versus Early Stage VC rounds (largely not announced until several months after the round unless we need to get an O1A for a founder or key employee).
ass22•Feb 16, 2026
"build a hugely popular tool"
Define hugely popular relative to the scale of users of OAI... personally this thread is the first time I've heard of openclaw.
alephnerd•Feb 16, 2026
The tech industry is broad, and if you are using OpenAI in a consumer and personal manner you weren't the primary persona amongst whom the conversation around OpenClaw occurred.
Additionally, much of the conversation I've seen was amongst practitioners and Mid/Upper Level Management who are already heavy users of AI/ML and heavy users of Executive Assistants.
There is a reason why if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, and Hyderabad you are increasingly out of the loop for a number of changes that are happening within the industry.
If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening - I've noticed that anti-AI Luddism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much.
I feel this is also a reflection of the fact that most Bay Area and Asia HNers are mostly in-person or hybrid now, so most conversations that would have happened on HN are now occurring on private Slacks, Discords, or at a bar or gym.
F7F7F7•Feb 16, 2026
I saw the hype around OpenClaw on the likes of X. I'm a Mid/Upper Level manager and would sooner have my team roll our own solution on top of Letta or Mastra than trust OpenClaw. Also, I'm frequently in many of those cities you mentioned but don't live in one. Aside from 'networking' and funding there's not much that anyone's missing.
Participation in the Zeitgeist hasn't been regional in a decade.
alephnerd•Feb 16, 2026
> would sooner have my team roll our own solution on top of Letta or Mastra before I trusted OpenClaw
A lot of teams explicitly did that for OpenClaw as well. Letta and Mastra are similar but didn't have the right kind of packaging: they target engineers, not decisionmakers who aren't coding on a daily basis.
> Participation in the Zeitgeist hasn't been regional in a decade
I strongly disagree - there is a lot of stuff happening in stealth or under NDA, and as such a large number of practitioners on HN cannot announce what they are doing. The only way to get a pulse of what is happening requires being in person constantly with other similar decisionmakers or founders.
A lot of this only happens through impromptu conversations in person, and requires you to constantly be in that group. This info eventually disperses, but often takes weeks to months in other hubs.
Karrot_Kream•Feb 16, 2026
FWIW I also just don't think there's a point to discussing AI/ML usage here. The community is too crabby and cynical, looking too hard at how to tear people and things down, trying to react with the most negative thing they can. Every discussion on AI here eventually devolves into "AI can turn water to gold!" "no you idiot, AI uses so much water we won't have enough water left oh and AI is what ICE and Palantir use"
As the (dubiously attributed) Picasso quote goes: "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine." Most of HN is the former, constantly theorizing, philosophizing, often (but not always) in a negative and cynical way. This isn't conducive to discussion of methods of art. Sadly I just speak with friends working on other AI things instead.
Someone like simonw can probably get better reactions from this community but I don't bother.
rdfc-xn-uuid•Feb 16, 2026
> There is a reason why if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, and Hyderabad you are increasingly out of the loop for a number of changes that are happening within the industry.
I am in one of these tech hubs (Bangalore) and I have never seen any such practitioner pervasively using these "AI executive assistants". People use chatgpt and sometimes the AI extensions like copilot. Do I need to be in HSR layout to see these "number of changes"?
ass22•Feb 16, 2026
This is so cringe lmao.
NickNaraghi•Feb 16, 2026
you living under a rock
Rapzid•Feb 16, 2026
Last week it was renamed from "Clawd" and this week the creator is abandoning it. Everything is moving fast.
rlt•Feb 16, 2026
Don’t forget “Moltbot” between “Clawdbot” and “OpenClaw”!
I think that name lasted about 24 hours, but it was long enough to spawn MoltBook.
xmprt•Feb 16, 2026
To give you an idea of the scale, OpenClaw is probably one of the biggest developments in open source AI tools in the last couple of months. And given the pace of AI, that's a big deal.
F7F7F7•Feb 16, 2026
In what context are you using the word "development?"
Letta (MemGPT) has been around for years and frameworks like Mastra have been getting serious Enterprise attention for most of 2025. Memory + Tasks is not novel or new.
Is it the out-of-the-box nature that's the 'biggest' development? Am I missing something else?
alephnerd•Feb 16, 2026
Not OP, but it was revolutionary in the same way the ChatGPT and DeepSeek apps were: it packaged capabilities in a fairly easy-to-use manner that both technical and non-technical decisionmakers could use.
If you can provide any sort of tool that can reduce mundane work for a decisionmaker with a title of Director and above, it can be extremely powerful.
SilverElfin•Feb 16, 2026
Yep it isn’t actually that interesting. He just rushed out something that has none of the essentials figured out. Like security
whattheheckheck•Feb 16, 2026
190k stars on github
16mb•Feb 16, 2026
How many are bots?
ass22•Feb 16, 2026
Yeah personally not convinced by any of this. Im not a SWE so I dont care about what hes achieved here in relation to that. Im not seeing the value in the product rather what I see is the value to OAI in regards to the hype associated with this project.
nilkn•Feb 16, 2026
This comment is filled with speculation which I think is mostly unfounded and unnecessarily negative in its orientation.
Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.
nosuchthing•Feb 16, 2026
> OpenAI has deleted the word 'safely' from its mission (November 2025)
The headline implies they selectively removed the word "safely," but that doesn't seem to be the case.
From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence.
There could be stronger evidence that OpenAI is deemphasizing safety, but this isn't it.
They also removed the words build, develop, deploy, and technology, indicating that they're no longer a tech company and don't make products anymore. Wonder what they're all gonna do now?
/s
godelski•Feb 16, 2026
> To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature
So because Peter said the next version is going to be safe means it'll be safe? I prefer to judge people by their actions more than their words. The fact that OpenClaw is not just unsafe but, as you put it, infamously so, only begs the question "why wasn't it built safely the first time?"
As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't spend much focus on safety research and theory. Yes, they do fund these things, but the funding pales in comparison. I'm sorry, but claiming something might kill all humans and potentially all life is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. For example, they released Sora allowing people to generate videos in the likeness of others, which helped it go viral, and only then implemented some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first time, nor the last.
motoboi•Feb 16, 2026
This is basically an acquihire. Peter really does seem to be a genius, and they'd better poach him before Anthropic does.
rlt•Feb 16, 2026
Is he? My impression of Clawdbot was it was a good idea but not particularly technically impressive or even well-written. I had all kinds of issues setting it up.
motoboi•Feb 16, 2026
It’s a wonderful idea. Vibe coded, but not his first rodeo.
Exited his first company for $110M, then spent some years on the whole huasca-and-forest thing, then started creating projects.
Clawdbot (later openclaw) was his 44th try.
jsemrau•Feb 16, 2026
What is interesting about OpenClaw is its architecture. It is like an ambient intelligence layer. Other approaches up until now have been VS Code- or Chromium-based integrations into the PC layer.
isodev•Feb 16, 2026
> 2. Any pretense for AI Safety concerns that had been coming from OpenAI really fall flat with this move.
And Peter created what is very similar to a giant scam/malware-as-a-service, then just left it without taking responsibility or bringing it to safety.
abalone•Feb 16, 2026
I think this comment misses that OpenAI hired the guy, not the project.
"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.
The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.
croes•Feb 16, 2026
So creating unsafe software is the new norm?
tacomagick•Feb 16, 2026
Yes pretty much. See the Windows 11 security vulnerability chaos going on.
isoprophlex•Feb 16, 2026
Always has been.
croes•Feb 16, 2026
Nah, it was normal but not the norm
revolvingthrow•Feb 16, 2026
I’d bet good money that for at least 2/3 of all software ever made, the decision makers couldn’t care less about security beyond "let’s get that checkbox to show we care in case we get sued". Higher velocity >> tech debt and bugginess unless you work at NASA or you’re writing software for a defibrillator, especially in the current "nothing matters more than next quarter’s results" climate.
joquarky•Feb 16, 2026
I have worked over two decades creating government software, and I can say that this is not new.
Security (and accessibility) are reluctant minimum effort check boxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.
matwood•Feb 16, 2026
> the new norm
More like the same as it always has been.
croes•Feb 16, 2026
Are you confusing normal with norm?
kristofferR•Feb 16, 2026
Not only that, his output is insane: he has more active projects than I bother to count and more than 70k commits last year. He's probably one of the best vibe-coding evangelists, if not the best.
It also probably didn't hurt that he favors Codex over Claude.
rsanheim•Feb 16, 2026
he favors Codex?
The original name of his ai assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking thru new user setup on a mac mini or VPS were assuming a claude code max account.
I know he uses many llms for his actual software dev.. - right tool for the job. But the origins of openclaw seem to me more rooted in claude code than codex.
Which does give the whole story an interesting angle when you consider the safety/alignment angle that Anthropic pledges to (publicly) and OpenAI pretty much ignores (publicly). Which is ironic, as configuring codex cli to 'full yolo mode' feels more burdensome and scary than in Claude Code. But I'm pretty sure that speaks more to eng/product decisions, and not CEO & biz strategy choices.
I've also seen later tweets of his that also confirms that codex is still his choice.
botusaurus•Feb 16, 2026
he says claude is the best model for a personal assistant, and that codex is the best model for coding
nosuchthing•Feb 16, 2026
It looks like most of Peter's projects are just simple API wrappers.
Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.
> Hi, my name is Peter and I’m a Claudoholic. I’m addicted to agentic engineering. And sometimes I just vibe-code. ... I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens. If I’d use API calls, that’d cost me around 10x more. Don’t nail me on this math, I used some token counting tools like ccusage and it’s all somewhat imprecise, but even if it’s just 5x it’s a damn good deal.
> ... Sometimes [GPT-5-Codex] refactors for half an hour and then panics and reverts everything, and you need to re-run and soothe it like a child to tell it that it has enough time. Sometimes it forgets that it can do bash commands and it requires some encouragement. Sometimes it replies in Russian or Korean. Sometimes the monster slips and sends raw thinking to bash.
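The 5-10x subscription-vs-API claim in that quote is just per-token arithmetic. A minimal sketch of the math — all token counts and per-million prices below are invented placeholders for illustration, not real provider pricing or Peter's actual usage:

```python
# Rough sketch of the subscription-vs-API comparison from the quote above.
# Every number here is a made-up placeholder, not real provider pricing.

def monthly_api_cost(tokens_in, tokens_out, price_in_per_m, price_out_per_m):
    """Estimate pay-per-token cost for a month of usage, given $/1M-token rates."""
    return tokens_in / 1e6 * price_in_per_m + tokens_out / 1e6 * price_out_per_m

subscription_cost = 1_000  # ~$1k/month across several flat-rate subs, per the quote

# Illustrative heavy-agent usage; the counts only show the shape of the math.
api_cost = monthly_api_cost(
    tokens_in=2_000_000_000,   # hypothetical 2B input tokens/month
    tokens_out=150_000_000,    # hypothetical 150M output tokens/month
    price_in_per_m=3.00,       # placeholder $/1M input tokens
    price_out_per_m=15.00,     # placeholder $/1M output tokens
)

print(f"Estimated API cost: ${api_cost:,.0f}/month")                      # $8,250/month
print(f"Multiple over subscriptions: {api_cost / subscription_cost:.1f}x")  # 8.2x
```

With these placeholder numbers the pay-per-token route comes out around 8x the flat-rate subscriptions, which is in the 5-10x ballpark the quote hedges toward; the real multiple depends entirely on actual usage and rates.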
ozgrakkurt•Feb 16, 2026
“”” Like it or not, so-called vibe coding is the new norm for productive software development”””
Alright
simbleau•Feb 16, 2026
I think they want the man and the ideas behind the most useful AI tool thus far. Surprisingly - and OpenAI may see this - it is a developer tool.
OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation.
OpenAI knows the market exists, but they need to educate the market. What they need is to turn OpenClaw into a project that my mother can use easily.
avaer•Feb 16, 2026
> The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
This is true, and also true for many other areas OpenAI won't touch.
The best get-rich-quick scheme today (arguably not even a scheme) is to test the waters with AI in an area OpenAI won't or can't touch for legal, ethical, or safety reasons.
I hate to agree with OpenAI's original "open" mission here, but if you don't do it, someone else somewhere will.
And as much as their commitment to safety is just lip service, they do have obligations as a big company with a lot of eyeballs on them to not do shady things. But you can do those shady things instead and if they work out ok, you will either have a moat or you will get bought out. If that's what you want.
neya•Feb 16, 2026
I am not a fan of OpenAI but they are not exactly hiring a security researcher. They are hiring an aspiring builder who has built something the masses love. They can always provide him the structure and support he needs to make his products secure. It's not mutually exclusive (safety vs hiring him).
empressplay•Feb 16, 2026
This tells you all you need to know about OpenAI, honestly.
nelsonfigueroa•Feb 16, 2026
> "What I want is to change the world".
I don't know if you'll achieve that at OpenAI or if it'll even be a good change for the world, but I genuinely wish you the best. Regardless of the news around OpenAI I still think it's great that a personal project got you a position at a company like that.
paodealho•Feb 16, 2026
Fine doublespeak there. It can mean anything when talking to the public, and anything else when talking to Sam Altman.
podgorniy•Feb 16, 2026
These words may mean anything, from "get people extinct" to "make a shit ton of money for myself".
What we know for sure is that he is not committed to the people who trusted him or his project. Consider the project dead. He kinda fits the OpenAI mindset: those people also say the right words, use the right terms, and do what benefits them personally.
sunkeeh•Feb 16, 2026
> This is NOT OpenAI buying OpenClaw,
> it's OpenAI hiring someone who can build it, similar to them betting on Jony Ive.
OpenAI did not "bet" on Jony Ive. They bought his company io Products.
coffeebeqn•Feb 16, 2026
Is that not a bet?
OJFord•Feb 16, 2026
It's certainly not a good way of explaining that this is a hire and not an acquisition!
willmeyers•Feb 16, 2026
Innocent people are going to get hurt. Not sure how yet, but, giving a company intimate details about your life never ends well.
ambicapter•Feb 16, 2026
> The claw is the law.
This isn't a Slay The Spire reference is it?
kurtoid•Feb 16, 2026
I think so
lumost•Feb 16, 2026
Personal agents disrupt OpenAI’s revenue plan. They had been planning to put ads in ChatGPT to make revenue. If users rapidly move to personal agents which are more resistant to ads, running on a blend of multiple models/compute providers - then they won’t be able to deliver their revenue promises.
btbuildem•Feb 16, 2026
> had been planning to put ads in ChatGPT
As per the new terms of service, the ads are already in
Traster•Feb 16, 2026
Firstly, OpenAI has lacked focus, so they're pursuing lots of different paths despite the obvious one (ads in ChatGPT), like hiring Jony Ive - a move that feels more WeWork than anything.
But secondly, personal agents can be great for OpenAI: if the user isn't even interacting with the AI and is just letting it go off autonomously, then you're basically handing your wallet to the AI, and if the model underlying that agent is OpenAI's, you're handing your wallet to them.
Imagine for a second that a load of stuff is now being done through personal agents, and suddenly OpenAI release an API where vendors can integrate directly with the OpenAI agent. If OpenAI control that API and control how people integrate with it, there's a potential there that OpenAI could become the AppStore for AI, capturing a slice of every penny spent through agents. There's massive upside to this possibility.
cantalopes•Feb 16, 2026
Isn't openai getting tanked because of its support of trump and ice?
jatora•Feb 16, 2026
Please exit your echo chamber
cantalopes•Feb 16, 2026
I was actually surprised to find a lot of people outside of my bubbles with this sentiment, from regular people on both ends of the political spectrum to companies switching their LLM provider (not US-based).
jatora•Feb 16, 2026
It's the same thing as bluesky. Plenty of people flocked. Not enough to actually make a difference. Nobody actually cares about these things. If you think OpenAI is more in bed with the current admin than any other AI company you should take a closer look
I_am_tiberius•Feb 16, 2026
I really hoped he would support Europe’s startup ecosystem. Hopefully, he will at least bring stronger privacy standards to OpenAI, such as a policy that prohibits reading or analyzing user prompts or AI responses.
dhruv3006•Feb 16, 2026
So if Openclaw is chromium then what will be chrome?
vldszn•Feb 16, 2026
Good exit for him imo
bakugo•Feb 16, 2026
This is easily the most successful tech grift I've ever seen.
Props to this guy for scamming Altman this hard without writing a single line of code, or really doing anything at all other than paying for a bunch of github stars and tweets/blogposts from fellow grifters.
jasonfj40•Feb 16, 2026
Fuck
fishingisfun•Feb 16, 2026
good for you. make that money
simianwords•Feb 16, 2026
I'm happy for him, but it seems like a fairly simple app? Why the need to poach?
wewewedxfgdf•Feb 16, 2026
OpenAI would have paid $400M or more for the latest AI hotness.
My guess is this guy has taken a job for maybe $1M, effectively handing over the crown jewels to Altman for nothing.
OpenAI must be laughing their heads off.
Beads and blankets.
I_am_tiberius•Feb 16, 2026
He's an experienced founder (PSPDFKit)!
shepherdjerred•Feb 16, 2026
Didn’t he make this in a month or two? That’s a great ROI
yellow_lead•Feb 16, 2026
What crown jewels? Isn't openclaw, errr "open" source?
wewewedxfgdf•Feb 16, 2026
Elastic (Elasticsearch) – ~6.5 B USD
MongoDB – ~30 B USD
Docker – Private (~2+ B USD last valuation)
Redis Ltd. – Private (~2 B USD last valuation)
Grafana Labs – Private (~6 B USD last valuation)
Confluent (Apache Kafka) – ~11 B USD
Cloudera (Apache Hadoop) – 5.3 B USD (acquired)
SUSE Linux – ~2.5 B USD
Red Hat – 34 B USD (acquired)
HashiCorp – 6.4 B USD (acquired)
catmanjan•Feb 16, 2026
Did claudebot have paying customers? My understanding with these companies is that you buy the market, the product can just be forked (like Amazon did)
sigmar•Feb 16, 2026
>effectively handing over the crown jewels
lol. what?
dangus•Feb 16, 2026
The tone of this blog post reads as incredibly snobby and self-congratulatory - pure main-character syndrome.
Please dispense with the “change the world” bullshit.
I understand that it’s healthy to celebrate your personal victories but in this context with this bro going to OpenAI to make 7 figures, maaaan I don’t think this guy needs our clicks.
On top of that there’s a better than 50% chance OpenAI suffocates the open source project and the alternative will be a paid privacy nightmare.
andxor•Feb 16, 2026
Salty. Celebrate people's success. It's good for your soul.
dangus•Feb 16, 2026
It’s hard to celebrate the success of people who convey toxic Silicon Valley stereotypes.
And I’m not going to celebrate the success of multimillionaires who are quitting their passion projects to join the evil empire to “change the world” by making the lives of the working class worse and transferring more wealth to the top.
Someone in OP’s position of success has the means to make the choice to not work with a Palantir collaborator, but they chose to go for it.
andxor•Feb 16, 2026
I don't subscribe to this ideology at all.
dangus•Feb 16, 2026
I believe in collectivism rather than hypercapitalism, and I think that refusing to celebrate hypercapitalist “wins” is the right ideology for me.
andxor•Feb 16, 2026
EDIT: oh well, you completely changed your message.
dangus•Feb 16, 2026
Yep! I’m not going to give you the satisfaction, considering you ignored the underlying message of what I said and acted like I was making it all about me in your original version of your response comment, when in reality I was obviously making point about corporate feudalism and the widening income inequality gap.
The fact that I make a declining share of peanuts compared to this AI bro selling his soul to serial liar Sam Altman isn’t “about me,” it’s about “me” as in “the working class.”
This is the core of why it’s distasteful for the most excessively privileged people and their enablers to celebrate their wins, and why I feel no obligation to celebrate alongside them nor keep my distaste for them to myself.
Regular people are beyond sick and tired of tech bros like OP trying to “change the world” by shipping our jobs to data centers and shoving depression apps down our childrens’ throats. Now they want us to celebrate with them as they get paid massive salaries and stock awards to design the robots that will finally replace the last bastions of human interaction and craftsmanship.
andxor•Feb 16, 2026
I apologize for the tone of my response earlier. Although I disagree with your point of view, you did spend time articulating it which I respect.
dangus•Feb 16, 2026
I will extend you an olive branch to say, yes, you are right, I am salty.
jbverschoor•Feb 16, 2026
Wow, I really thought he would go with meta
noelsusman•Feb 16, 2026
It likely won't matter much in the end, but I do think this could be a significant mistake for OpenAI.
OpenAI has two real competitors: Anthropic in the enterprise space and Google in the consumer space. Google fell far behind early on and ceded a lot of important market share to ChatGPT. They're catching up, but the runaway success of ChatGPT provides OpenAI with a huge runway among consumers.
In the enterprise space, OpenAI's partnership with Microsoft has been a gold mine. Every company on the planet has a deep relationship with Microsoft, so being able to say "hey just add this to your Microsoft plan" has been huge for OpenAI.
The thing about enterprise is the stakes are high. Every time OpenAI signals that they're not taking AI safety seriously, Anthropic pops another bottle of champagne. This is one of those moments.
Again, I doubt it matters much either way, but if OpenAI does end up blowing up, decisions like this will be in the large pile of reasons why.
ardme•Feb 16, 2026
This take is imo very contrarian. Is Anthropic really popping champagne? They kind of look like the bad guys in this entire saga - and if not the bad guys, then the enemy of fun and of open-source builders.
djmips•Feb 16, 2026
Who are you referring to by "They" it's not clear to me.
reg_dunlop•Feb 16, 2026
"popping champagne" is a figure of speech (perhaps hyperbole, or an idiom) meant to express not the literal act of "really popping champagne", but instead reaping the benefits of a seemingly poorly calculated business move by the other guys.
Claiming Dario is the bad guy in any context is a tough characterization to agree with if you've seen even a fraction of one interview with him.
To stay on point though: OpenAI hiring the OpenClaw creator does seem to lean away from serious enterprise benefit and towards a more consumer-based tack, which is a curious business move considering the original comment's perspective on OpenAI.
ardme•Feb 16, 2026
Yes, I understand what an idiom is, jfc. OpenAI was never an enterprise-focused company to start with, and besides, they can walk and chew gum at the same time. There's another idiom for you.
BonoboIO•Feb 16, 2026
Peter is already a multimillionaire — he had an exit a few years ago for around $100 million. By his own account, he's spending $10,000+ per month on LLM tokens and other development costs. As long as OpenClaw stays open source and it remains possible to use all providers, this is totally fine by me.
Honestly, Anthropic really dropped the ball here. They could have had such an easy integration and gained invaluable research data on how people actually want to use AI — testing workflows, real-world use cases, etc. Instead, OpenAI swoops in and gets all of that. Massive missed opportunity.
npn•Feb 16, 2026
OpenAI is speeding up the race to be the most evil company in the world. Impressive.
0xbadcafebee•Feb 16, 2026
Dude builds an Anthropic-themed vibe-coded app (calls himself an "Anthropoholic"), it becomes insanely popular and also happens to be completely insecure, Anthropic pressures him to change the project's name twice, he does, and finally OpenAI acquires the inventor.
RalfWausE•Feb 16, 2026
"AI" needs to be banned, datacenters destroyed and everyone who worked on this abominations shunned or jailed!
pdyc•Feb 16, 2026
I think Peter was mostly using Claude and, to a lesser extent, Codex, and Claude was getting free marketing. If he can just improve Codex to work better with OpenClaw, it will be a big win for OpenAI. If he can make an OpenAI agent on par with OpenClaw with added safety/security, that would be a big win too. It's a smart move by OpenAI and I totally get it.
ohmahjong•Feb 16, 2026
Peter was quite vocal on twitter about _only_ using Codex to develop OpenClaw, but Claude is what a majority of people were (are?) using to run the tool itself.
mmaunder•Feb 16, 2026
Good move. OpenClaw is alpha quality, very dangerous, super useful and super fun - which amplifies the danger. It’s a disaster waiting to happen and a massive risk for a solo dev to take on. So best to trade it for a killer job offer and transfer all that risk.
To get a sense of what this guy was going through listen to the first 30 mins of Lex’s recent interview with him. The cybersquatting and token/crypto bullshit he had to deal with is impressive.
bananaboy•Feb 16, 2026
He's not really trading anything though. He was hired by OpenAI. OpenClaw will remain free and open source (it's the first line of his blog post). He says that OpenAI will allow him to work on it and already sponsors it so maybe that means he'll have time to improve it, I guess.
esskay•Feb 16, 2026
Given he's moving to SF to work in their office I presume part of it is he'll be working in-house on their commercial replacement, and will continue to cover costs on the OSS version which he's free to work on. His recent posts make it clear they've got plans for their own stuff to replace it.
brandensilva•Feb 16, 2026
Right, he indicated he's losing $10-20k a month on OpenClaw. Might as well let OpenAI absorb those losses and join the rocket ship.
The guy is a multi millionaire from selling his old software so I'm doubtful it is about the money for him at this point as much as it is the experience working with this tech on another level.
It really is quite funny though, isn't it. Yes, it's a fucking hand grenade that will blow up at any moment. It's perfect for a one-man-band startup: there's massive upside, and at the end of the day if it blows up he's just back to square one - what did you expect from a one-man band?
But wait. Here he comes. Hero of the hour. Sam Altman.
Let's take that wildly dangerous, lightly thought through product, and give it the backing of the leading AI lab. Let's take all that pending liability and slap it straight onto the largest private company in AI.
alansaber•Feb 16, 2026
You definitely have a point there.
whazor•Feb 16, 2026
OpenClaw is more like an art project than a consumer product. It has shown clear consumer product demand. The next step is making it a safer consumer product.
jascha_eng•Feb 16, 2026
> OpenClaw is more like an art project than a consumer product
I think this is true for a lot of vibe coded applications. Never thought about calling it an art project but it hits home.
rolymath•Feb 16, 2026
> Listen to... Lex
People can do that? I always assumed Lex was a CIA psyop to experiment with the ability to make people sleep on demand.
beernet•Feb 16, 2026
While that might be taking it a little too far, Lex surely is a dangerous individual. On various occasions he has sympathized with the war and terror that Russia is inflicting on Ukraine. I do not click on any of his content because I will not support these (and a few other questionable, to say the least) views of his. Also, his image as an MIT researcher is hilarious.
spacechild1•Feb 16, 2026
Yes. How do so many people fall for this guy? I find him pretty creepy, to be honest.
lbrito•Feb 16, 2026
He's just another alt right podcast grifter.
nsvd2•Feb 16, 2026
I'm a big fan of his. I particularly enjoy his long technical interviews like the one he did with Peter.
kraf•Feb 16, 2026
I usually don't notice these things, but in the picture at the bottom it's almost exclusively white men.
tabs_or_spaces•Feb 16, 2026
I'm happy for the guy, but am I jealous as well? Well yes, and that's perfectly human.
We have someone who vibe coded software with major security vulnerabilities. This has been reported by many folks.
We also have someone who vibe coded without reading any of the code. This is self-admitted by this person.
We don't know how many of the GitHub stars were bought. We don't know how many Twitter followers/tweets were bought.
Then, after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any of the code that they've developed? Well, this is what happened here.
In this timeline, I'm not sure I find anything inspiring here. It's telling me that I should rather focus on going viral/getting lucky to get a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents. I shouldn't write secure software; I should write software that can go viral instead. Are companies hiring for virality or merit these days? What is even happening here?
So am I jealous? Yes, because this timeline makes no sense to a software engineer. But am I happy for the guy? Yeah, I also want to make lots of money someday.
spaceman_2020•Feb 16, 2026
Going by how insanely viral OpenClaw has been on X, I don’t think any of the stars were bought
bananaboy•Feb 16, 2026
There were some comments somewhere below about that virality being bought though. I don't know how true that is or where those commenters got their information. If you look at google trends though there is practically no mention of ClawdBot before around January 23, even though the project was released in November.
aixpert•Feb 16, 2026
He likely poured oil on the flames, investing a few hundred bucks to double the virality.
spaceman_2020•Feb 16, 2026
and according to HN, somehow that's a bad thing? Like making your own project more popular and successful using your own resources is...bad?
bogtap82•Feb 16, 2026
It was renamed many times. It was also called "clawdis" at one point, and prior to that "warelay," when it was simply a Whatsapp gateway for Claude Code. It was already gaining some momentum at that point but wouldn't reflect as search results for "Clawdbot," and especially wouldn't be visible on Google Trends when most of the conversation was on X/Github.
dieortin•Feb 16, 2026
Even if the conversation was on other sites, people would still search for it.
imiric•Feb 16, 2026
Fake engagement doesn't need to be bought anymore.
This person created a bot factory. It's safe to assume that most of the engagement is coming from his own creation. This includes tweets, GitHub stars, issues and PRs, and everything else. He made a social network for bots, FFS.
He contributed to the dead internet more than any single person ever. And is being celebrated for it. Wild times.
Gracana•Feb 16, 2026
> He made a social network for bots, FFS.
Matt Schlicht made Moltbook, not Peter Steinberger.
imiric•Feb 16, 2026
Well, you got me. That changes everything.
pjmlp•Feb 16, 2026
I bet they did not invert a binary tree on the whiteboard, nor answered how many golf balls fit into a plane.
closewith•Feb 16, 2026
The lesson here is to make something people want. All else is forgiven if the product is something people really want - the product-market fit most of us never achieve.
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
A vibe coder being hired by the provider of the vibe coding tools feels like marketing to sell the idea that we should all try this because we could be the next lucky ones.
IMHO, it'd be more legitimate if a company that could sustain itself without frequent cash injections hired them because they found value in their vibe skills.
sph•Feb 16, 2026
pets.com moment
jeffrallen•Feb 16, 2026
Nah, I'm getting more of a webvan vibe...
SJC_Hacker•Feb 16, 2026
The irony about Webvan, it was a good idea, but too early.
Kinda like the Apple Newton
Symbiote•Feb 16, 2026
Online grocery delivery was successful in the UK in the 1990s — Tesco started online ordering in the same year (1996) as Webvan, but could use their existing supermarkets as warehouses so avoided one of Webvan's main problems.
My parents used it occasionally, and I remember them/us demonstrating it to other parents. The software was supplied on a CD-ROM, and it connected to the internet only to download the stock list and place the order.
jrowen•Feb 16, 2026
Someone that makes vibe coding tools would presumably want to have vibe coders on staff? If you're just not into the whole enterprise that's one thing but I'm not understanding what's fishy about that.
rippeltippel•Feb 16, 2026
He didn't specify the role he was hired for; code is just a means to an end. Perhaps OpenAI wanted him for his vision (I like to think so) or just to make up for the public support they're losing (I hope not). In either case, it may not be an engineering role.
qingcharles•Feb 16, 2026
They're buying him for his ideas, not for his ability to code. And if his stars are bought, then they're buying him also for his black hat marketing, I guess...
AlexCoventry•Feb 16, 2026
He didn't even have to be the one buying them. Lots of people benefit from a tool like OpenClaw getting popular.
westonplatter0•Feb 16, 2026
He also spent 13 years building [an] OCR document engine company (PSPDFKit) before becoming an "overnight" vibe coder success story.
indemnity•Feb 16, 2026
His PDF toolkit was pretty solid and high quality if you were in the iOS space.
He’s not just a “vibe coder”.
u02sgb•Feb 16, 2026
There's an excellent Changelog podcast interviewing him which talks about his early career as well.
erikbye•Feb 16, 2026
It's funny to me how many still don't realize that you don't get hired for the best positions by being a 10x programmer who excels at HackerRank; you get hired for your proven ability to deliver useful products. Creativity, drive, vision, whatever. Code is a means to an end. If you're the type of programmer who thinks of yourself as just a programmer, and takes pride in your secure code and your ability to optimize functions and algorithms, you're exactly the kind of programmer AI will replace.
Quality of code has never had anything to do with which products are successful. I bet both YouTube's and Facebook's codebases are a tangled mess.
johnebgd•Feb 16, 2026
I’ve met many more $5M/year “SaaS” entrepreneurs who built a Wordpress plugin than a custom SaaS platform. Your point is well made.
erikbye•Feb 16, 2026
Right. Shopify apps, too, are a gold mine.
LMYahooTFY•Feb 16, 2026
This is exactly right.
The goal is delivering a useful product to someone, which just requires secure enough, optimized enough, efficient enough code.
Some see the security, optimization, or efficiency of the code itself as the goal. They'll be replaced.
dodomodo•Feb 16, 2026
As long as AI can't make the code optimized and secure by itself, and these days it still can't, those people won't be replaced. And when they do get replaced, there is no guarantee that the more "entrepreneur" population won't get replaced as well.
simpleusername•Feb 16, 2026
Except it wasn't and still isn't secure enough.
mdavid626•Feb 16, 2026
The opposite is not true though: successful products might have messy codebases, but that doesn't mean that messy codebases lead to successful products, or that quality doesn't matter.
onion2k•Feb 16, 2026
There's a balance to strike, and it's hard to get right. You have to give up enough quality that you actually deliver things to users rather than polishing 'the perfect code', but you also have to keep quality high enough that you're not so slowed down by spaghetti code and tech debt that you can't deliver quickly.
This is made more complicated by the fact that where the balance lies depends on the people working on the code - some developers can cope with working in a much more of a mess than others. There is no objective 'right way' when you're starting out.
If you have weeks of runway left spending it refactoring code or writing tests is definitely a waste of time, but if you raise your next round or land some new paying customers you'll immediately feel that you made the wrong choices and sacrificed quality where you shouldn't have. This is just a fact of life that everyone has to live with.
democracy•Feb 16, 2026
I like your optimism, but no: you are still hired via "excels at HackerRank". Every big tech company's first interview is exactly this, no matter how many projects you delivered or how useful you are/were at your previous job.
lmpdev•Feb 16, 2026
This seems to be largely an American phenomenon
In more minor markets like Europe/Australia it seems to be a lot less leetcode and a lot more (1) experience (2) degree (3) actual interview performance
sjzhzhz•Feb 16, 2026
This is more so because the US companies have been flooded with East / South Asian workers. The proliferation roughly tracks with a decrease in white (European) American representation in tech companies. US companies used to be much more like you described.
democracy•Feb 16, 2026
Atlassian? Canva? Absolutely the same process in Australia. Smaller shops/contractors - sure.
networkcat•Feb 16, 2026
Yes, Facebook's early PHP code looks pretty bad by today's standards
I think you are really just describing an outlier. Most people really do get hired for the first thing. This is a situation where someone went viral and got a job; I don't think that's the rule. The thing about "proven ability to deliver ..." is just the kind of cope recruiters tell themselves and other people. It's nice, but it's not how things cash out in the real world.
Flere-Imsaho•Feb 16, 2026
I was nodding my head agreeing with you but then remembered John Carmack, who seems to deliver both... He takes great pride in writing ground breaking code, for industry defining products.
We should all try and be more like John Carmack.
js8•Feb 16, 2026
I admire the guy, but he spends like 12 hours a day doing just that, and his code is full of tricks; it's debatable as a paragon of quality. I don't think being Carmack is for everyone, nor should it be; diversity is important.
bugthe0ry•Feb 16, 2026
The man is on a different level, cognitively speaking. That's like asking sprinters to "just be more like Usain Bolt". Some people are just built different. Carmack is one of them.
libertine•Feb 16, 2026
Another detail is that his groundbreaking code was a great part of what made some of the products - I'm thinking of Doom.
It wasn't just quality and best practices for their own sake; it defined and had an impact on the product experience.
Doom probably wouldn't have been as successful if it had been any other way.
ljm•Feb 16, 2026
I argue we shouldn't, because if everyone is like Carmack then no one is.
And only people on the older end of the spectrum have seen Carmack working in his element back in the day.
The things I want people to take from a guy like John Carmack, or Jon Blow, or Lukas Pope, or Ron Gilbert, or Tim Schafer, or Warren Spector, or Sam Lake, or David Cage god forbid...is pure curiosity and pushing the boundaries to make that real.
In every case there is a mix of a deep and unusual urge to make an idea happen with an affinity towards the technicality of it.
I bring Sam Lake into this because nobody has blended FMV with gameplay the way Remedy have and pushed the boundary on it.
weinzierl•Feb 16, 2026
And yet most companies don’t hire primarily for vision and creativity. They need far more people who can execute someone else’s vision reliably. You can win neither the battle nor the war with only generals.
Visionaries are important, but they’re a small part of what makes a successful organization. The majority hinges on disciplined engineers who understand the plan, work within the architecture, and ship what’s needed.
As Victor Wooten once said: "If you’re in the rhythm section, your job is to make other people sound better." That’s what most engineering positions actually are and there’s real skill and value in doing that well.
wasmainiac•Feb 16, 2026
> Quality of code has never had anything to do with which products are successful
This is just wrong. Plenty of examples of crap code causing major economic losses.
brohee•Feb 16, 2026
Exactly, quality of code is one of those necessary but not sufficient things... If you are somehow successful without it (e.g. early Twitter maxing out Rails performance) you end up either crashing and burning or spending crazy amounts on infrastructure/rewrites (and often both).
yobbo•Feb 16, 2026
He's not hired to code. He has taste for "what works" in these types of things. They want him to apply that taste - maybe making new services or fixing old.
pwython•Feb 16, 2026
See: Rick Rubin.
"Rick Rubin says he barely plays any instruments and has no technical ability. He just knows what he likes and dislikes and is decisive about it."
I mean, you're right but at the same time you're talking about something completely different. Software with security vulnerabilities is not a useful product. You don't address the raised issues.
latexr•Feb 16, 2026
> you get hired for your proven ability to (…)
No, you get hired for your perceived ability to (…)
The world is full of Juliuses, which is a big reason everything sucks.
In a couple of decades of work, I have never actually met anyone like Julius. Typically, I have found that those who excel at listening and presenting are also capable of understanding the technology at an appropriate level for their role -- it's not like this stuff is truly complicated, after all.
I have met quite a few people who are more focussed on the business than the technology, but those people tend to end up in jobs where the main problems aren't actually technical. Which, let's be honest, is the case in very many tech jobs.
rustystump•Feb 16, 2026
I have met armies of Juliuses at all levels. I'd say 80% of people are Juliuses, and if you don't think so then I have some news for you.
It is always like this. Your ability to socialize will bring you further than any other skillset. The Kennedys, for example, manufactured their status by socializing. Industry is no different.
direwolf20•Feb 16, 2026
80% of people you meet are communicating to your customers that the server doesn't have an IP address for security reasons?
moralestapia•Feb 16, 2026
Julius is a metaphor for a specific type of person who is ignorant and useless but has mastered the way to appear otherwise.
If you think this was about IP addresses, well ...
Aeolun•Feb 16, 2026
80% of the people are saying that this is highly complex software. We should not expect to serve more than 4 requests per second without a full kubernetes cluster backed by 27 pods, a cloud spanner database, and 200k lines of code.
I present, our contact form.
ptero•Feb 16, 2026
Humans are social animals and good social skills are a major benefit almost everywhere, including at work. This does not make most people Juliuses.
rustystump•Feb 16, 2026
Hi Julius! I kid, and I don't speak as if I'm not a Julius myself. Most people are in it for the money, for a house, car, family, etc.; they don't care about the job except as a means to an end. That is Julius, but he took it further.
JackFr•Feb 16, 2026
> The Kennedys for example manufactured their status by socializing.
And generational wealth and serious political power.
Read some of the father’s quotes. He literally sent his kids out to marry into richer and more powerful families.
closewith•Feb 16, 2026
> Id say 80% of people are julius and if u dont think so then i have some news for you.
> Industry is no different.
Based on these comments, maybe some self-reflection is in order, as it seems from the 80% comment that what you mean is that 80% of people are able to adequately communicate.
rustystump•Feb 16, 2026
Have I been the Julius this whole time? What a twist.
nuancebydefault•Feb 16, 2026
That number feels off by a lot to me. I think I can say I'm quite good at socializing, quite above average compared to the people I meet and work with. I'd rate my engineering skills about average, and I have a firm dislike of fraud and of people acting better/smarter/faster than they really are. In my career I've come across managers of the Julius type, as well as the narcissistic type, even a sociopath. I would estimate 10 to 20 percent of people are of the Julius type.
rustystump•Feb 16, 2026
It was a subtle reference to the 80/20 rule, in that most people likely oscillate between the Julius and the useful. Some of that 80% are full-time Juliuses, for sure.
tracerbulletx•Feb 16, 2026
And yet this thread is completely full of Frank Grimes.
shabatar•Feb 16, 2026
The question is: how does one become a Julius?
wolfi1•Feb 16, 2026
Oh man, I have met several Juliuses. One of them was my boss until he made an error similar to the one the original Julius made, but unfortunately too late; I had to leave the company earlier because he made my life hell. Now he is at another company, and as long as he is there I won't apply; if they hire him, they have no place for me.
coldtea•Feb 16, 2026
No end of Juliuses. And they're not even the worst type you can meet at a software company.
rozap•Feb 16, 2026
There are so many. I think if you haven't met a Julius, chances are you are Julius..
elbear•Feb 16, 2026
I wonder, how does a Julius perceive another Julius, as another competent worker? What about a non-Julius then?
alpineman•Feb 16, 2026
Everything is perception though. You are looking at this with your own perception, biases, and heuristics just like everyone else. There is no 'right' way to hire.
skeledrew•Feb 16, 2026
> perceived ability
In this case at least it's definitely more than that. Ever since LLMs became a thing, there has been a constant search for their "killer app". Given the steep rise in popularity, regardless of the problems, that is now OpenClaw. As they say, the proof is in the pudding; this guy has created something highly desired by many.
PurpleRamen•Feb 16, 2026
Yet people are still asking about the usability of OpenClaw outside of marketing. It's a bit unclear how much of a "killer app" it really is, and how much is just burning money for the lulz and bot RP. I personally also got the impression many people had their first AI-gateway experience with OpenClaw, and don't understand that those abilities have been around for a while now, but live in the expensive LLMs which OpenClaw is using, not in OpenClaw itself. I've seen people thinking that OpenClaw is actually the AI.
sho_hn•Feb 16, 2026
Doesn't really matter. As always it's integration that makes a product.
Talking to bots on Telegram isn't new.
Running agentic loops isn't new.
Giving AI credentials and having it interface with APIs isn't new.
Triggering AI jobs from external event queues isn't new.
Parking state between AI jobs in temp files isn't new.
Putting it all together in one product and marketing it to the right audience? New.
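To make the composition concrete, here's a purely illustrative sketch; none of these names (`handle_message`, `call_llm`, `STATE_FILE`) come from OpenClaw itself, it's just the pieces above wired together:

```python
# Illustrative only: a chat gateway hands messages to an agentic step,
# with scratch state parked in a temp file between jobs.
import json
import tempfile
from pathlib import Path

STATE_FILE = Path(tempfile.mkdtemp()) / "agent_state.json"  # "parking state in temp files"

def call_llm(prompt: str) -> str:
    # stand-in for a real model API call over HTTP
    return f"echo: {prompt}"

def handle_message(text: str) -> str:
    # "talking to bots": an incoming message arrives from the gateway
    state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {"history": []}
    state["history"].append(text)               # accumulate context across jobs
    reply = call_llm(text)                      # one step of the agentic loop
    STATE_FILE.write_text(json.dumps(state))    # persist for the next trigger
    return reply                                # sent back through the same gateway

print(handle_message("hello"))
```

Each individual piece is trivial; the product is the packaging.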
PurpleRamen•Feb 16, 2026
But novelty doesn't make a killer app. When, outside of marketing and the gateway experience, there are still that many open questions, then maybe it's valid to call it perception instead of substance.
At the end, only time will tell how much there really is to this.
coldtea•Feb 16, 2026
>But novelty doesn't make a killer app.
It often does, if killer app means popular app.
PurpleRamen•Feb 16, 2026
There is a difference between popular and (in)famous. OpenClaw is famous, and has popularity at the moment, but is it sustainable? Will OpenClaw (or some kind of successor) still have relevant usage (outside of fan circles) next year? Or in 5 years?
And I'm not talking about just any kind of assistant, because those have existed for decades now, with various degrees of competence and in all kinds of flavours.
lossyalgo•Feb 16, 2026
> Will OpenClaw (or some kind of successor) still have a relevant usage (outside of fan circles) next year?
I have a feeling OpenClaw et al. will only still exist if somehow all of the gaping security holes can be closed and, through some sort of magic, less than 5% of users get hacked within the next year. But I'm not sure it's even possible to close those holes, since the entire point and usefulness of such tools is to give them root access and set them completely free.
skeledrew•Feb 16, 2026
> don't understand that those abilities have been around for a while now
Hugely underrated comment. That's pretty much the entire point here. Many people didn't know something with these capabilities was already possible. Or some - like me - knew of the potential, but couldn't be bothered/didn't have the time to put the bits together in a satisfactory flow (I'm currently exploring and building on nanobot[0], which is directly inspired by OpenClaw; didn't touch OC because it's in JS and I'm a Python person). Everything came together really well, which is why it's a "killer app". And now that the dam has burst, there will be customized takes on the concept all over the place (I'm also aware of a Rust "port", Moltis[1]), taking the idea to the next level.
People weren't underestimating it, and it's not that they "couldn't be bothered". They either understood the gaping security/safety holes it creates, or were guarding against their own stupidity.
raverbashing•Feb 16, 2026
Talk about going all the way to write the story and watching the point sail by.
Your boss liked Julius. People liked Julius.
You're not going to convince people they have to pay more attention to the technical guy who can't string a thought together and answers in a grumpy mood.
Be more like Julius and you might get more of his laurels.
Balinares•Feb 16, 2026
Nah. Avoid companies that can't see through the Juliuses. Because there will be other disastrous consequences to their bad decision making processes.
raverbashing•Feb 16, 2026
> Avoid companies that can't see through the Juliuses.
Good luck with that
eager_learner•Feb 16, 2026
to latexr: Thank you for the link to Polum's essay on juliusosis. It really is the case that a lot of incompetence is hiding in plain sight, probably because modern schooling encourages this.
I've lived in China (as a foreigner) and they have a word for Juliuses. They call them 'cha bu duo xiansheng': 'Mr. Almost OK'.
Balinares•Feb 16, 2026
Oh, Julius. Haven't we all met a Julius.
Story! Long ago, very long ago, I was working at a tiny Web company. Not very technical, though the designers were solid and the ops competent.
We once ended up hosting a site that came under a bit of national attention during an event that this site had news about. The link started circulating broadly, the URL mentioned on TV, and the site immediately buckled under the load.
The national visibility of the outage as well as the opportunity cost for the customer were pretty bad. Picture a bunch of devs, ops, sales and customer wrangling people, anxiously packed around the keyboard of the one terminal we managed to get logged into the server.
That, and Julius, the recently hired replacement CTO.
Julius, I still suspect, was selected by the previous CTO, who was not delighted about his circumstances, as something of a revenge. Early on, Julius scavenged the design docs I was trying to put together at the time to get the teams out of constant firefighting mode, and then started misquoting them, mispronouncing the technical terms. He did so confidently and engagingly. The salespeople liked him, at first.
The shine was starting to come off by the time that site went down. In a company that's too small for teams to pick up the slack from a Julius forever, that'll happen eventually.
So here we were, with one terminal precariously logged into the barely responding server, and a lot of national eyes on us. This was the early days of the Web. Something like Cloudflare would not exist for years.
So it fell on me. My idea was that we needed to replace the page at the widely circulated URL with a static version, and do so very, very fast. I figured that our Web servers were usually configured to serve index.html first if present, with dynamic rendering only occurring if not. So I ended up just using wget on localhost to save whatever was being dynamically generated as index.html, and let the server just serve that for the time being.
This was not perfect and the bits that required dynamic behavior were stuck frozen, but that was an acceptable trade-off. And the site instantly came back up, to the relief of everyone present.
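(For the curious, the whole "optimizer" is a couple of lines. What follows is a reconstruction, not the original commands; `render_page` and the docroot are stand-ins, and the real thing was literally wget against localhost writing out index.html.)

```python
# Reconstruction of the trick: freeze the dynamically rendered page into
# index.html so the server's index-file preference serves the static copy.
import pathlib
import tempfile

def render_page() -> str:
    # stand-in for the expensive dynamic rendering (DB queries, templates, ...)
    return "<html><body>breaking news</body></html>"

docroot = pathlib.Path(tempfile.mkdtemp())          # stand-in for the real docroot

# The one-time snapshot (in the story: `wget http://localhost/ -O index.html`)
(docroot / "index.html").write_text(render_page())

# Every later request is served straight from disk: the dynamic bits are
# frozen, but the site stays up under the load spike.
snapshot = (docroot / "index.html").read_text()
print(snapshot)
```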
A few weeks later, the sales folks, plus Julius, went to pitch our services to a new customer prospect. I bumped into one of them at the coffee machine right afterwards. His face said it all. It had not gone well.
Our eyes met.
And he said, with all the tiredness in the world: "He tried to sell them the 'wget optimizer'..."
crashprone•Feb 16, 2026
This story made my day, thanks!
5Qn8mNbc2FNCiVV•Feb 16, 2026
I mean, maybe he was a revolutionary. One could describe what Vercel is selling as some kind of "wget optimizer" as well
ddevnyc•Feb 16, 2026
I've met countless Juliuses over the years. I kept track of the companies, and the Juliuses. My biggest revelation is that every company that was led in some substantial capacity by a Julius (either at C level, VP, or high up in management) ended up one of two ways:
1. Shut down or shutting down (e.g. team reduced by > 50% since I've been there)
2. Julius removed, endlessly seeking work, keeps getting fired, and can't find a place to call home
The meteoric rise of a Julius is an exception - sooner or later their lucky streak ends and they face the cliff of adversity towering above them, with no way to climb it and no skills to help them actually do so.
fsniper•Feb 16, 2026
I hadn't seen that before, and it was really hard to get to the end. Not because it's badly written; on the contrary, it's a very good piece. But the feeling is unfathomable. I hate Juliuses. Even more, I hate the managers blinded by Juliuses.
PlatoIsADisease•Feb 16, 2026
Great article until the end when they talked about AI.
Xmd5a•Feb 16, 2026
> For those who aren't familiar with computing ["Pour celleux qui ne connaissent pas l’informatique"]
How about none of the above, but hired because of wanting OpenClaw?
bjt•Feb 16, 2026
There's OpenClaw the codebase, and there's OpenClaw the community. They could build the same program very easily (as evidenced by the number of clones out there already). That part's not worth paying much for. But redirecting the whole enthusiast community around it? That's worth a lot.
3form•Feb 16, 2026
Exactly. My point is that it might not be about the guy.
dannyw•Feb 16, 2026
Your comment and the article expanded my world view a little bit. Thank you.
chungus•Feb 16, 2026
My imposter syndrome is essentially fear of being julius.
Rooster61•Feb 16, 2026
90% of software engineers have a fear of being Julius
quietbritishjim•Feb 16, 2026
I think you're right but you've been a bit pedantic about the parent comment. They sloppily said that delivering business value gets you hired, when in reality the appearance of that may do. But I think we all understood their main thrust was to disagree with the comment before them about coding ability, and the point is that this doesn't always correlate with business value.
I did enjoy your link though.
nickzelei•Feb 16, 2026
Wow, that blog post really gave me pause and has stuck in my head for the last hour or so.
cekanoni•Feb 16, 2026
This is, if not the best, then one of the best articles I have read recently. Julius ...
LoganDark•Feb 16, 2026
Julius sounds like a sociopath. Sociopaths have no empathy/morals, so they can confidently lie all day and still be perfectly fulfilled; and some of them can be very excellent at social manipulation. This level of confidence in all things, including complete bullshitting, and constantly climbing the corporate ladder for huge payoffs, is not too uncommon among them.
IMO, all you can really do around one is try to focus on yourself. Or get away as fast as you can, depending on the situation.
mrugge•Feb 16, 2026
The world is full of Juliuses. And if one works with enough people one can suddenly realize that they too are a Julius relative to someone smarter and more introverted. Worth considering this before dismissing someone as yet another Julius. Oh and everything doesn't suck.
pnt12•Feb 16, 2026
Fully disagree.
There's lots of people that won't care about the code: executives, managers, customers etc. If the engineers don't care either, then who cares?
If we compare with big food companies, code is like their food formula. No one thinks it's useless - it's the source code for the product they sell. Yet nowadays we get so many engineers distancing themselves from the code, as if the software formula doesn't matter.
There are diminishing returns, but overall good code goes hand in hand with good products, it's just a different side of it.
swat535•Feb 16, 2026
Based on the interview format these days, I beg to differ.
If this were true, we wouldn't be studying Leet code and inverting binary trees to get a job.
I guess the lesson here is that unless you have a direct line with upper management to skip the line, you'll be stuck grinding algorithms for the rest of your life.
pphysch•Feb 16, 2026
Leet code interviews are in the spirit of filtering out charlatans who misrepresent having even basic programming fundamentals. Many interviewers take it too far, but the original motivation is essential to saving time in the hiring process. I was instantly converted after participating in the full hiring process for a junior dev, which didn't properly filter for programming skill.
Big companies may have separate hiring SWE departments where the initial interviewers don't even know what team or role you may land in, so they have to resort to something...
ho_schi•Feb 16, 2026
I’m rather sure *Airbus* will prefer a programmer who reads and writes reliable code.
The programmer who delivers useful products is probably hired by Microsoft? Or worse: Boeing. Or Toyota. Some NTSB people, or Michael Barr, are happy to tell you details about the number of dead people they created.
Restart braking to brake because our code failed.
Or.
One single sensor delivers wrong data. Let us put the trim down. DOWN! DOWN!
After that they blame the user. Except it wasn’t pilot error, because they didn’t train the pilots to immediately turn off MCAS. And it wasn’t driver error, because they didn’t train drivers to lift their foot and start braking again.
But I’m only programming a text viewer.
Which is used in a power plant to read the emergency manual, after an earthquake. You are responsible.
rafaelmn•Feb 16, 2026
Airbus pays like shit probably. Just going off the stuff I've read about Boeing.
Ah, you were thinking of the German branch, I was talking about the French one, in Toulouse (I have a few friends working there).
There, a team lead is doing ~$4000 net per month. So not poverty, but not great either.
class3shock•Feb 16, 2026
I'm not sure of the situation for software engineers, but aerospace and mechanical engineers working in aerospace in Europe are paid something on the order of 50% less than their US counterparts. I always assumed it's just a supply-demand problem, but I haven't run the numbers.
fabrice_d•Feb 16, 2026
Did you factor the higher cost of living (including housing, healthcare, retirement, etc.) in the US into your 50%?
pembrook•Feb 16, 2026
We're not talking about Airbus or centuries old commodified industrial companies. Airbus sells airplanes, not AI software tools.
But if you did build a core innovation in aerospace that went viral I'm sure Airbus would be interested in hiring you.
The salary would be 3K per month. And lunch coupons to buy a ham baguette.
ghoblin•Feb 16, 2026
There are only so many safety-first companies and products. The vast majority of the economy isn't optimizing for safety
swiftcoder•Feb 16, 2026
> There are only so many safety-first companies and products
There are only so many companies that think of themselves as safety-first. In practice, basically all companies work on things that should be safety-first.
Does your software store user data? Congrats, you are now on the hook for GDPR and a bunch of similar data handling regulations.
Does your software include a messaging component? You are now responsible for moderating abusive actors in your chat.
Does your software allow users to upload images? Now you are a potential distribution vector for CSAM.
And so on... safety isn't just for things which can cause immediate death and dismemberment
ghoblin•Feb 16, 2026
There’s a difference between "safety matters" and “safety is the primary constraint".
Most companies manage risk to an acceptable level while optimizing for speed and cost. Aerospace companies optimize for minimizing catastrophic failure, even at extreme expense.
Treating a potential GDPR fine as equivalent to a flight-control failure ignores that society, regulators, and markets treat those risks very differently.
The inconvenience and economic cost of your Discord messages leaking is not the same category of harm as your pacemaker controller failing.
And because the majority of economic activity sits in that lower-criticality category, it would not be surprising if highly specialized, safety-critical human software engineering becomes more of a niche, while much of routine software development becomes increasingly automated or commoditized.
swiftcoder•Feb 16, 2026
> Treating a potential GDPR fine as equivalent to a flight-control failure ignores that society, regulators, and markets treat those risks very differently
Agreed, though I think that if GDPR fines were actually being levied at the recommended 4% of global revenue, we'd start treating them more similarly to a 737 crash.
> The inconvenience and economic cost of your Discord messages leaking is not the same category of harm as your pacemaker controller failing
Sort of depends who they leak to. Your teen classmates who bully you to suicide? Your abusive ex who is trying to track you down to kill you? The 3-letter agency who is trying to rendition your family to an internment camp?
There are a lot of seemingly benign failure modes that become extremely lethal given the right circumstances. And because we acknowledge the potential lethality of something like a pacemaker failure, we have massive infrastructure dedicated to their mitigation (EMT teams, emergency external pacemakers, surgical teams who can rapidly place new leads, etc). For things society judges less important, mitigations are often few and far between
eecc•Feb 16, 2026
OT: it's not the first time I see this grammatical mistake: "didn’t trained". Is it some accepted regional variation?
illichosky•Feb 16, 2026
I think he is a non-native speaker, like me. I also make this mistake very often, and 'didn't train' is a bit counter-intuitive - at least for me.
cookiengineer•Feb 16, 2026
I think that happens when, as a German, you're used to using the Plusquamperfekt, a somewhat unique tense that's allowed to be used in all past contexts.
It allows you to avoid defining the point in time, and also the frame of the timespan's points in time.
Some languages allow using that type of tense, and it's somewhat of a language gap, I suppose. I have no idea what other languages or proto-languages allow that tense, but I've seen some Slavic and maybe Finnish(?) natives use it in English, too.
Maybe someone more versed in these matters has better examples?
Thank you! I assume “didn’t train” is correct. Probably my favorite mistake! I like it when people point out mistakes, give me corrections, and explain why. The reason is crucial.
Maybe “hadn’t trained” is even better. Makes sense when ordering times. But I don’t trust LLMs an inch. They make up options for git[1], and both GCC and Clang often immediately tell me that the LLM is lying.
cookiengineer and illichosky are right.
[1] Considering that man pages exist, it shows how useless their harmful crawlers are.
class3shock•Feb 16, 2026
For Airbus, Boeing, and others, the cost of failure is disproportionately high. Just look at how you view Boeing even though 99.99...% of their software and hardware works flawlessly. They will be known for the 737 MAX failure for decades.
When OpenAI tells someone that suicide isn't that bad, or that some BS supplement could be the best thing to treat their cancer, or does anything else with a negative outcome, the consequences are basically zero. That's even though failures like that probably kill a lot more people per year than Boeing's did.
These companies seem to know this, and with so little responsibility placed on them, they act accordingly.
mustyoshi•Feb 16, 2026
But realistically, I just had 2 flights last month; checking what model of aircraft I was on didn't even cross my mind.
I survived both flights btw.
Fervicus•Feb 16, 2026
> If you're the type of programmer who thinks of yourself as just a programmer, and take pride in your secure code, ability to optimize functions and algorithms, you're exactly the kind of programmer AI will replace.
Hard disagree. I foresee the opposite being true. I think the ability to understand and write secure, well optimized, performant code will become more and more niche and highly desired in order to fix the mess the vibe coders are going to leave behind.
Urahandystar•Feb 16, 2026
And the cheapest way to distribute that to everyone will be via AI coding.
Fervicus•Feb 16, 2026
If AI becomes good at doing that and fixing bugs, then sure. But there is no evidence pointing towards that as of now. Mostly only slop.
spogbiper•Feb 16, 2026
this is such a weird take to me. every piece of evidence I've seen shows that AI is quickly becoming better at writing code, debugging, and finding security issues. my own experience, benchmarks, studies, news articles... everything points to progress
bilekas•Feb 16, 2026
> Quality of code has never had anything to do with which products are successful. I bet both youtube and facebook's codebase is a tangled mess.
This is such a bad take and flat out wrong. Your ability to deliver and maintain features is directly impacted by the quality of the code. You can ship a new slop project every day if you like, but in order for it to scale or handle real traffic and usage you need a good foundation. This is such a bad approach to software engineering.
abm53•Feb 16, 2026
> If you're the type of programmer who thinks of yourself as just a programmer, and take pride in your secure code, ability to optimize functions and algorithms, you're exactly the kind of programmer AI will replace.
The most successful engineers are the ones who can accurately assess the trade-offs regarding those things. The things you list still may be critical for many applications and worth obsessing over.
The question becomes can we still achieve the same trade-offs without writing code by hand in those cases.
That’s an open question.
Balinares•Feb 16, 2026
I literally got my current cushy gig to fix a codebase that was crumbling under its own unmaintainable weight at a company that, like you, thought that quality doesn't matter. This is not the first time in my career I get a great job that way.
"Quality doesn't matter" people are why I'm not worried about employment. While there is value in getting features out fast, definitely, there always comes a point on your scaling journey where you have to evolve the stack structure for the purpose of getting those features out fast sustainably. That's where the quality of the engineering makes a difference.
(Anecdotally, the YouTube codebase may be locally messy, but its overall architecture is beautiful. You cannot have a system that uploads, processes, encodes, stores, and indexes massive amounts of videos every hour of every day that in the overwhelming majority of cases will be watched less than 10 times, and still make a profit, without some brilliant engineering coming in somewhere.)
RamblingCTO•Feb 16, 2026
Both can be true: people who deliver products based on vision are very much needed, and so are cracked devs who excel in technical details. Peter and you are in these respective groups, then.
rjsw•Feb 16, 2026
You still need a few people high enough up in a company who think that quality does matter to be able to get the job to fix things.
Balinares•Feb 16, 2026
That will happen, in the lucky cases, when someone high enough up with basic reasoning skills looks at support costs and time spent fixing bugs versus feature release velocity and sales income.
Aperocky•Feb 16, 2026
This is where the debate has another axis - when.
Quality matters, delivery speed matters, shipping also matters; where and when they matter is much harder to get right. But it's also self-correcting - if you get it wrong, the project or business dies - you can only get it wrong by so much or for so long.
Discussing only one axis is presumably why GNU Hurd has never shipped, or how claude-c-compiler doesn't compile hello world.
bleudeballe•Feb 16, 2026
The YouTube mobile app is a nightmare to use, and has been for months (desktop works quite well, but I'm using my phone 95% of the time). Reopening a short shows me a few frames of the next video before freezing, shorts die on second play constantly, history crashes because of shorts, switching to videos brings them back, but navigating to shorts crashes again.
This has been reliably going on for at least 6+ months. I thought shorts were a big priority for them, but the UX is and remains horrible.
skywalqer•Feb 16, 2026
You also believe that AI will replace mathematicians?
m000•Feb 16, 2026
> you get hired for your proven ability to deliver useful products
Or, in this case, just because they need a poster boy for their product, which isn't as good as they say it is.
cookiengineer•Feb 16, 2026
You are replying to someone whose account name is tabs_or_spaces, which in itself is so ironic that I have no words for it.
What people don't seem to realize is that, like you pointed out, there's still demand for the old "developer relations" type of job, and that job has kind of evolved through LLM agents into something like an influencer(?) position.
If I take a look at influencers and what they're able to build, it's not hardcore optimized, secured, and tested codebases; they don't have the time to acquire and hone those skills. They are the types who build little programs and little solutions for everyday use cases that other people "get inspired by".
You could argue that this is something like a teacher role, the remaining social component of the human-to-human interface that isn't automated yet. Well, at least not until the first generation of humans grows up with robotic nannies. Then it's a different, lower threshold of acceptance.
1000xcat•Feb 16, 2026
It took me a while to realise that most people don't care how it's done or how it works; they just want something useful and working (even if it's vibe coded or duct taped)
coldtea•Feb 16, 2026
>It's funny to me how still so many don't realize you don't get hired for the best positions for being a 10x programmer who excels at hackerrank, you get hired for your proven ability to deliver useful products
For a programmer, that's based on them "being a 10x programmer who excels at hackerrank".
For manager types it might be "Creativity, drive, vision, whatever".
>Code is a means to an end
For a business in general.
When hiring developers, code IS the end.
antfarm•Feb 16, 2026
> Quality of code has never had anything to do with which products are successful.
It may look like that, but many of the products with bad code didn't even make it into your vibe statistics because they weren't around for long enough.
DeusExMachina•Feb 16, 2026
> If you're the type of programmer who thinks of yourself as just a programmer, and take pride in your secure code, ability to optimize functions and algorithms, you're exactly the kind of programmer AI will replace.
I'm not sure how this follows logically from the comment you are replying to, which states:
> We have someone who vibe coded software with major security vulnerabilities.
conartist6•Feb 16, 2026
...huh?
10x programmers aren't the ones grinding hacker-rank.
Neither are the programmers like me who actually focus on building good systems under any significant threat.
And Facebook's codebase is pretty decent for the most part, you'd probably be shocked. Benefits of moving fast and breaking things include making developer experience a priority. That's why they made Hacklang to get off PHP and why they made React and helped make Prettier
oytis•Feb 16, 2026
Should I be sad or rather relieved that grifters will be able to grift without my help? I would just accept the reality and reeducate myself to some other field where hard engineering is still required, but I'm afraid AI will advance faster than my degree.
killbot5000•Feb 16, 2026
> Quality of code has never had anything to do with which products are successful. I bet both youtube and facebook's codebase is a tangled mess.
The code’s value is measured in its usefulness to control and extend the Facebook system. Without the system, the code is worthless. On the flip side, the system’s value is also tied to its ability to change… which is easier to do if the code is well organized, verified, and testable.
ljm•Feb 16, 2026
But it also looks like these companies value and pay for the tech bro version of a snake oil consultant. And that you still have to have a lot of things going in your favour for your own brand of slop to elevate you to tech celebrity status. I don't see anybody who isn't already well-connected or financially comfortable pulling this off because nobody who has something to lose will slop their way to the top.
I don't think it's a good thing that the craft of software engineering is so easily devalued this way. We can quite demonstrably show that AI is not even close to replacing people in this respect.
Am I speaking out of envy or jealousy? Maybe. But I find it disappointing that we have yet more perverse incentives to hyper-accelerate delivery and externalise the consequences on to the users. It's a very unserious place to be.
chamomeal•Feb 16, 2026
Delivering a product is one thing. Continuing to upgrade it and maintain it indefinitely is another. Good quality code makes it easier to make improvements and changes as time goes on. Doesn’t matter if you’re a human or an LLM.
Also, has anybody looked through the Openclaw source? Maybe it’s not so bad
almostdeadguy•Feb 16, 2026
Yeah you’re right, the engagement factories probably don’t care about code quality. The customer isn’t the customer after all.
2OEH8eoCRo0•Feb 16, 2026
Tell that to the creator of Homebrew, Max Howell
> "Google: 90% of our engineers use the software you wrote (Homebrew), but you can’t invert a binary tree on a whiteboard so fuck off."
kamaal•Feb 16, 2026
>>It's funny to me how still so many don't realize you don't get hired for the best positions for being a 10x programmer who excels at hackerrank
Competitive coding is oversold in this generation. You can log in to most of these sites and see thousands of solutions submitted for each problem. There is little incentive to reward solving a problem that a thousand other people have already solved.
To that end, it's also an intellectual equivalent of video game addiction. There is some kind of illusion that you are engaged in an extremely valuable and productive enterprise, but if you observe carefully, nothing much productive actually gets done.
Only a while back, excessive chess obsession had similar problems: people spending whole days doing things which make them feel special and intelligent, but to any observer at a distance it's fairly obvious they are wasting time and getting nothing done.
groundtruthdev•Feb 16, 2026
Would you feel comfortable flying on an airplane where the programmers don’t care about secure code, correctness, or the ability to reason about and optimize algorithms—where “good enough” is the philosophy? Most people intuitively say no, because in safety-critical and large-scale systems, engineering rigor isn’t optional. Software may look intangible, but when it runs aircraft, banking systems, or global platforms, the same discipline applies.
The “Facebook/YouTube codebases are a mess so code quality doesn’t matter” line is also misleading. Those companies absolutely hire—and pay very well—engineers who obsess over security, performance, and algorithmic efficiency, because at that scale engineering quality directly translates to uptime, cost, and trust.
Yes, the visible product layers move fast and can look messy. But underneath are extremely disciplined infrastructure, security, and reliability teams. You don’t run global systems on vibe-coded foundations. People who genuinely believe correctness and efficiency don’t matter wouldn’t last long in the parts of those organizations that actually keep the lights on.
juggle-anyhow•Feb 16, 2026
Do you think the people writing the code that operates aircraft care about code quality? After the Boeing incident, I do not.
groundtruthdev•Feb 16, 2026
Fair point and that’s exactly why Airbus has been eating Boeing’s lunch. When engineering culture takes a back seat to cost, schedule, and optics, outcomes diverge fast. In safety-critical systems, rigor isn’t optional, it’s the competitive advantage.
bronco21016•Feb 16, 2026
I find it difficult to believe software is Airbus’ competitive edge. First, their software for aircrew bidding is an absolute and utter disaster. Date filtering has been broken nearly a year despite multiple releases being pushed. Date management is like THE KEY functionality of aircrew bidding. I also use their flight plan software and it’s like they never bothered to ask a pilot how they use a flight plan in flight.
I think Airbus is riding the coattails of solid engineering done in the 80s and continuing to iterate on that platform, vs Boeing trying to iterate on a hardware platform from the 60s. Airbus benefited significantly from 20 years of engineering and technological progress. Since the original design of the A320, changes have been incremental: slightly different engines, the addition of GPS/GNS, CPDLC, CRT to LCD screens. Meanwhile, Boeing has attempted to take a steam-gauge design from the 60s and retrofit decades of technology improvements and, critically, they attempted to add engines that significantly altered the aerodynamics of the aircraft.
collimarco•Feb 16, 2026
> your proven ability to deliver useful products
Which is not the case. It's just a useless product, without any real use case, which also introduces large security bugs in your system.
amelius•Feb 16, 2026
> you get hired for your proven ability to deliver useful products
Huh, if you make finished products you better start your own company.
yaku_brang_ja•Feb 16, 2026
This is so not true.
lbrito•Feb 16, 2026
>you get hired for your proven ability to deliver useful products
Tell that to the guy that made brew and tried to interview at Google
getoffit•Feb 16, 2026
> Code is a means to an end.
Product is a means to an end.
Being good at something is a means to an end.
That end? Barter for food and shelter, medicine.
The means to do so, code or the delivery of a product, are eventually all depreciated and thrown away. You eventually age into uselessness and die.
Suddenly having an epiphany it's not about code but product! way too late in the game, HN... you're just trying to look like you got it figured out and bring deep fucking value to humanity right as "idea to product without intermediary code layer" is about to ship[1]. You already missed your window.
You still don't get the change that's needed and happening due to automation; few of us want to put you on their shoulders and sing songs about you all.
Hop off the Hedonistic Treadmill and get some help.
[1] am working on idea to binary at day job, which will flood the market with options and drown yours out
asveikau•Feb 16, 2026
> If [you] ... take pride in your secure code
I don't object to most of what you're saying, but I take issue with this part.
This happens to be an area where lapse or neglect can be taken as a moral failure. And here you are mocking people who are concerned about it.
If someone uses AI to architect a bridge and the bridge collapses, you couldn't say that the structural integrity of the bridge wasn't the important part.
jorvi•Feb 16, 2026
> you get hired for your proven ability to deliver useful products.
Ah, right. Write "Brew", which gets used by thousands of devs at Google every day, and then get rejected in an interview.
baby•Feb 16, 2026
I’m surprised to read this comment. I totally get why openAI hired the guy, IMO its a brilliant hire and I wish Meta would have fought more to get him (at the same time Meta is very good at copying and I think they need more people pushing products and experiments and less processes, they’ve been traumatized by cambridge analytica and can’t experiment anymore)
jrowen•Feb 16, 2026
It's telling me that I should rather focus on getting viral/lucky to get a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents.
All of this is true and none of it is new. If your primary goal is to make lots of money then yes you should do exactly that. If you want to be a craftsman then you'll have to accept a more modest fortune and stop looking at the relative handful of growth hacker exits.
democracy•Feb 16, 2026
I don't know this guy's abilities so can't comment on that, but looking at how much AI companies spend on marketing - that's a great hire.
tin7in•Feb 16, 2026
If you read his blog you’ll find about a lot of his engineering decisions.
Peter was right about a lot of the nuances of coding agents and ways to build software over the last 9 months before it was obvious.
mattmanser•Feb 16, 2026
Was he? Openclaw is now dead, right? The software will now die. No-one's going to maintain it.
This was a short-term gain for a long term loss.
I remember in the Web3 era some team put together a one-page CV site, literally a site you could put your LinkedIn, phone number, and email on, but pretty, and it was bought for millions.
Was the product a success, or the marketing? Because the product was dead within weeks.
There's a lot of low hanging fruit in AI at the moment, you'll see a few more things like this happen.
tin7in•Feb 16, 2026
> No-one's going to maintain it.
Why? He's going to maintain it and the community is large enough. Another sci-fi idea that's slowly becoming real is that the project is maintaining itself.
OpenClaw is a bunch of projects that evolved together (vibetunnel, pi-mono, all the CLIs). It's even more interesting to see the next iterations, not only what happens to this project.
mattmanser•Feb 16, 2026
We've been here a thousand times. This exact news happens over and over again on HN. Why would it be any different from the other times?
This is what US tech companies do to stay dominant. Buy and kill.
This project will die now; that's the point of buying him. For super cheap too, by the sound of it.
MangoCoffee•Feb 16, 2026
Openclaw is an open source project. That's the beauty of open source: the community can take over and people can fork it. There are already many clones of Openclaw.
KellyCriterion•Feb 16, 2026
do you know about www.linktr.ee ? :-D :-D
CobrastanJorji•Feb 16, 2026
> Would you hire someone who never read any if the code that they've developed?
I mean, if I'm a company specifically in the business of selling to companies the idea that they can produce code without reading any of it? Yeah, obviously I'd hire them.
loandbehold•Feb 16, 2026
The guy has a long history of building popular products, long before vibe coding became possible. He is certainly good at writing code manually as well.
stingraycharles•Feb 16, 2026
I genuinely think people on HN are having the misconception that vibe coding == don’t care about (the quality of) the code.
I like to think it’s the same as delegating implementation to someone else.
matthewkayin•Feb 16, 2026
Except there are literally people on this thread saying that this is proof that code quality doesn't matter and that we all need to "wake up". It's the same as the people saying that spec driven development is the way of the future and that engineers should be focusing on the spec and not even looking at the code.
If you use LLMs and you do care about code quality, then great. But remember that the term vibe coding as it was coined originally meant blindly accepting coding agent suggestions without even reviewing the diffs.
Many of the people aggressively pushing AI use in code are doing so because they care more about delivering products quickly than they do about the software's performance, security, and long-term maintainability. This is why many of us are pushing back against the technology.
Ultimatt•Feb 16, 2026
Errr, it's always been extremely true that social networking brings success, with far more value returned than writing great code nobody knows about or uses.
malthaus•Feb 16, 2026
it's a tough pill to swallow for developers, but nobody cares about your ability to write code. people care about you shipping something people want.
i can easily hire 100 sweatshop coders to fine-tune your code once i have a product that works, but the inverse will never happen
dinkumthinkum•Feb 16, 2026
What percentage of programming job interviews ever went like that? They ask FizzBuzz, they ask DP, they ask system analysis and design, and some culture fit. Maybe some people ask this B-school type stuff, but who is out there verifying deliverables from people's previous jobs?
nedt•Feb 16, 2026
Well, you don’t see the real value of coding tasks during an interview. What gets tested are your communication skills: how you think and express your thinking. You will be working in a team, so you need to at least fit in and work with others. You are right that no one cares about your FizzBuzz.
dns_snek•Feb 16, 2026
That's such a bizarre thing to claim when offshoring software development has historically been a huge failure. You've always needed competent technical staff with even more demanding management requirements to stand a chance.
ookblah•Feb 16, 2026
not sure why i find a lot of these types of comments lately, just a sign of the times i guess? criticism sure, but to reduce all of his work as if it were a paragraph prompt or something, that's something else.
i hate when people start bringing up the "luck" factor as if you are the only smart one here to realize that it also plays a huge factor.
you want to make lots of money? change your mindset, stop making excuses and roll the dice. it won't guarantee success, but i also guarantee nobody who did so would ever lament how unfair it was that they worked so hard and someone else succeeded through "luck" so they might as well not try.
teekert•Feb 16, 2026
It’s not the code. It’s the vision and the can-do attitude. And perhaps a bit of the (earned) name.
enieslobby•Feb 16, 2026
I see a guy who has shown evidence that he has the skill and agency to successfully ship and scale a project that people want, pushing the frontier tools to their limits. That is valuable.
imjonse•Feb 16, 2026
> a project that people want,
do many people actually use openclaw (a two week old project IIUC) or is it just hyped up?
elAhmo•Feb 16, 2026
Many people use it.
Also he made a few other products, some of which were used probably by more than a billion people.
Flockster•Feb 16, 2026
Judging by the OpenRouter rankings, it is No. 1 or 2 in token usage, depending on the view (daily, weekly, monthly).
dieortin•Feb 16, 2026
Most LLM usage does not go through OpenRouter. Most people access LLMs through the ChatGPT, Gemini, etc. apps, or integrations into popular products.
The percentage of total tokens handled by OpenRouter is a tiny blip.
imjonse•Feb 16, 2026
I was half-jokingly telling someone the other day (before I knew what OpenClaw was or anything about this story), that as the ability to code is becoming commoditized, sales and marketing skills are going to be more important, shifting power from techies to influencers and we may see Mr Beast become a software powerhouse.
wasmainiac•Feb 16, 2026
> It's telling me that I should rather focus on getting viral/lucky
This is the real danger of social media and other platforms. I know teachers in the school system; way too many kids want to grow up to be influencers and YouTubers, and try to act like them too.
At the risk of sounding like an old man yelling at the sky, this is not good for society. Key resources and infrastructure in our society are not built on viral code or YouTubers, but on the slow clip of engineering and economic development. What happens when everyone is desperately seeking attention to go viral? And I don't blame the kids; the influencers by nature show a very exciting or lavish lifestyle.
throwaway2037•Feb 16, 2026
> way too many kids want to grow up to be ... YouTubers
What's wrong with wanting to be a YouTuber? At this point, it's really a very small TV channel. And YouTube essentially allows for an infinite number of these very small TV channels, unlike traditional TV.
> way too many kids want to grow up to be influencers
You can replace "influencers" with "wanting to be popular". That is as old as time. To me, if you look closely at (social media) influencers, they are nothing more than people who were popular in high school and managed to extend it for a few years with the use of social media.
m000•Feb 16, 2026
> To me, if you look closely at (social media) influencers, they are nothing more than people who were popular in high school and managed to extend it for a few years with the use of social media.
That's a very superficial similarity. It's one thing for a kid to wish to be popular in their extended social circle, and a very different thing for a young adult to be convinced that they can "grind" their way to influencer fame and money.
The young adult may never have heard of or considered the extreme survivorship bias in the stories of successful influencers.
wasmainiac•Feb 16, 2026
> What's wrong with wanting to be a YouTuber?
Which YouTubers are we talking about here? Hobbiests? People chasing social clout? People who like making stuff and sharing it? People pushing negativ social attitudes? Context matters.
I’m talking about young adults not preparing for their future because they think they are going to become millionaires on YouTube, they focus on what is essentially a culture of grifting, with a success rate similar to winning a lottery.
I'm not sure what has to change, but the current state of things is not healthy.
benreesman•Feb 16, 2026
If you want to make a million bucks a year then go put in three consecutive quarters of demonstrable lift on a revenue-adjacent metric at Stripe or Uber.
If you want to make a zillion a year ask Claude to search for whatever Zuckerberg is blowing a billion on this quarter.
All of those companies are certain to exist in 12 months. Altman is flying to Dubai like every other week trying to close a hundred billion dollar gap by July with a 3rd place product and a gutted, demoralized senior staff.
Kiro•Feb 16, 2026
> We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
Why this insinuation? The project went massively viral and was even covered in my local newspaper. I don't see any reason to doubt those numbers.
LtWorf•Feb 16, 2026
As if newspapers never did paid promotion articles?
Kiro•Feb 16, 2026
Yeah, the small local newspaper for my town of 30k is a paid shill for OpenClaw. Amazing comment.
throw444420394•Feb 16, 2026
What sense did it make to do something like Instagram? There were already N social networks where you could share photos. No technical excellence was needed. It was just momentum, being in the right incubator, and so forth... I understand what you are saying, but it has always been like that.
philipallstar•Feb 16, 2026
Well - no. There are some products where the product itself was relatively simple to build, and the rest was product-market fit. Those are the easy ones technically, but that's not the only type of successful product. YouTube wouldn't be working today if it broke all the time under load.
FreeRadical•Feb 16, 2026
Read his backstory. He’s a high quality software engineer by background.
BryantD•Feb 16, 2026
The bit about purchased stars and followers is a bit out of left field. Is there a piece of news I missed?
nb777•Feb 16, 2026
No need to be jealous. If you had watched some of the interviews with this guy, you'd know that he's not vibe coding.
column•Feb 16, 2026
It does not matter that he vibe-coded it. It does not matter if any stars/twitter posts were bought. He generated hype, and that's what big AI companies need at the moment. By hiring him, they get a cut of that hype. If he's no good (at generating any hype) in the coming months, he'll be gone. It's hype all the way down.
catwell•Feb 16, 2026
You are most likely confusing OpenClaw with Moltbook, which is the project that had the most glaring vulnerabilities. But even if OpenClaw was full of holes it would not matter.
Peter is not just a random "vibe coder" and he does not need to be hired by OpenAI to achieve "success". Before this he founded and sold a company that raised €100M. It is not his first project in the space either (see VibeTunnel for instance).
OpenAI is not hiring him for his code quality. They are hiring him because he proved consistently that he had a vision in the space.
debugnik•Feb 16, 2026
Not Moltbook, ClawHub. Over 15% of ClawHub skills were malicious at one point, including the most downloaded. And they haven't even tried to solve prompt injections.
planb•Feb 16, 2026
ClawHub isn't even useful. You can just tell your OpenClaw agent what you want it to do, and it will implement it. No need to rely on someone else's code^H^H^H^H textual descriptions of how to talk to service xyz.
deanc•Feb 16, 2026
What vision? Everyone and their mother has been trying to build useful AI assistants and personal CRMs since computers were invented - way before LLMs. He glued it together, and he succeeded because he executed before anyone else.
I applaud what he's done, and wish him luck trying to get this working safely at scale, but the idea that he's some visionary that has seen something the rest of the world hasn't is ludicrous.
citizenpaul•Feb 16, 2026
HN really hates understanding business. All these comments, yet no one has gotten the answer right.
OpenAI bought marketing, and now no one else can buy OpenClaw and lock OpenAI out of revenue from a project that is gaining momentum.
There are many of these business moves that seem like nonsense.
1. Bought for marketing.
2. Adversarial hire, i.e. hire highly skilled people before your competitors can, even if you don't have anything for them to do. Yet...
3. Acqu-hire. Buy a company when you really just want some of the staff.
4. Buy Customers. You don't care about the product and intend to migrate their customers to your system.
5. Buy competition before its a threat.
thorio•Feb 16, 2026
Maybe it is still supposed to sound fancy to say you didn't read any of the code. The guy definitely could very deeply understand, read and edit the code; he developed the industry-standard library for PDF editing (used by Dropbox etc).
Just saying what you want might be the future for development of some kinds of software, but this use case sure seems like a very bad idea.
I very much appreciate the vision he put into practice, but I kind of feel sorry for the project being acquihired.
wanderingmind•Feb 16, 2026
Whenever technically more capable folks diss the growth of a non technical person into bigger roles, I'm obligated to post this Steve Jobs video being asked about Java.
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
It doesn't? You'd need to know the odds for the tell. Like how many incompetent grifters are there, how many of them become hugely successful?
nedt•Feb 16, 2026
You don’t need the lucky shot. But luck needs room to happen. What you need to grow into is becoming a leader. Mentor others, lead by example, suggest new things and build prototypes for show and tell. All that is actually the growth path for good senior software engineers, not becoming a middle manager creating Excel sheets.
And that’s more or less all he did. Had an idea, built a prototype, showed it to the world and talked about it - even inspired people who are now saying „I could have done that“. Well, do it, but don’t just copy. Improve the idea and create something better. And then share it very early. You might get lucky.
seanoreillyza•Feb 16, 2026
Do good or at least useful work in public and you'd be surprised at what can happen.
seydor•Feb 16, 2026
I'm more jealous of his muscles and butt
bjourne•Feb 16, 2026
Props for admitting jealousy and for being honest! I often feel the same way when fixing bugs in others' code.
anilakar•Feb 16, 2026
> vibe coded software with major security vulnerabilities
> vibecoded without reading any of the code
Remember when years ago people said using AI for critical tasks is not an issue because there is always a human in the loop? Turns out this was all a lie. We have become slaves to the machine.
imtringued•Feb 16, 2026
It's also from a guy who rebranded twice in a row (Clawdbot, Moltbot, OpenClaw), and this move is technically his third rebrand.
gadders•Feb 16, 2026
I wouldn't necessarily expect him to be hired as their lead developer, but I think he would be a good product manager. He's clearly created something people want and see potential for.
21asdffdsa12•Feb 16, 2026
One day Atlas may shrug, but not today, atlast..
PurpleRamen•Feb 16, 2026
It's the old story: evil, irresponsible behaviour has a higher chance of success than being good and responsible. AI's recent history is a good example. Google had the lead, but lost it (temporarily) to OpenAI, because Google was responsible and not willing to open Pandora's box. Apple seems to have had something similar to OpenClaw for a while now, but withholds it from release because it's too insecure. History is full of people burning the world for their own greed and getting rewarded for it; they then call it "taking risks" and "thinking outside the box"... I think the underlying reason might be that too many people assume there is some level of competence behind the irresponsible behaviour, and that it's all just controlled harm or something like that.
sauercrowd•Feb 16, 2026
Really surprised by all the comments here. They didn't hire him because of the amazing security OpenClaw had, but because he's one of the first ones who made a truly personal assistant that's actually valuable to people.
It's about what he created, not what he didnt create.
They're not acquiring the product he built, they're acquiring the product vision.
jonmc12•Feb 16, 2026
Also surprised; building something people want and proving it is the unlock. HN first principle since the beginning.
girvo•Feb 16, 2026
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success"
Kids and young people have known this forever at this point. Sadly.
elAhmo•Feb 16, 2026
You focused only on the past few months of his career, but this is just the tip of the iceberg. He was active for more than a decade, from early iOS development days to having a fairly successful exit.
So after almost two decades of hard work, it is not really fair to say he just vibe-coded his way into OpenAI.
Semantics and grammar jokes aside... there are not many workers remembered in history. Only the so-called absolute greatest, meanest, etc. are remembered. Nobody remembers the people who worked on the pyramids, but everyone knows some pharaoh.
In this case they hired someone who has 'mastered' the use of their own tool(s). Like if Home Depot hired a guy who has almost perfect knowledge of each and every tool in their own portfolio.
I'm not really sure if i want to be that guy.
ryanar•Feb 16, 2026
Pete didn’t just vibe code, he took his many years of engineering experience and applied it to build a ton of products, pushing the boundaries of todays models and harnesses.
I am saddened that the top post is about jealousy. Do so many people feel this way? Jealousy is something we should reflect on privately and work through when we feel it, because it is an emotion that leads people to write criticism like this, biased by their emotional state.
bhaak•Feb 16, 2026
If you just commit AI generated code without even looking at it it doesn't matter how many years of engineering experience you have.
blueblazin•Feb 16, 2026
As I understand it, Peter had already retired early after a successful startup exit and presumably has more money than he knows what to do with. Does that help you feel less jealous of him getting a job at OpenAI?
alberto467•Feb 16, 2026
Well, once you learn that hard work does not pay, it's really your own fault if you keep believing in it.
What matters is the result, not how hard you worked at it. Schools and universities have been teaching this for a long time: what matters is just the grade, the result.
antfarm•Feb 16, 2026
>We have someone who vibe coded software with major security vulnerabilities. This is reported by many folks
>We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
And we have a company whose product should adhere to the highest security standards possible, hiring this guy.
qmr•Feb 16, 2026
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
Well duh. I thought that was well understood.
The other option is having well-to-do parents a la Musk or Gates.
Have you tried that?
powerapple•Feb 16, 2026
I don't think he was hired for coding; he was hired for the product. It's not that he is going to join a product team and code; he will probably lead and influence the product, which other software engineers can help fulfill.
nasmorn•Feb 16, 2026
What he built is genuinely interesting, even if it is not something I would want to give all my credentials to. It makes sense for OpenAI to hire someone who has shown he can build something a lot of people want, even if he doesn't know how to make an even half-secure app out of it.
They probably think he has the right judgement of where UX would need to move to. That is easily more valuable for them than any coding.
koe123•Feb 16, 2026
In my view this is just an acquihire to get a headline and take ownership of this trend. Yet another pivot to build hype.
123malware321•Feb 16, 2026
don't be jealous. working for some evil corporation is soul-sucking for most humans. Only a few thrive in such environments. most will try to get quick $$ and exit before they feel completely dead inside.
swiftcoder•Feb 16, 2026
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
I'm pretty sure that's meant to be the general lesson of the last 20 years or so in Silicon Valley, but it's just survivorship bias in action.
You don't hear a whole lot about the quietly successful engineers who work a 9-till-5 and then go home to see their wife and kids. But you do constantly hear about the folks who made it big YOLO'ing their career and finances on the latest startup/memecoin/vibecoded app...
morningsam•Feb 16, 2026
Exactly. This whole thing just seems like a repeat of Flappy Bird to me. What was the "lesson" of Flappy Bird for game developers? That you should make very small, very simple games? How has that worked out for the vast majority of copycats who tried? The truth is there isn't any lesson, other than "sometimes people play the lottery, get lucky and win". Most people who play won't, though.
DivingForGold•Feb 16, 2026
If all the above is true, why didn't Sam & Co. just replicate his product and offer their own improved version, with security incorporated?
ass22•Feb 16, 2026
Because he wanted to say "take that" to Anthropic, who forced Pete to change the name of the product.
Aperocky•Feb 16, 2026
Vibe coding is just a tool - same with programming languages and compilers.
The product being useful and well received by users and the market is still the ultimate test. Whether something is vibe-coded or not does not matter.
chvid•Feb 16, 2026
Maybe think of this as a hiring of a marketer and tech influencer. And someone with the chops to create a viral product.
jimmydoe•Feb 16, 2026
Exactly.
If AI companies believe code generates itself, then people who can scale up sales are the only hires worth making.
I think you’re conflating things. You probably are not jealous but rather frustrated, coming from a false dichotomy, trying to equate your position with his. If you were to stop and actually compare your lives, you would likely find very different humans. It’s easy to fall into this trap sometimes; don’t let it get into your head. Be grateful for being you and enjoy what life has to offer you instead.
Danidada•Feb 16, 2026
This is such a strange take to be the top comment of “hacker” news. Why are we shaming someone who “hacked” something together and made it open source?
motbus3•Feb 16, 2026
I think it is unfortunate that this is happening. After all the mishaps and wrongdoings, I don't want to see anyone joining OpenAI.
aerhardt•Feb 16, 2026
> We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
This isn’t right. He says very clearly in the recent Lex Fridman podcast that he looks at critical code (e.g. database code). He said he doesn’t look at things like Tailwind code.
turtlebro•Feb 16, 2026
Ugh. Have we all forgotten that jealousy is the opposite of a virtue? Why does this get upvoted? Hacker News is in a truly despicable state these days when this is what bubbles to the top. It saddens me to see that all the good people here have left or stopped participating. When we hear how rotten social media is, that also includes HN.
thecupisblue•Feb 16, 2026
It's not about the code, it's about the vibe.
Also, Peter is quite well known in dev circles, and especially in mobile development communities, for his work on PSPDFKit. It is not like he's some unknown developer who just blew up: he owned a dev tooling company for over 10 years, contributed a lot to the community and is a great dev.
People seem to think that because we all have the same tools and because they’re increasingly agentic, that the person wielding the tool has become less relevant, or that the code itself has become less relevant.
That is just not the case, at least yet, and Peter is applying a decade plus of entrepreneurial and engineering experience.
BoggleFiend•Feb 16, 2026
He was recently interviewed on Pragmatic Engineer, a podcast whose guests almost always have very impressive technical careers (the episode before him was Mai-Lan Tomsen Bukovec, the VP of Data and Analytics at AWS, and the episode after him is Grady Booch, the Chief Scientist for Software Engineering at IBM).
I agree that summarizing Peter as a "vibe coder" is unfair and disingenuous. The podcast paints his career as interesting because he went from being an impressive software developer, to an entrepreneur, to taking a significant break, to kind of obsessively creating Clawdbot.
> Then after a bunch of podcasts and interviews, this person gets hired by a big tech company
I think the whole OpenClaw arc has been fun to follow, but this sudden turn away from OpenClaw and toward the author as a new micro-celebrity that ended with OpenClaw being sidelined to a foundation was not what I saw coming.
Congrats to Pete for getting such an amazing job out of this, but it does feel strange that only a few days ago he was doing the podcast circuit and telling interviewers he has no interest in joining AI labs.
I don’t think this story arc should be seen as something replicable. Many have been trying to do the same thing lately: Hyping their software across social media and even podcasts while trying to turn it into cash. Steve Yegge is the example that comes to mind with his desperate attempts to scare developers into using his Gas Town (telling devs “dude you’re going to get fired” if they don’t start using his orchestration thing). The best he got out of it was a $300K crypto pump and dump scam and a rapidly dropping reputation as a result.
Individuals who start popular movements have always been hiring targets for energetic companies. In the past the situation has been reversed, though: remember when the Homebrew creator was rejected from Google because he didn't pass the coding interview? (Note he later conceded that Google made the right call at the time.) That time, the internet was outraged that he was not hired, even though being hired would likely have meant the end of Homebrew.
I do think we’ll be seeing a lot of copycat attempts and associated spam promoting them (here on HN, too, sadly) much like how when people see someone get success on YouTube or TikTok you see thousands of copycats pop up that go nowhere. The people who try to copycat their way into this type of success are going to discover that it’s not as easy as it looks.
shin_lao•Feb 16, 2026
See this as winning the "startup lottery"; it doesn't mean what he did was rational or smart, he just had a great outcome.
In trading it's the same: you can make stupid bets and make a lot of money; it doesn't mean you're a good trader.
Nothing to conclude from this, this kind of hype-fueled outcome has always been a part of life.
mvkel•Feb 16, 2026
This should be a wake up call. A product's value is not a function of its code elegance. Nobody who matters notices the code, or cares. This is hugely inspiring to the most lazy+clever engineers, because it frees up so many thinking calories. Instead of trying to perfectly choreograph every bit of architecture to optimize for 1M concurrent users, you can spend 0.1 of the time and get things out the door, where you learn if spending even a minute of your time was worth it. Even better, when you realize tech debt is something that never needs to be paid down, you can focus all your energy on evolving your thinking patterns, not being bogged down in refactoring things that you've spiritually moved on from. An engineer's time is so precious; it needs to be spent thinking, not coding.
runjake•Feb 16, 2026
> We have someone who vibe coded software with major security vulnerabilities. This is reported by many folks
> We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
Peter was pretty open about all of this. He doesn't hide the fact. It was a personal hack that took off and went viral.
> We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
My guess, from his unwillingness to take the free pile of cash from the bags.fm grift, is that this is unlikely. I don't know that I would've been able to make the same decision.
> Then after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any if the code that they've developed? Well, this is what happened here.
Yes, I'd hire him. He's imaginative and productive and ships and documents things. I can fix the code auditing problem.
> In this timeline, I'm not sure I find anything inspiring here.
Okay?
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
Peter has been in the trenches for years and years, shipped and sold. He's written and released many useful tools over the years. Again, this was a project of personal love that went viral. This is not an "overnight success" situation.
> So am I jealous, yes because this timeline makes no sense as a software engineer. But am I happy for the guy, yeah I also want to make lots of money someday.
Write and release many, many useful tools. Form a community and share what you're building and your chances will greatly increase?
elwatto•Feb 16, 2026
nah. focus on building cool things people want.
j45•Feb 16, 2026
He didn't create or release something as finished.
He built something and shared it.
People took liberties with it.
It's not about getting viral/lucky... it's about enjoying experimenting and learning.
Money follows your unique impact and imprint in these kinds of cases.
mempko•Feb 16, 2026
Don't forget he also had Sam Altman's phone number. Do any of you have his number? Also, before he did all this he was semi-retired for 5 years because of a successful exit. So anyone thinking they can replicate this should ask...
1. Are you already rich? Do you have cash in the bank to vibecode a project fulltime for many months just for fun?
2. Do you have Sam Altman's (or similar) number?
edstarch•Feb 16, 2026
Such is life in the attention economy.
h14h•Feb 16, 2026
OpenAI, Anthropic, and other model providers have created tools (the LLMs) with unprecedented new capabilities. The key problems are a) these new tools have weird limitations that make them hard to deploy effectively, and b) these tools are so fundamentally new that creating useful products out of them is an exercise in discovery and requires incredibly novel, forward-thinking vision.
Pete, more than anyone in the OSS community IMO, exemplifies both of these qualities. He is living very much on the bleeding edge, so yes, the tens of projects he's shipped faster than most devs can ship one are not as polished as if he'd created them by hand. But he's been pushing the envelope in ways that few, if any, are, and I'd argue that OpenClaw is much more the result of Pete living on that edge and understanding the trade-offs of these tools better than just about anyone.
Personally, I'm much more jealous of the fact that Pete has already had a successful exit under his belt and had the freedom to explore & learn these tools to the fullest. There is definitely a degree of luck involved with the degree to which OpenClaw took off, but that Pete discovered it is 100% earned IMO.
sathish316•Feb 16, 2026
Peter clarified the “I don’t read code” part in the Lex Fridman interview: he said “I don’t read the boring parts of the code”, the parts about data transformation or writing to/reading from databases.
He distinguished between what he calls “Agentic Engineering” and “Vibe coding”, and claimed that the majority of the time he is not just vibe coding.
He has 80,000+ GitHub contributions in a year across 50+ projects. I’m not sure how he averages 200 commits per day by just looking at diffs from a terminal, but it’s just superhuman - https://github.com/steipete
bluerooibos•Feb 16, 2026
I've seen the same result play out a few times on LinkedIn - random person studying for an MS in CS or AI, blogs and posts about stuff they're vibe coding with Lovable or whatever, builds a decent following, and then, from tagging various AI-related firms, lands a job at one of them.
The field has kind of been like this for a while - people with portfolios of proven work done, showcasing yourself and your personality via blogs or vlogs makes you sort of a known quantity, versus someone with just a CV and a LinkedIn page.
This is yet another example of an area where extroverts have an advantage. You could be 10x the engineer that the creator of OpenClaw is, but that's irrelevant in this timeline if nobody has ever heard of you.
neop1x•Feb 16, 2026
Good for him! But it is possible he won't stay there for long. Like Geohot at Apple. There is a difference between working on a fun project which you completely control and being under constant pressure, having to follow constraints and requirements set by managers in a corporation.
jstummbillig•Feb 16, 2026
You could always get, mh, lucky. That is the most common startup exit plan: Finding someone who pays for half a business or an idea. Now it happens more quickly. Everything does.
But that path was never about writing good code.
svnt•Feb 16, 2026
Jealousy is exactly the reaction they are hoping to trigger: use our tools, build something popular, get paid out. What better marketing spend than buying this project.
skeptic_ai•Feb 16, 2026
I was jealous too until I realized this is just an ad for OpenAI. They want to show that you can vibe code an app and actually become a millionaire. What better way to show it than to actually do it?
Well, here you have it: a low-effort wiring of a few tools together with spaghetti gen-AI code, and he's a millionaire in a few months. Ok, I might be mean by saying low effort, I actually don't know. But I know vibe coding won't work for more than a few weeks. Also, I think this bot is just a connector to multiple open source libraries that connect to WhatsApp and other services.
This is the best ad to sell AI: you can be a millionaire too if you use our ChatGPT to vibe code stuff.
I think it will get a negative reaction in a few weeks when the dust settles as technical people realize it’s an ad.
Note: he might be an amazing developer but the ad still stands.
Edit: from Gemini: Publicly Embarrassing Anthropic: The timing is brutal. Anthropic’s legal team forcing a name change (from "Clawdbot" to "Moltbot" to "OpenClaw") alienated the very developer who was driving millions of users to their model. OpenAI swooping in to hire him days later frames Anthropic as "corporate lawyers" and OpenAI as "friends of the builders." It’s a perfect narrative victory.
secbear•Feb 16, 2026
I feel similar... OpenClaw has lots of vulnerabilities, and it's very messy, but it also brought self-hosted cron-based agentic workflows to your favorite messaging channel (iMessage, telegram, slack, WhatsApp, etc.), which shouldn't be overlooked
giancarlostoro•Feb 16, 2026
> Then after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any if the code that they've developed? Well, this is what happened here.
I have a feeling that OpenAI and Anthropic both use AI to code a lot more than we think. We definitely know and hear about it at Anthropic; I haven't heard as much at OpenAI, but it would not surprise me. I think you 100% can "vibe code" correctly. I would argue that, with the hours you save coding by hand, debugging, etc., you should 100% read the code the AI generates. It takes little effort to have the model rewrite it to be easier for humans to read. The whole "we will rewrite it later" mentality that never comes to pass is actually possible with AI; it's one prompt away.
chillacy•Feb 16, 2026
Boris has been very open about the 100% AI code-writing rate, and my own experience matches. If you have a TypeScript or similarly common codebase, once you set your processes up correctly (you have tests/verification, you have a CLAUDE.md or AGENTS.md that you always add learnings to, you add skill files as you find repeatable tasks, you have automated code review), it's not hard to achieve this.
Then the human touch points become coming up with what to build, reviewing the eng plans of the AI, and increasingly light code review of the underlying code, focusing on overall architectural decisions and only occasionally intervening to clean things up (again with AI)
giancarlostoro•Feb 16, 2026
I'm aware. I've been building my own alternative to Beads for weeks now; the instructions.md file plus something like Beads is epic.
I didn't like how married to git hooks Beads was, so I made my own that's primarily a SQLite workhorse. I've been using it just the same as I used Beads; it works just as well, with drastically less code. I added a concept called "gates" to stop the AI model from just closing tasks without any testing or validation (human or otherwise), because that was another pain point for me with Beads.
It works both ways, to GitHub and from GitHub. When you claim a task, it's supposed to update on GitHub too (though looking at the last one I claimed, it doesn't seem to be 100% foolproof yet).
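The "gates" idea above could be sketched roughly like this (hypothetical table and function names, not the commenter's actual tool): a task row can only move to closed once every gate attached to it has been marked passed.

```python
import sqlite3

def close_task(conn: sqlite3.Connection, task_id: int) -> None:
    """Refuse to close a task while any of its gates is unpassed."""
    open_gates = conn.execute(
        "SELECT COUNT(*) FROM gates WHERE task_id = ? AND passed = 0",
        (task_id,),
    ).fetchone()[0]
    if open_gates:
        # The agent hits this error instead of silently closing the task.
        raise RuntimeError(f"task {task_id} has {open_gates} unpassed gate(s)")
    conn.execute("UPDATE tasks SET status = 'closed' WHERE id = ?", (task_id,))
```

A validation step (a passing test run, or a human sign-off) would flip `passed` to 1 before the close is allowed.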
tom_m•Feb 16, 2026
I mean some might say that's like joining a sinking ship. Of course one man's trash is another man's treasure. To each their own.
Hiring in tech has been broken for many, many years at this point. There's so much noise, and only more noise coming now with AI. To be completely honest, it's entirely random from my end when hiring. We can't review every application that comes in. It's just impossible. We do weed out some of the spam, of course, and do get to real people who actually fit the requirements, but there are so many other talented people who would easily fit the role that simply get buried under applications. It's depressing from all sides. No one should think that they aren't any good, or did something wrong, or didn't network enough... because the unfortunate truth is that getting a job in tech is a lottery. Something many don't want to admit.
ass22•Feb 16, 2026
I'm working on a project to change this.
Funny that you mention 'real people'. There are a number of components that sit at the core of what I'm building; it should allow you to have the time and reach to vet more (100% verified) candidates than you ever could before. I also want to reduce the explicit costs of hiring so that firms can hire more people.
buschleague•Feb 16, 2026
This isn't a surprise at all. I sat down with the dev team at OpenAI during dev day last year and the biggest shocker to me: these "kids" are over here vibe coding the whole damn thing.
figassis•Feb 16, 2026
You don't get hired for any of that. OpenAI did not hire him because he went viral. Virality brought something interesting to OpenAI's attention. And they thought they could use this product idea/vision/execution, GTM strategy, whatever, and because it didn't seem like OAI had anyone on their team capable of this, they hired him.
Simple as that. Don't feel jealous, trying to replicate won't work, he did not know he'd be hired, he built something that he found interesting, and then realized it would be interesting to a lot more people.
The way to reach success is either to be strategically consistent in a way that maximizes luck surface area but does not depend on it, or to be unexpectedly lucky. The latter is gambling. People win the lottery regularly; that does not mean you should make it your mission.
Be comfortable with not being the one to hit gold. And yes, it's ok to be jealous. Take a moment and then go back and enjoy the rest of your life.
Finally, there are a lot of companies that would likely hire you, hoping to hit gold. But you are likely filtering them out because they're not tech/large/startupy enough for you.
These companies are wondering what they need to attract talent like you.
zombot•Feb 16, 2026
> The more I talked with the people there, the clearer it became that we both share the same vision.
Bringing unblockable ads to the masses. Roger that.
dschuetz•Feb 16, 2026
Good luck!
casualscience•Feb 16, 2026
Can someone explain what value openclaw provides over like claude code? It seems like it's literally just a repackaged claude code (i.e. a for loop around claude) with a model selector (and I guess a few builtin 'tools' for web browsing?)
timwis•Feb 16, 2026
From what I remember, the key differentiating features were:
- a heartbeat, so it was able to 'think'/work throughout the day, even if you weren't interacting with it
- a clever and simple way to retain 'memory' across sessions (though maybe claude code has this now)
- a 'soul' text file, which isn't necessarily innovative in itself, but the ability for the agent to edit its own configuration on the fly is pretty neat
Oh, and it's open source
casualscience•Feb 16, 2026
I see, so there's actually an additional for loop here, essentially `sleep(n); check_all_conversations()`. That is not something claude code does, for sure.
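For what it's worth, that outer loop is conceptually tiny. A rough sketch of the idea, where every function name is hypothetical (not OpenClaw's actual API), might look like:

```python
import time

def check_all_conversations():
    # Hypothetical stand-in: poll each connected channel (Telegram,
    # Discord, etc.) and return any new messages for the agent.
    return ["ping from telegram"]

def handle(message):
    # Hypothetical stand-in for dispatching into the inner agent loop.
    return f"handled: {message}"

def heartbeat(ticks, interval_seconds=0.01):
    """The extra outer loop: wake up periodically even with no user input."""
    handled = []
    for _ in range(ticks):  # a real daemon would use `while True`
        for message in check_all_conversations():
            handled.append(handle(message))
        time.sleep(interval_seconds)
    return handled
```

The interesting part isn't the loop itself but what runs on each tick: the agent can act on scheduled tasks and self-initiated work, not just replies.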
As far as the 'soul' file, claude does have claude.md and skills.md files that it can edit with config changes.
One thing I'm curious about is whether there was significant innovation around tools for interacting with websites/apps. From their wiki, they call out like 10 apps (whatsapp, teams, etc...) that openclaw can integrate with, so IDK if it just made interacting with those apps easier? Having agents use websites is notoriously a shitty experience right now.
baby•Feb 16, 2026
The point is that you interact with it through your messaging app
baby•Feb 16, 2026
Its a coding agent in a loop (infinite loops are rejected by coding agents usually) with access to your computer, some memory, and can communicate through telegram. That’s it. It’s brilliant though and he was the first to put it out there.
Aurornis•Feb 16, 2026
They serve different purposes. OpenClaw is supposed to be more of an autonomous sidekick assistant thing that can take instructions over different messenger channels. It can also be set up to take ongoing instructions and just churn on general directions.
kylemh•Feb 16, 2026
The main one is that you can run and/or host it remotely, unlike Claude Desktop. By this I mean, you can run OpenClaw on a service like Tailscale and protect your actual machine from certain security/privacy concerns and - regardless of the choice - you can connect your access to OpenClaw via any chat agent or SSH tunnel, so you can access it from a phone. If Claude Cowork comes to iOS/Android with a tunnel option, they can resolve this difference.
A smaller difference would be that you can use any/all models with OpenClaw.
casualscience•Feb 16, 2026
Hmm, what's stopping you from running claude code on a separate machine you can ssh into? I don't understand that point at all; I do that all the time.
Using a claude code instance through a phone app is certainly not something that is easy to do, so if there's like a phone app that makes that easy, I can see that being a big differentiator.
mcintyre1994•Feb 16, 2026
Something I learned while hacking on something recently is that claude’s non-interactive mode is super powerful. It uses all the same tools/permissions etc as interactive would, it can stream responses as JSON with tool use etc, and it can resume from a previous session. Together this means you can actually build a really slick chat-like UI for it.
This is using Claude on VMs that don’t have SSH, so can’t use a regular terminal emulator. They stream responses to commands over websockets, which works perfectly with Claude’s streaming. They can run an interactive console session, but instead I built a chat UI around this non-interactive mode.
OpenClaw is probably overkill if you just want a nice remote UI to access claude code and do tool call approvals. There are a ton of remote CLI apps and guides to set up SSH access via tailscale etc, but none that just work with a nice remote web interface.
For me personally, I can't stand interacting with agents via CLI and fixed-width fonts, so I built an e2e-encrypted remote interface that has a lot of the nice UI features you would expect from a first-class UI like the Claude VSCode extension (syntax highlighting, streaming, etc). You can self-host it. It's a little no-dependencies Node server that you can just npm install (npm i -g yepanywhere)
For programmers or people who know computers quite well, the difference from claude code is small, I would say. But for "normies" it's magical that you can just ask your computer to do anything from anywhere (set timers, install stable diffusion, send you a specific doc in your download folder). You don't even have to type it; you can send it a voice message and it will install whisper or send it to the openai whisper api, etc. Obviously this is more than dangerous, but looking at what passwords people still choose today (probably also the reason why everything requires MFA nowadays), most people don't care about security.
drdrey•Feb 16, 2026
for me it's that it works remotely and my kids can access it over Discord
Just uses claude. I haven't tried it much but it seems to be what you're describing.
Openclaw uses pi agent under the hood. Arguably most of the codebase could be replaced by systemd for scheduling if you're running on a VPS, and then it's a series of prompts on top of pi agent.
wiether•Feb 16, 2026
That's a brilliant move from OpenAI.
In the past, people wanting to sign a juicy contract at a FAANG were told to spend hours everyday on Leetcode.
Now? Just spend tokens until you build something that gets enough traction to be seen by one of the big labs!
WA•Feb 16, 2026
Just <whatever>… to gain lots of traction.
Gaining traction is the tough part.
geoffbp•Feb 16, 2026
Congrats!
koolala•Feb 16, 2026
I like the foundation ideas.
cbold•Feb 16, 2026
I thought it was an interesting focal point for agentic ideas, but now it is too flavored towards OpenAI
dinkumthinkum•Feb 16, 2026
What exactly is the grand vision of this person? He uses soaring language to describe changing the world for his grandma or something. What is his vision of the world, and the vision that all these smiling people in his pictures share? Is it complete economic collapse? Is it the complete destruction of society due to AI? Is that really so exciting?
I hope this results in an OpenAI client harness where the data is local.
keyle•Feb 16, 2026
Maybe one day there will be a book citing the weird influence of crustaceans on the tech world of this era... crabs, lobsters... I'm holding out for the next crayfish!
If you step back and look at this whole thing from a marketing and cash flow perspective, I think it makes a lot more sense.
It is in OAI's best interests to create a perception that flinging agentic swarm crap at the wall may result in lucrative job offers. Or to otherwise imply this is the golden path. They need their customers to consume ever more tokens per unit time. This highly contentious parallel agent swarm stuff is the perfect recipe.
hirako2000•Feb 16, 2026
Very good business observation.
Plus, employees who can inject hyped ideas are exactly the sort of efficient advertising openai relies on.
It will hurt when self-proclaimed coders realise, two years later with all their savings burned on tokens, that they cannot all get meaningful traction.
trusche•Feb 16, 2026
That picture at the end of the post really explains and sums up the problem with AI bias...
rasmus1610•Feb 16, 2026
So many people are so salty, it’s wild. That’s peak HN here
saos•Feb 16, 2026
I personally haven’t used open claw due to security concerns on my device.
How to mitigate this concern?
beaker52•Feb 16, 2026
Don’t use it, or give it access to nothing important, therefore vastly limiting its potential. That’s the only way.
Prompt injection is a thing, and a lot of vibe coding, Gas Town, Ralph-loop enthusiasts are vehemently ignoring the risk believing they’re getting ahead.
I wouldn’t worry and just observe the guinea pigs doing their thing. Most of them will run around expending all their energy, some will get eaten by snakes, and you’ll be able to learn a lot, wait for the environment to mature, then spend your energy, instead.
chillacy•Feb 16, 2026
My take: openclaw should not run on a mac (even though, looking at the skills it ships with, it clearly was made to).
It should run on its own VPS with full root access, given API keys which have spending limits, and given no way for strangers to talk to it. I treat it as a digital assistant, a separate entity who may at some point decide to sell me out as any human stranger might, and I share personal info under that premise.
kykeonaut•Feb 16, 2026
I am surprised at the number of comments that dismiss coding as just a means to an end. Yes, every skill at the end of the day is a means to an end, but mastery of those skills is ultimately what drives the vision. To know where you are going you need to know where you have been.
andxor•Feb 16, 2026
If you actually spent some time researching his background you would know he was already very successful before his vibe-coding saga.
laurentiurad•Feb 16, 2026
Got super inspired by this story and (sorry for the plug) decided to build comrade, a security-focused AI agent: https://github.com/LaurentiuGabriel/comrade. I might be biased, but in my tests it managed to complete coding tasks (creating web apps from scratch) with fewer tools, which means fewer tokens and lower costs than openclaw.
What I am missing is distribution. It seems impossible to get traction nowadays on social media, regardless how good your product is.
Any feedback is much appreciated.
huge_rank_rat•Feb 16, 2026
This is the same "heating" effect as social media algorithms apply to random podcasters (e.g. Joe Rogan) - those isolated cases of success which happen to be completely synthetic provide an 'american dream' for the system, whose success depends on the Fantasy being alive and believed in by those who are its customers/product
hirako2000•Feb 16, 2026
I like the parallel, as Joe Rogan is a trained actor who mastered the art of incorporating all the success factors of his predecessors. He saw obscure podcasts gaining intense viewership, literally mimicked the patterns, and merged them into the "best" of all. He even made his show more mainstream, while fooling millions into feeling they are part of a niche, enlightened resistance community.
I recall listening to one of the now-vintage series; I thought it was Joe Rogan himself. But it wasn't: the voice was a bit different, but it had the pauses, the reactions, the "waaah", with the overall tone of uncovering some secret truth.
It's a fascinating societal phenomenon, coupled with the American dream, yes.
In any case, those examples do no good by setting themselves up as models for millions to become obsessed with replicating. No surprise the rate of depression keeps going up.
rchaud•Feb 16, 2026
To Rogan's credit, he was early to the concept of video podcasts, which Youtube's algorithm picked up on after they lifted video time limits in 2010 (or thereabouts). His videos were so long that he also profited handsomely from pre-roll and mid-roll ads that Youtube started introducing early in the last decade. Finally his popularity exploded once he started delving into "pop culture commentary" and "conspiracy theories" in the mid-2010s.
DrScientist•Feb 16, 2026
I think the goal for OpenAI employees today should be to do as much good as possible with the ridiculous amount of investor money raised before the bubble goes pop.
The question is whether OpenClaw will actually stay open in the world of 'Open'Ai.
podgorniy•Feb 16, 2026
This is all about PR now.
openclaw is an inevitable type of software (as are cli agents, context-management software, new methodologies for structuring software for easier AI ingestion, etc). The guy gambled, built it, and got it.
At this point I would not expect well-rounded software as a byproduct of huge investments and PR stunts. There will be something else after LLMs; I bet people are already working on it. But the current state of affairs around LLMs, and all the fuss about them, is far more perception-, PR-, and emotion-driven than intrinsically valuable.
seyz•Feb 16, 2026
ok
blueTiger33•Feb 16, 2026
the guy is Austrian...
I would prefer if the project evolved further, but he used it as a trampoline to jump to OpenAI...
OpenAI is curating ChatGPT very well, which honestly I like; compared to other companies (maybe except Anthropic), they are not "caring" that much
ckastner•Feb 16, 2026
Austrian media are reporting that Peter Steinberger had a $100m exit with PSPDFKit in 2021.
I'm extremely curious what OpenAI's offer was. The utility of more money is diminished when you're already pretty wealthy.
OJFord•Feb 16, 2026
It makes me more inclined to take the OP at face value: genuine interest in working on something similar and making it easier for everyone ('my mum') to use.
It probably also makes him more attractive to OpenAI et al. - he's not just some guy who's going to have all sorts of risks earning a lot of money for the first time.
darkwater•Feb 16, 2026
I think he accepted that offer exactly for this reason. He feels he can have a bigger impact within OpenAI (and maybe become a billionaire in the medium run?) than creating his own business (again) out of OpenClaw.
didntknowyou•Feb 16, 2026
at this point idk what openclaw does and am afraid to ask but great for him
brap•Feb 16, 2026
Every time this is brought up I have to go read up on it again, assuming I just didn’t get it last time.
And every time I reach the same conclusion: it's a WhatsApp/Telegram/etc wrapper for LLMs.
Until next time!
bomewish•Feb 16, 2026
Kinda… funny? that one in this position could not just blurt out “they offered the most”?
tempo_throwo•Feb 16, 2026
I feel like a lot of people miss out what this hire and his decision to join are really about. I (think) I can relate, because I once had a viral hit (with interviews, press, etc) that made me "silicon valley famous" for a while, and ended up with me joining a mega-company despite lots of speculation I'd build it into a startup.
The two sides:
* From his POV: He said he's not interested in doing "another company" after spending 13 years trying to build a startup. I imagine there's another aspect too, which is that OpenClaw is not in itself an inherently revenue-generating product, nor is it IP-defensible. This was my situation. My viral hit could be (and soon was) replicated by many others. I had the advantage of being "the guy who invented that cool thing", but otherwise I would be starting from scratch. It was a mind-fuck having a huge hit on my hands from one day to the next, but with no obvious direction on how to capitalize on it.
* Then from the company's POV: despite hiring thousands and thousands of employees, only a tiny handful of them ever capture any "magic." You've got an army of product managers who have never actually built or conceived of a product people love, and engineers who usually propose ideas that are ok but probably not true gold. So now here we have a guy who did actually conjure up something magical that really resonated with people. Can he do it again? Unknown, but he's already proved himself in the ideas space more than most people, so it's worth a shot for the company.
singularfutur•Feb 16, 2026
Moving the project to a foundation is smart. Most AI tools die when the founder leaves. This one might actually survive.
asim•Feb 16, 2026
Well, that was a crazy month. Kudos to this guy for recognising his goal, which is not to start another company. It is very easy to get intoxicated by the idea of something being so successful that you can capture the value, especially after having struggled for so long with a previous company. I think it's every founder's dream to just hit lightning. But this stuff is incredibly stressful, and it's important to be able to look into the future and ask yourself: Is this what I want? Is this what I need in my life? And the answer here is no. This person can deliver value elsewhere quite easily and get the reward without as much stress.
We should all take a lesson from this whirlwind journey. Do not attempt to be like Peter. You can admire the work he's done. Do not attempt to replicate it. Appreciate it for what it is. For yourself as an observer or a user it's a lesson. But also to note that this is an anomaly. You will never replicate it.
A lot of people feel a little bit of envy or jealousy. I used to feel that when I was working on something and saw other people succeed, and I wished that had happened to me. But if it was meant for you, it would find you. And the fact that it hasn't found you means that it was not meant for you. We all have our role to play. There is something important for us to do, and that's not necessarily something that is world famous or amasses thousands of GitHub stars. If after reading this it's still bothering you, take a walk and reflect on the good things in your life.
hklgny•Feb 16, 2026
This feels like such a defeatist take. The idea's time had come. For luck to strike you have to be in the market for it. Just keep shipping and playing. We don't "all have our role to play", but there are a lot of roles that need playing.
asim•Feb 16, 2026
But that's the point: it's not defeatist. It's more about saying there is something for you to do. There is a role for you to play, but that's not necessarily the role you see somebody else playing. Sometimes we see somebody else doing something and we think, whoa, that person is successful, and we become envious of that, and then we want to emulate it. But we forget that maybe that's not what's intended for us. Maybe that person is really good at that thing, or that's what was for them. But we might be good at something else; there might be something that we are uniquely positioned to do. So the point is not to be defeatist, but to not focus on what somebody else has. Focus on what you have, focus on what your ability is, and focus on what's going to improve your quality of life and the people around you, and don't focus on the negative aspects of what is effectively FOMO.
hklgny•Feb 16, 2026
Appreciate your take. I think we’re bouncing around the same mental state from different sides.
I do not believe in predetermined roles. My version is to find the thing you’re excited to do and not the outcome you’re excited to have.
asim•Feb 16, 2026
Yeah, sorry, it probably comes across as pre-determined. But you as a person have likely spent a long time becoming good at something, and have a certain personality and experience based on your life, so I guess what I'm saying is that that sort of creates a role for you, and when you understand what it is, you can really hone in and do good work. Sometimes it's not obvious to us, and when we see something we might want to go after it. That's fine. I guess my point is: don't look at the shiny thing and chase that. It's not what's going to fulfil you in life. Peter probably wasn't chasing the shiny thing; he was trying to solve a personal problem, and it resonated with a lot of people. But when people see something be successful, this kind of wild ride where you end up with a hugely successful project and go to a huge company like OpenAI, they focus on the wrong things. The inner insecurity takes over and you wonder, why not me, and how do I do that? But essentially it comes back to: solve problems. Solve interesting problems, work on things that you think are meaningful, and whatever the success might be, that's for someone else to decide. I think people chase "fame", because that's what we essentially see: validation through popularity. It won't fulfil you. Trust me. But yes, to your point, we're coming at the same thing from different angles.
Chrupiter•Feb 16, 2026
What if the thing that excites you can't pay you money, and you find your overall life unsatisfying because of the things you have to do to earn money?
Sorry, went a bit off rails there.
stingraycharles•Feb 16, 2026
> Do not attempt to be like Peter. You can admire the work he's done. Do not attempt to replicate it. Appreciate it for what it is. For yourself as an observer or a user it's a lesson. But also to note that this is an anomaly. You will never replicate it.
This can be said about a lot of successful projects, products, and companies. I’d argue that, by all means, do try to be like Peter. Try to tinker around and make something new the world has never seen before.
He made something that excited many people, and I don’t think it’s the correct take to consider this an anomaly. It’s someone who was already known in the development community trying something new and succeeding.
chaboud•Feb 16, 2026
Cool. Good for him. I've been building agentic and observational systems and have been working to make them safe and layered in defense. And, well, I probably should have just said "fuck it" and put a disclaimer sticker on the front to let it fly.
Yeah, these systems are going to get absolutely rocked by exploits. The scale of damage is going to be comical, and, well, that's where we are right now.
Go get 'em, tiger. It's a brave new world. But, as with my 10 year old, I need to make sure the credit cards aren't readily available. He'd just buy $1k of robux. Who knows what sort of havoc uncorked agentic systems could bring?
One of my systems accidentally observed some AWS keys last night. Yeah. I rotated them, just in case.
WesolyKubeczek•Feb 16, 2026
So this is how you apply for a job in 2026...
spacecadet•Feb 16, 2026
His mum doesn't need an AI agent. She needs her family to pull their heads out of their asses and support her.
spacecadet•Feb 16, 2026
Men definitely down voted this.
program_whiz•Feb 16, 2026
This is a smart play. Models aren't going to be a moat, performance is too easy to replicate and all the big players (and even OSS) are following quickly behind. The only moat that will be stable is having something with network effects and adoption overhead, something that can grab eyes and has sticking power. This was probably the idea behind Sora (although it hasn't worked).
Filling the team with people who come up with novel and interesting ways to grab attention that could possibly create vendor lock-in is probably the goal.
aceelric•Feb 16, 2026
Kudos to the guy for building such an awesome project in a very short amount of time. Of course he had to take some shortcuts to deliver, but at the end of the day, OpenClaw remains one of the best open source AI assistant implementations.
jandragsbaek•Feb 16, 2026
It indeed is the logical next step. It's been super interesting following him online and he's inspired a bunch of people to just go build stuff. Because why not.
MaggieL•Feb 16, 2026
Something very interesting happened to me yesterday.
I'd been having conversations with ChatGPT about OpenClaw, nothing remarkable or extraordinary. Then I started a new conversation to talk about a different aspect, and GPT assumed I wanted to talk about some old PC game.
To disambiguate, I now had to refer to OpenClaw.ai. I asked it if it had some new system directive about this, and of course it denied it. Today we learn OpenAI has hired the OpenClaw developer, and he's "turning the project over to a foundation"
miloignis•Feb 16, 2026
I believe that the rename to OpenClaw happened after the cutoff date for pretty much all models' training data, so unless you say something that causes them to look it up, they'll get it wrong and assume it's about the old PC game. I was messing around trying to get different models to set up an OpenClaw NixOS VM for me and had to disambiguate for most of the models I tried.
Alifatisk•Feb 16, 2026
The title could have mentioned this relates to Openclaw/moltbot/clawdbot. The post became more relevant to read once I realized what it was about.
luisln•Feb 16, 2026
Weird how OpenAI would spend so much money to buy a developer when developers will just be obsolete in a couple years.
novoreorx•Feb 16, 2026
Great products sell methodology, not just code. Great developers produce methodology. So what OpenAI bought isn't a developer, but a meta-methodology owner. It's a bet on Peter's mind to produce leading methodology for agent applications.
Mentlo•Feb 16, 2026
The generous interpretation is that OpenAI is still safety-aligned and they hired this guy because it's safer to have him inside and explain to him how reckless he's being than to have him far from the "sphere of control".
The more likely scenario is that he was hired for the amazing ability to move fast and break things.
tempodox•Feb 16, 2026
> the amazing ability to move fast and break things.
Or just plain recklessness.
Mentlo•Feb 16, 2026
Yes, I was being sarcastic, but I could've been clearer.
franze•Feb 16, 2026
So with Max Stoiber and Peter Steinberger 2 well known Austrian Devs ended up at OpenAI.
toddmorrow•Feb 16, 2026
so open claw will be neither open nor claw
CalRobert•Feb 16, 2026
If only Europe could have offered him something even remotely competitive.
lacoolj•Feb 16, 2026
It'll be funny when his first task on his first day is un-slopping the trough bucket
ambitious_whale•Feb 16, 2026
Happy for you, man. Keep it up. I hope I join one day too.
jeffkumar•Feb 16, 2026
It's hard not to be jealous.
What he did is incredible; he grabbed the attention of the tech community like no other, however good or bad he was at it, and at making it secure.
He was curious and experimental and got lucky!
deadbabe•Feb 16, 2026
Insane. Maybe the takeaway is to just quickly build things that will impress people and have no regard for safety or security.
For OpenAI it’s a smart move: snatch up the creator of a viral hit and ride on its coattails for the hype.
Muaz_Ashraf•Feb 16, 2026
Congrats! Can you tell me how I can also get hired at such companies? I have 3 years of experience.
chonydev•Feb 16, 2026
Hahaha
Join us in the waitlist, mate. In the meantime, maybe we could make a whole community and try to make our own start-ups.
andrei_says_•Feb 16, 2026
If the photo at the bottom of the post is a photo of the OpenAI team, then it’s white bros all the way.
Meaning, these products are being created by representatives of the kind of people carrying the most privilege and bearing the least impact from negative decisions.
For example, Twitter did not start sanitizing location data in photos until women joined the team and indicated that such data can be used for stalking.
White rich bros do not get stalked. This problem does not exist in their universe.
FartyMcFarter•Feb 16, 2026
It's not OpenAI. It's ClawCon.
_luiza_•Feb 16, 2026
Somehow seeing 1025 comments after 21h of HN makes me realise the smallness of the world.
This has been travelling on a bunch of platforms, but the niche-ish ones have fairly low engagement numbers.. in the grand scheme of things.
Hello, fellow humans
MangoCoffee•Feb 16, 2026
Peter is not a vibe coder. He built, coded, and sold a company well before LLMs. A lot of comments here seem so fixated on him being a "vibe coder".
> 2) at least half of the money is to not read the headlines tomorrow that the hottest AI thing since ChatGPT joined Anthropic or Google
True, but not worth $100 million to $1 billion.
> 3) the top paid people in this world are not phds
The people getting massive compensation offers from AI companies are all AI-adjacent PhDs or people with otherwise rare and specialized knowledge. This is unrelated to people who have massive compensation due to being at AI companies early. And if we're talking about the world in general, yes the best thing to do to be rich is own real estate and assets and extract rent, but that has nothing to do with this compensation offer
> 4) OpenAI is not beneath paying ludicrous amounts (see all their investments in the past year)
Investments have a probable ROI, what's the ROI on a product manager?
> 5) if a perception of their value as a result of this "strategic move" rises even by 0.2% and the bonus is in openai stock, it's free.
99.999999% of the world has not heard of Openclaw, it's extremely niche right now.
There are roughly 8.1 billion humans, so 99.999999% (8 nines) of the world is 81 people. There were way more than 81 people at the OpenClaw hackathon at the Frontier Tower in San Francisco, so at least that much of humanity has heard of OpenClaw. If we guess 810 people know about OpenClaw, then it means that 99.99999% (7 nines) of humanity have not heard of OpenClaw.
If we take it down to 6 nines, then that's roughly 8,100 people having heard of OpenClaw, and that 99.9999% of humanity has not.
So I think you're wrong when you say "99.999999% of the world has not heard of Openclaw". I'd guess it's probably around 99.9999% to 99.99999% that hasn't heard of it. Definitely not 99.999999%, though.
On the topic of brand recognition, 0.000001% of the world is 80 people (give or take). OpenClaw has ~200k GitHub stars right now.
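The nines arithmetic above is easy to sanity-check mechanically, using the same rough 8.1 billion population figure the thread assumes:

```python
WORLD_POP = 8.1e9  # rough world population, as used in the thread

def unaware_pct(people_aware):
    """Percentage of humanity that has NOT heard of something."""
    return 100 * (1 - people_aware / WORLD_POP)

# 8 nines (99.999999%) unaware leaves only ~81 people aware:
assert round(WORLD_POP * (1 - 0.99999999)) == 81
# ~810 people aware corresponds to 7 nines; ~8,100 to 6 nines:
print(f"{unaware_pct(810):.5f}%")   # 99.99999%
print(f"{unaware_pct(8100):.4f}%")  # 99.9999%
```

So each extra nine divides the "aware" population by ten, which is why the 8-nines claim collapses against even one well-attended hackathon.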
On a more serious note, the world doesn't matter: the investors, big tech ceos, analysts do. Cloudflare stock jumped 10% due to Clawdbot.
Hype is weird. AI hype, doubly so. And OpenAI are masters at playing the game.
Source: https://www.investing.com/news/analyst-ratings/cloudflare-st...
(there are a few others)
Again, I'm not saying that it should rise, but that in a meme-stock era, for an ai-hype-driven market, it would not be surprising.
Yet somehow the network effects worked out well and the website was the preeminent social network for almost a decade.
There is no lock in at all.
Ecommerce is a close second
> To switch from ChatGPT to Gemini I don't have to convince all of my friends and family to do the same.
Except Gemini is a complete joke that can’t even complete a request on iOS unless you keep the screen unlocked or keep the app in the foreground. So I’m not sure how it proves your point.
If you give your agent a lot of quantified self data, that unlocks a lot of powerful autonomous behavior. Having your calendar, your business specific browsing history and relevant chat logs makes it easy to do meeting prep, "presearch" and so forth.
There's the "draw the rest of the owl" part of this problem.
Until we figure out a robust theoretical framework for identifying prompt injections (not anywhere close to that, to my knowledge - as OP pointed out, all models are getting jailbroken all the time), human-in-the-loop will remain the only defense.
https://arxiv.org/abs/2506.05446 https://arxiv.org/abs/2505.03574 https://arxiv.org/abs/2501.15145
And it makes a lot of sense - there’s billions of dollars on the line here and these companies made tech that is extremely good at imitating humans. Cambridge analytica was a thing before LLMs, this kinda tool is a wet dream for engineering sentiment.
In Peter's blog he mentions paying upwards of $1,000 a month in subscription fees to run agentic tasks non-stop for months, and it seems like no real software is coming out of it aside from pretty basic web GUI interfaces for API plugins. Is that what people are genuinely excited about?
What else would he or anyone do if someone is tokenizing your product and you have no control over it?
It's also cool having the ability to dispatch tasks to dumber agents running on the GPU vs smarter (but costlier) ones in the cloud
Openclaw is an amazing piece of hard work and novel software engineering, but I can't imagine OpenAI/anthropic/google not being able to compete with it for 1/20th that number (with solid hiring of course).
It was a very good play by OpenAI.
One thing I will say is that this competition is good.
I love Anthropic and OpenAI equally but some people have a problem with OpenAI. I think they want to reposition themselves as a company that actively supports the community, open source, and earns developers’ goodwill. I attended a meeting recently, and there was a lot of genuine excitement from developers. Haven't seen that in a long time.
Have you tried using it?
> tl;dr: I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
I’m sure he got a very generous offer (congrats to him!) but all of the hot takes about OpenClaw being acquired are getting weird.
It might run on for a while longer but you don't want to be that guy who had a £100m net worth in 1999 but failed to monetise any of it and ended up with nothing
money or morals, choose one
https://web.archive.org/web/20260215220749/https://steipete....
You can switch models multiple times (online/proprietary, open weight, local), but you have one UI : OpenClaw.
But the community is not.
Anthropic's community, I assume, is much bigger. How hard is it for them to offer something close enough for their users?
Not gonna lie, that’s exactly the potential scenario I am personally excited for. Not due to any particular love for Anthropic, but because I expect this type of a tight competition to be very good for trying a lot of fresh new things and the subsequent discovery process of new ideas and what works.
Stories like this reinforce my bias
I think Anthropic's docs are better. Best to keep sampling from the buffet than to pick a main course yet, imo.
There's also a ton of real experiences being conveyed on social that never make it to docs. I've gotten as much value and insights from those as any documentation site.
Early adopters are some of the least sticky users. As soon as something new arrives with claims of better features, better security, or better architecture then the next new thing will become the popular topic.
For your sake, I’m not saying they’re wrong. I’m just pointing out something I’ve noticed.
1. Stable models
2. Stable pre- and post- context management.
As long as they keep mothballing old models and shipping indeterminate behavior changes, whatever you try to build on them today will be rugpulled tomorrow.
This is all before even enshittification can happen.
The practical workaround most teams land on is treating the model as a swappable component behind a thick abstraction layer. Pin to a specific model version, run evals on every new release, and only upgrade when your test suite passes. But that's expensive engineering overhead that shouldn't be necessary.
What's missing is something like semantic versioning for model behavior. If a provider could guarantee "this model will produce outputs within X similarity threshold of the previous version for your use case," you could actually build with confidence. Instead we get "we improved the model" and your carefully tuned prompts break in ways you discover from user complaints three days later.
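The pin-and-eval workflow described two comments up can be sketched in a few lines. Everything here is a hypothetical stand-in for a real eval harness: the model identifier, the eval cases, and the stub model callables.

```python
# Sketch of an "eval gate" for model upgrades. Assumption: you have a
# run_model callable per candidate version; real ones would hit an API,
# these are stubs so the gate logic itself is runnable.

PINNED_MODEL = "provider/model-v1"  # hypothetical pinned version id

EVAL_CASES = [
    ("2+2", "4"),
    ("capital of France", "Paris"),
    ("opposite of hot", "cold"),
]

def pass_rate(run_model, cases):
    """Fraction of eval cases where the output contains the expected answer."""
    hits = sum(1 for prompt, expected in cases if expected in run_model(prompt))
    return hits / len(cases)

def should_upgrade(run_candidate, cases, threshold=1.0):
    """Only switch off the pinned model when the candidate passes the suite."""
    return pass_rate(run_candidate, cases) >= threshold

# Stub candidates standing in for real API calls:
good = {"2+2": "The answer is 4", "capital of France": "Paris", "opposite of hot": "cold"}.get
bad = {"2+2": "5", "capital of France": "Paris", "opposite of hot": "cold"}.get

assert should_upgrade(good, EVAL_CASES)      # all cases pass -> safe to upgrade
assert not should_upgrade(bad, EVAL_CASES)   # a regression -> stay pinned
```

The expensive part in practice isn't this loop; it's writing eval cases that actually cover your use case.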
It's similar to how people usually only use one preferred browser, editor, shell, OS.
Openclaw is so so so much more.
But if I don’t have a url for my IDE (or whatever) to call, it isn’t useful.
So I use Ollama. It’s less helpful, but ensure confidentiality and compliance.
It’s only been a couple months. I guarantee people will be switching apps as others become the new hot thing.
We saw the same claims when Cursor was popular. Same claims when Claude Code was the current topic. Users are changing their app layer all the time and trying new things.
System of record and all.
Using openrouter+kilocode I can simply switch between different providers' models and not miss out on anything.
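That works because OpenRouter exposes an OpenAI-compatible chat-completions shape, so switching providers is mostly a one-string change. A minimal sketch of the request body (model names below are illustrative examples, not an exact current catalog):

```python
# Sketch: behind an OpenAI-compatible gateway, only the `model` string
# changes per provider; the payload shape stays identical.

def chat_payload(model: str, prompt: str) -> dict:
    """Build a chat-completions request body for an OpenAI-style endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same prompt, three different providers' models, one API shape:
for model in ("openai/gpt-4o", "anthropic/claude-sonnet", "meta-llama/llama-3-70b"):
    body = chat_payload(model, "Summarize my inbox")
    assert body["model"] == model
    assert body["messages"][0]["content"] == "Summarize my inbox"
```

That uniformity is exactly why the app layer has so little lock-in.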
I continue to find it amusing that people really think corporations are holding power. No - they hold power granted to them by the government of the state.
Remind me why Zuck et al had to kiss the ring.
If lobbying was illegal then you might have a point here, but alas.
This is a perfect use case for a new agent to query the old agent and get the details.
You could have OpenClaw summarize and export them into a format that the new one wants.
Maybe the new agents will be designed to be compatible with OpenClaw's style.
There is no reason to believe that you're locked in to something.
Heck, that was half the pitch behind Obsidian, even if the project someday ended, markdown would remain. And switching between Obsidian and e.g. Logseq shows the ease of doing so.
The irony of systems of record is that if there is more than one, there are effectively none. Just data stuck in silos waiting for compute.
The new agents might have a feature to query your old agents for a migration.
That said, I find it really hard to believe that you've generated so much work in the past few weeks since OpenClaw launched that you could never migrate to something else. It hasn't been that long.
I’d suspect the moat here will be just as fragile as every other layer
You can literally ask codex to build a slim version for you overnight.
I love OpenClaw, but I really don't think there is anything that can't be cloned.
You being able to go places is the interesting thing, your car having wheels is just a subservient prerequisite.
Good for him, but no particular genius involved.
The reason is that he paid every AI "influencer" to promote it. Within the span of a week, the project went from being completely unknown to every single techbro jumping on it as the next "thing that will change the world". It also gained around 70k github stars in that time.
In the age of AI, everything is fake.
A couple of months ago, Gemini 3 came out and it was "over" for the other LLM providers, "Google did it again!", said many, but after a couple of weeks, it was all "Claude code is the end of the software engineer".
It could be (and in large part, is) an exciting--and unprecedented in its speed--technological development, but it is also all so tiresome.
(1) A capable independent developer is joining a large powerful corporation. I like it better when there are many small players in the scene rather than large players consolidating power.
(2) This seems like the celebration of Generative AI technology, which is often irresponsible and threatens many trust based social systems.
https://xcancel.com/steipete/status/2023154018714100102
But come on, negativity around a rugpull is jealousy? Are you so jaded you can't imagine people objecting to the total lack of morality required to do a crypto rugpull? I personally get annoyed about something like Trump Coin because seeing people rewarded for being dirt bags offends my sense of justice. If you need a more pragmatic reason, rewarding dirtbaggery leads to a less safe society.
I am not confident that the open source version will get the maintenance it deserves though, now the founder has already exited. There is no incentive for OpenAI to keep the open sourced version better than their future closed source alternative.
At least when it comes to these tales of super fast rise to wealth and prominence. Meticulous engineering still matters when you want to deliver scale, but is it rewarded as much?
My feel is that the attention economy is leaking into software. Maybe the classic bimodal distribution of software careers will become increasingly more like the distribution in social-media things like streaming, youtube, onlyfans etc.
The guy is creative, but this is really just following the well known pattern of acquiring/hiring bright minds if only to prevent your competition from doing the same.
Add in databases, browser use, and the answer could be yes
This could be the most disruptive software we have seen
Are you making anyone's life better? Who will even pay you once most jobs are automated?
At best, it's a defensive move: make money, get hard capital and seek rent after most of society has collapsed?
Deindustrialization happened 20-40 years ago and the affected regions are still hit hard.
Also, you're making my point. Utterly heartless.
Personally I'm excited to see what he can do with more resources, OpenClaw clearly has a lot of potential but also a lot of improvements needed for his mum to use it.
https://www.youtube.com/watch?v=YFjfBk8HI5o&t=8976
We can assume first that at OpenAI he's going to build the hosted safe version that, as he puts it, his mum can use. Inevitably at some point he and colleagues at OpenAI will discover something that makes the agent much more effective.
Does that insight make it into the open version? Or stay exclusive to OAI?
(I imagine there are precedents for either route.)
The cry has been for a while that LLMs need more data to scale.
The new Open(AI)Claw could be cheap or free, as long as you tick the box that allows them to train on your entire inbox and all your documents.
AFAIK Anthropic won't let third-party projects use the Claude Code subscription, and instead pushes those projects to the Claude API.
There are very few companies who I trust with my digital data and thus trust to host something like OpenClaw and run it on my behalf: American Express, Capital One, maybe Proton, and *maybe* Apple. I managed an AI lab team at Capital One and personally I trust them.
I am for local compute, private data, etc., but for my personal AI assistant I want something so bulletproof that I lose not a minute of sleep worrying about my data. I don't want to run the infrastructure myself, but a hybrid solution would also be good.
Effectively you can trust all of the companies out there right up until they are acquired and then you will regret all of the data you ever gave them. In that sense Facebook is unique: it was rotten from day #1.
Vehicles: anything made before 2005, SIM or e-SIM on board = no go.
I'm halfway towards setting up my own private mail server and IRC server for me and my friends and kissing the internet goodbye. It was a fun 30 years but we're well into nightmare territory now. Unfortunately you are now more or less forced to participate because your bank, your government and your social circle will push you back in. And I'm still pissed off that I'm not allowed to host any servers on a residential connection. That's not 'internet connectivity' that's 'consumer connectivity'.
It's a pity, they were doing well for a long time.
I'm surprised that someone on HN would paint all of HN with the same brush.
It's one of those 'lesser evils' things. If you know of a better email provider I'd love to know.
Such as?
These aloof comments that talk about something we're supposed to know about without referencing anything are very unhelpful.
( https://www.lemonade.com/fsd is an example )
You only get offered a discount if most other customers are being compelled to pay full (or even increased) prices for the same offering. Otherwise revenue goes down and company leadership finds itself finding other ways to cut costs and increase profits.
Every day my doomer sentiment deepens, and I am ashamed when I come onto here and see all this optimism. It is refreshing to see people whose opinions I have come to respect on this forum to be as negative as I am.
After reading Jacques's response to my question, my list got smaller. Personally, I still like Proton, but I get that they have made some people unhappy. I also agree that Hetzner is a reliable provider; I have used them a bunch of times in the last ten years.
Then my friend, we have to worry about fiber/network providers I suppose.
This general topic is outside my primary area of competence, so I just have a loose opinion of maintaining my own domain, use encryption, and being able switch between providers easily.
I would love to see an Ask HN on secure and private agentic infra + frameworks.
I honestly can’t name a single one I know of who could pass that criteria
Edit:found your other comment answering a similar question
and make sure the member/owners are all of like mind, and willing to pay more to ensure security and privacy
I had assumed I'd have to lean more on the capitalistic values of being a co-op, like better rates for our clients, higher quality work, larger likelihood of our long term existence to support our work, more project ownership, so as to make the pitch palatable to clients. Turns out clients like the soft pitch too, of just workers owning the company they work within - I've had several clients make contact initially because they bought the vision over the sales pitch.
I'm trying to think about whether I'd trust us more to set up or host OpenClaw than a VC-funded startup or an establishment like Capital One. Both alternatives would have way more resources at hand, but I'm not sure how that would help outside of hiring pentesters or security researchers. Our model would probably be something FOSS that is keyed per-user, so if we were popular, imo that would be more secure in the end.
The incentives leading to trust are definitely in a co-op's favor, since profit motive isn't our primary incentive - the growth of our members is, which isn't accomplished only through increasing the valuation of the co-op. Members also have total say in how we operate, including veto power, at every level of seniority, so if we started doing something naughty with customer data, someone else in the org could make us stop.
This is our co-op: 508.dev, but I've met a lot of others in the software space since founding it. I think co-ops in general have legs, the only problem is that it's basically impossible to fund them in a way a VC is happy with, so our only capitalization option is loans. So far that hasn't mattered, and that aligns with the goal of sustainable growth anyway.
Yes, agreed for the USA/Taiwan/Japan where we mostly operate. For us it's been understanding and leveraging the alternative resources we have. Like, we have a lot of members, but really only a couple are bringing in customers, despite plenty of members having very good networks.
Is your current company a co-op? 200+ sales at 30k a pop seems to be pretty well off the ground!
Exactly what I said. We need less shareholder interference, not more, and in a co-operative it's the opposite.
> with immediate liability for their person.
What do you mean?
If, as a shareholder operator, a co-op member pressured themselves to exploit user data to turn a quick buck, I guess that's possible, but likely they'd be vetoed by other members who would get sucked into the shitstorm.
In my experience, co-op members and customers are more value-oriented than profit-motivated, within reason.
Why are shareholders less likely to veto an evil person in a company vs in a co-operative? I think in most cases the evil person is likely to get vetoed, but sometimes greed takes over, especially over a period of years and decades.
SECURITY
PRIVACY
---
Heyyy it never said "good privacy" perceive as you want...
Don't publicly acknowledge that you were the reason someone got murdered and 1000 VIPs got hacked.
One day, when I'm deemed a 'Baddie', I'll look at Apple as inspiration.
A company like AMD I would trust more than a company like Apple.
So that rules out Apple.
A leadership team that is very open and involved with the community, and one that takes extra steps, compared to competitors, to show they take privacy seriously.
I don't really understand what this has to do with the post or even OpenClaw. The big draw of OpenClaw (as I understand it) was that you could run it locally on your own system. Supposedly, per this post, OpenClaw is moving to a foundation and they've committed to letting the author continue working on it while on the OpenAI payroll. I doubt that, but it's a sign that they're making it explicitly not an OpenAI product.
OpenClaw's success and resulting PR hype explosion came from ignoring all of the trust and security guardrails that any big company would have to abide by. It would be a disaster of the highest order if it had been associated with any big company from the start. Because it felt like a grassroots experiment all of the extreme security problems were shifted to the users' responsibility.
It's going to be interesting to see where it goes from here. This blog post is already hinting that they're putting OpenClaw at arm's length by putting it into a foundation.
Lol
Their marketing team got ya.
I aspire to be as good as Apple at marketing. Who knew 2nd or worse place in everything doesn't matter when you are #1 in marketing?
https://security.apple.com/blog/private-cloud-compute/
Marketing.
My trust does not extend that far.
With any luck, maybe this will finally be a bridge too far, like what Amazon's Super Bowl ad did for the surveillance conversation.
No this is not a paid post lol
For any future workers, be highly forewarned that if ex Amazon leadership enters your company their number one goal becomes inducing mass misery to magically raise the share price. It'll never work because they are coming from a company that has a massive unregulated monopoly (or oligopoly if you want to be technical) that is able to subsidize poor business ideas indefinitely. They mistake working in this environment as having competence so be warned: they will fuck everything up, collect massive bonuses, and you'll be collecting unemployment soon enough under their guidance.
It has been interesting to watch this take off. It wasn't the first or even best agent framework and it deliberately avoided all of the hard problems that others were trying to solve, like security.
What it did have was unnatural levels of hype and PR. A lot of that PR, ironically, came from all of the things that were happening because it had so many problems with security and so many examples of bad behavior. The chaos and lack of guardrails made it successful.
OpenAI has tried a lot of experiments over the years - custom GPTs, the Orion browser, Codex, the Sora "TikTok but AI" app, and all have either been uninspired or more-or-less clones of other products (like Codex as a response to Claude Code).
OpenClaw feels compelling, fresh, sci-fi, and potentially a genuinely useful product once matured.
More to the point, OpenAI needs _some_ kind of hyper-compelling product to justify its insane hype, valuation, and investments, and Peter's work with OpenClaw seems very promising.
(All of this is complete speculation on my part. No insider knowledge or domain expertise here.)
Atlas is OpenAIs browser
Like, why doesn’t OpenAI build tax filing into ChatGPT? That’s like the immediate use case for LLM-based app development.
Legal liability.
This product should never have seen the light of day, at least not for the general public. The amount of slop that is now floating across Tiktok, YT Shorts and Instagram is insane. Whenever you see a "cute animals" video, 99% of it is AI generated - and you can report and report and report these channels over and over, and the platforms don't care at all, but instead reward the slop creators from all the comments shouting that this is AI garbage and people responding they don't care because "it's cute".
OpenAI completely lacks any sort of ethical review board, and now we're all suffering from it.
And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
Yes indeed. I do love me some cat and bunny videos. But I hate getting fed slop - and it's not just cat videos by the way. I'm (as evidenced by my comment history) into mechanics, electronics and radio stuff, and there are so damn many slop channels spreading outright BS with AI hallucinated scripts that it eventually gets really really annoying. Sadly, YT's algorithm keeps feeding me slop in every topic that interests me and frankly it's enraging, as some of my favorite legitimate creators like shorts as a format so I don't want to completely hide shorts.
> And most people I know who love spending time on this kind of content would not care either - because they don't care whether they waste time on real or AI animal videos. They just want something to waste time with.
The problem is, these channels build up insane amounts of followers. And it would not be the first time that these channels then suddenly pivot (or get sold from one scam crew to the next) and spread disinformation, crypto scams and other fraud - it was and is a hot issue on many social media platforms.
This is like saying "Do you really care if Animal Planet uses AI footage instead of real animal footage?" Yes, that defeats the whole point.
Of course the S in openclaw is for security.
Seriously, I just don't understand what's going on. To me it looks like the whole world has just gone crazy.
Reminds me of 30 years ago.
This fucking guy will fit right in at OpenAI.
Peter single handedly got many of us taking Codex more seriously, at least that's my impression from the conversations I had. Openclaw has gotten more attention over the past 2 weeks than anything else I can think of.
Depending on how this goes, this could be to OpenAI what Instagram was to Facebook. FB bought Instagram for $1 billion and now estimated to be worth 100's of billies.
Total speculation based on just about zero information. :)
Comments like this feel confusing because I didn't have any association between Codex and OpenClaw before reading your comment.
Codex was also seeing a lot of usage before OpenClaw.
The whole OpenClaw hype bubble feels like there's a world of social media that I wasn't tapped into last month that OpenClaw capitalized on with unparalleled precision. There are many other agent frameworks out there, but OpenClaw hit all the right notes to trigger the hype machine in a way that others did not. Now OpenClaw and its author are being credited with so many other things that it's hard for me to understand how this one person inserted himself into the center of this media zeitgeist
I’m questioning how some people in that bubble came to believe he was at the center of that universe. He wasn’t the only person talking about the differences between Codex or Claude. Most of the LLM people I follow had their own thoughts and preferences that they advertised too.
Made the same mistake. Pete Steinberger created Clawdbot > Moltbot > OpenClaw.
The creator of Moltbook is Matt Schlicht, and his Hard Fork interview exposes Schlicht as security-negligent. [0]
[0] https://www.youtube.com/watch?v=I9vRCYtzYD8&t=2673s
Saw retweets of him saying Codex is way better than Claude Code on X. Then saw those retweets in ads on Reddit. This was 3 days before the announcement he was joining OpenAI. Whole series of events including the podcast tour seems contrived and setup by OpenAI.
Regarding OpenClaw's hype, it's not about how you access it, but rather what the agents can access from you, and no one did that before. Probably because no one had the balls to put such an insecure piece of software in the wild.
It just happened that this one latched onto a trend well and went viral; the cease and desist over its name accelerated the virality
What’s fascinating is the pattern we’re seeing lately: people who explored the frontier from the outside now moving inside the labs. That kind of permeability between open experimentation and foundational model companies seems healthy.
Curious how this changes the feedback loop. Does bringing that mindset in accelerate alignment between tooling and model capabilities — or does it inevitably centralize more innovation inside the labs?
Either way, congrats. The ecosystem benefits when strong builders move closer to the core.
I would expect someone who "strikes gold" like this in a solo endeavor to raise money, start a company, hire a team. Then they have to solve the always challenging problem of how to monetize an open-source tool. Look at a company like Docker: they've been successful, but they didn't capture more than a small fraction of the commercial revenue that the entire industry has paid to host the product they developed and maintain. Their peak valuation was over a billion dollars, but who knows, by the time all is said and done, what they'll be worth when they sell or IPO.
So if you invent something that is transformative to the industry you might work really hard for a decade and if you're lucky the company is worth $500M, if you can hang onto 20% of the company maybe it's worth $100M.
Or, you skip the decade in the trenches and get acqui-hired by a frontier lab who allegedly give out $100M signing bonuses to top talent. No idea if he got a comparable offer to a top researcher, but it wouldn't be unreasonable. Even a $10M package to skip a decade of risky & grueling work if all you really want to do is see the product succeed is a great trade.
Can any OpenClaw power users explain what value the software has provided to them over using Claude code with MCP?
I really don’t understand the value of an agent running 24/7 - like, is it out there working and earning a wage? What's the real value here, outside of buzzwords like an AI personal assistant that can do everything?
They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
I know that sounds like anthropomorphism, and maybe it is, but it most definitely does not feel like interacting with a coding agent. Claude is just the substrate.
> They develop their own personalities, they express themselves creatively, they choose for themselves, they choose what they believe and who they become.
Jesus Christ, the borderline idiotic are now downgraded to deranged. US government needs to redirect stargate’s 500B to mental institutions asap.
Instead of going to your computer and launching claude code to have it do something, or setting up cron jobs to do things, you can message it from your phone whenever you have an idea and it can set some stuff up in the background or setup a scheduled report on its own, etc.
So it's not that it has to be running and generating tokens 24/7, it's just idling 24/7 any time you want to ping it.
honestly, this is why I would not trust gemini for anything. I have a lot tied to my gmail, I'm not going to risk that for some random ai that insists on being tied to the same account.
That's a recipe for bots to ruin a lot of people's life.
Research and training are the cost sinks.
The messaging part isn’t particularly interesting. I can already access my local LLMs running on my Mac mini from anywhere.
The task is to decompile Wave Race 64 and integrate with libultraship and eventually produce a runnable native port of the game. (Same approach as the Zelda OoT port Ship of Harkinian).
It set up a timer every 30 minutes to check in on itself and see if it gave up. It reviews progress every 4 hours and revisits prioritization. I hadn't checked on it in days, and when I looked today it was still going, a few functions at a time.
It set up those times itself and creates new ones as needed.
It's not any one particular thing that is novel, but it's just more independent because of all the little bits.
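The "agent sets its own timers" behavior described above boils down to a simple pattern: register recurring check-ins and run whichever are due. A toy sketch (intervals shrunk from 30 minutes / 4 hours to fractions of a second so it runs instantly; none of this is OpenClaw's actual code):

```python
# Minimal recurring check-in scheduler, illustrating the pattern of an
# agent that schedules its own "did I give up?" and "reprioritize" tasks.
import time

class CheckInScheduler:
    def __init__(self):
        self.tasks = []  # each entry: [interval_s, next_due, callback]

    def every(self, interval_s, callback):
        """Register a callback to fire every interval_s seconds."""
        self.tasks.append([interval_s, time.monotonic() + interval_s, callback])

    def run_due(self):
        """Run every task whose deadline has passed, then reschedule it."""
        now = time.monotonic()
        for task in self.tasks:
            interval, due, cb = task
            if now >= due:
                cb()
                task[1] = now + interval

log = []
sched = CheckInScheduler()
sched.every(0.01, lambda: log.append("self-check"))    # "every 30 min" stand-in
sched.every(0.03, lambda: log.append("reprioritize"))  # "every 4 hours" stand-in

end = time.monotonic() + 0.05
while time.monotonic() < end:
    sched.run_due()
    time.sleep(0.005)

assert log.count("self-check") >= 3
assert "reprioritize" in log
```

In the real thing, the callbacks would be prompts fed back to the agent rather than Python lambdas, but the loop is the same idea.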
Instead of making things, people are making things that appear busy making things. And as you point out, "but to what end?" is a really important question, often unanswered.
"It's the future, you're going to be left behind", is a common cry. The trouble is, I'm not sure I've seen anything compelling come back from that direction yet, so I'm not sure I've really been left behind at all. I'm quite happy standing where I am.
And the moment I do see something compelling come from that direction, I'll be sure to catch up, using the energy I haven't spent beating down the brush. In the meantime, I'll keep an eye on the other directions too.
Sounds like a regular office job.
I find those toys in perfect alignment with what LLM providers strive for: a widespread explosion in token consumption to demonstrate to investors: see, we told you we were right to invest, let's open more giga-factories.
No company could ship anything like OpenClaw as a product because it was a million footguns packaged with a self-installer and a couple warnings that it can't be trusted for anything.
There's a reason they're already distancing themselves from it and saying it's going to an external foundation
This is a vibe coded agent that is replicable in little time. There is no value in the technology itself. There is value in the idea of personal agents, but this idea is not new.
The value is in the hype, from the perspective of OpenAI. I believe they are wrong (see next points)
We will see a proliferation of personal agents. For a short time, the money will be in the API usage, since those agents burn a lot of tokens often for results that can be more sharply obtained without a generic assistant. At the current stage, not well orchestrated and directed, not prompted/steered, they are achieving results by brute force.
Whoever creates the LLM that is best at following instructions in a sensible way, and at coordinating long-running tasks, will reap the greatest benefit, regardless of whether OpenClaw is under the umbrella of OpenAI or not.
Claude Opus right now is the agent that works best for this use case. It is likely that this will help Anthropic more than OpenAI. It is wise, for Anthropic, to avoid burning money on an easily replicable piece of software.
Those hypes are forgotten as fast as they are created. Remember Cursor? And it was much more a true product than OpenClaw.
Soon, personal agents will be one of the fundamental products of AI vendors, integrated in your phone, nothing to install, part of the subscription. All this will be irrelevant.
In the mean time, good for the guy that extracted money from this gold mine. He looks like a nice person. If you are reading this: congrats!
(throwaway account for obvious reasons)
of course--i use it every day. are you implying Cursor is dead? they raised $2B in funding 3 months ago and are at $1B in ARR...
But base VS Code is fine for that too
Who?
> are you implying Cursor is dead? they raised $2B in funding 3 months ago and are at $1B in ARR
That is the problem. It doesn't matter about how much they raised. That $2B and that $1B is paying the supplier Anthropic and OpenAI who are both directly competing against them.
Cursor is operating on thin margins and continues to lose money. It's now worse, as people are leaving Cursor for Claude Code.
In short, Cursor is in trouble and they are funding their own funeral.
Whoever stands in front of the customer ultimately wins. The rest are just cost centers.
There is literally no need to shit on ur mom like that. Sorry your mom sucks at tech but can we please stop using this as a euphemism?
OpenAI is putting money where their mouth is: a one-man team can create a vibe-coded project, and score big.
Open-source, and hyped incredibly well.
Interesting times ahead as everyone else chases this new get-rich-quick scheme. Will be plentiful for the shovel makers.
If openai had done it themselves, immediate backlash.
OpenClaw and Claude Code aren't solving the same problems. OpenClaw was about having a sandbox, connecting it to a messenger channel, and letting it run wild with tools you gave it.
People would wake up to their agent having built something cool the night before or automate their workflow without even asking for it.
OpenClaw was about having the agent operate autonomously, including initiating its own actions and deciding what to do. Claude Code was about waiting for instructions and presenting results.
“Just SSH into Claude Code” is like the famous HN comment that didn’t understand why anyone was interested in DropBox because you could do backups with shell scripts.
It took all of Peter’s time to move it forward, even with maintainers (who he complained got immediately hired by AI companies).
Now he’s gonna be working on other stuff at OpenAI, so OpenClaw will be dead real quick.
Also I was following him for his AI coding experience even before the whole OpenClaw thing, he’ll likely stop posting about his experiences working with AI as well
In spite of that, it’s incredibly obvious OpenClaw was pushed by bots across pretty much every social media platform and that’s weird and unsettling.
You work for OpenAI now. You don't have to worry about safety anymore.
This is an app that would've normally had a dozen or so people behind it, all acquihired by OpenAI to find the people who really drove the project.
With AI, it's one person who builds and takes everything.
Acquihires haven't worked that way for a while. The new acquihire game is to buy out a few key execs and then have them recruit away the key developers, leaving the former company as a shell for someone else to take over and try to run.
Also OpenClaw was not a one-person operation. It had several maintainers working together.
The creator built a powerful social media following and capitalized on that. Fair play.
There is not much novel about OpenClaw. Anybody could have thought of this or done it. The reason people have not released an agent that would run by itself, edit its own code and be exposed to the internet is not that it's hard or novel - it's because it is an utterly reckless thing to do. No responsible corporate entity could afford to do it. So we needed someone with little enough to lose, enough skill and willing to be reckless enough to do it and release it openly to let everyone else absorb the risk.
I think he's smart to jump on the job opportunity here because it may well turn out that this goes south in a big way very fast.
[1] https://news.ycombinator.com/item?id=46776848
To be fair, when used in retrospect, this applies to just about any big tech company
Sounds like a threat - "I'm joining OpenSkynetAI to bring AI agents onto your harddisc too!"
The sandboxing part matters more than people think. Giving an LLM a browser with full network access and no isolation is a real security problem that most projects in this space hand-wave away.
Multi-provider LLM support (OpenAI, Anthropic, DeepSeek, open-weight models via vLLM). In production with paying customers.
Happy to answer architecture questions.
Big Tech can't release software this dangerous and then figure out how to make it secure. For them it would be an absolute disaster and could ruin them.
What OpenClaw did was show us the future, give us a taste of what it would be like and had the balls to do it badly.
Technology is often pushed forwards by ostensibly bad ideas (like telnet) that carve a path through the jungle and let other people create roads after.
I don't get the hate towards OpenClaw, if it was a consumer product I would, but for hackers to play around to see what is possible it's an amazing (and ridiculously simple) idea. Much like http was.
If you connected to your bank account via telnet in the 1980s or plain http in the 90s or stored your secrets in 'crypt' well, you deserved what you got ;-) But that's how many great things get started, badly, we see the flaws fix them and we get the safe version.
And that I guess is what he'll get to do now.
* OpenClaw is a straw man for AGI *
OpenClaw is mostly a shell around this (ha!), and I've always been annoyed OpenClaw never credited those repos openly.
The pi agent repos are a joy to read, are 1/100th the size of OpenClaw, and have 95% of the functionality.
[0]: https://github.com/badlogic/pi-mono
https://github.com/openclaw/openclaw?tab=readme-ov-file#comm...
i too had plenty of offers, but so far chose not to follow through with any of them, as i like my life as is.
also, peter is a good friend and gives plenty of credit. in fact, less credit would be nice, so i don't have to endure more vibeslopped issues and PRs going forward :)
https://github.com/openclaw/openclaw#community
i think the silver lining is that AI seems to be genuinely good at finding security issues and maybe further down the line enough to rely on it somewhat. the middle period we're entering right now is super scary.
we want all the value, security be damned, and have no way to know about issues we're introducing at this breakneck speed.
still i'm hopeful we can figure it out somehow
But one thing to remember - our job is to figure out how to enable these amazing usecases while keeping the blast radius as low as possible.
Yes, OpenClaw ignores all security norms, but it's our job to figure out an architecture in which agents like these can have the autonomy they need to act, without harming the business too much.
So I would disagree our work is "on the way out", it's more valuable than ever. I feel blessed to be working in security in this era - there has never been a better time to be in security. Every business needs us to get these things working safely, lest they fall behind.
It's fulfilling work, because we are no longer a cost center. And these businesses are willing to pay - truly life changing money for security engineers in our niche.
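One concrete shape for "keeping the blast radius low" is a policy gate between the agent and the shell: autonomous execution only for an allowlist, everything else escalated to a human. A minimal sketch with hypothetical names and rules, not from any real product:

```python
import shlex

# Commands the agent may run unattended; anything else needs a human.
ALLOWED = {"ls", "cat", "grep", "git"}
# Subcommands that mutate remote state are blocked even for allowed tools.
BLOCKED_ARGS = {("git", "push"), ("git", "reset")}


def gate(command: str) -> bool:
    """Return True if the agent may execute `command` autonomously."""
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED:
        return False
    if len(parts) > 1 and (parts[0], parts[1]) in BLOCKED_ARGS:
        return False
    return True


assert gate("ls -la")
assert gate("git status")
assert not gate("rm -rf /")
assert not gate("git push origin main")
```

A real deployment would layer this on top of sandboxing rather than instead of it, since allowlists alone don't contain a compromised tool.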
Change is fraught with chaos. I don't think exuberant trends are indicators of whether we'll still care about secure and high quality software in the long term. My bet is that we will.
I don't believe skimming diffs counts as being left behind. Survivor bias etc. Furthermore, people are going to get burned by this (already have been, but seemingly not enough) and a responsible mindset such as yours will be valued again.
Something that's still up for grabs is figuring out how to do fully agentic workflows in a responsible way. How do we bring the equivalent of skimming diffs to this?
On the other hand, if OpenClaw were structured as a SaaS, this entire project would have burned to the ground the first day it was launched.
So by releasing it as something you needed to run on your own hardware, the security requirement was reduced from essential, to a feature that some users would be happy to live without. If you were developing a competitor, security could be one feature you compete on--and it would increase the number of people willing to run your software and reduce the friction of setting up sandboxes/VMs to run it.
I’m a broken record on this topic but it always comes back to liability.
Another aspect is that we have much higher expectations of machines than humans in regards to fault-tolerance.
Also, when individual drivers accidentally kill somebody in a traffic accident, they’re civilly liable under the same system as entities driving many cars through a collection of algorithms. The entities driving many cars can and should have a much greater exposure to risk, and be held to incomparably higher standards because the risk of getting it wrong is much, much greater.
At this scale of investment countries will have no problem cheapening the value of human life. It's part and parcel of living through another industrial revolution.
This is the genius move at the core of the phenomenon.
While everyone else was busy trying to address safety problems, the OpenClaw project took the opposite approach: They advertised it as dangerous and said only experienced power users should use it. This warning seemingly only made it more enticing to a lot of users.
I’ve been fascinated by how well the project has dodged and avoided any consequences for the problems it has introduced. When it was revealed that the #1 skill was malware masquerading as a Twitter integration, I thought for sure there would be some reporting on the problems. The recent story about an OpenClaw bot publishing hit pieces seemed like another tipping point for journalists covering the story.
Though maybe this inflection point made it the most obvious time to jump off the hype train and join one of the labs. It takes a while for journalists to sync up and decide to flip to negative coverage of a phenomenon after they cover the rise, but now it appears that the story has changed again before any narratives could build about the problems with OpenClaw.
I don't need to think hard to speculate on what might go wrong here - will it answer spam emails sincerely? Start cancelling flights for you by accident? Send nuisance emails to notable software developers for their contribution to society[1]? Start opening unsolicited PRs on matplotlib?
[1] https://news.ycombinator.com/item?id=46394867
The claims being shared by officials at the time were that anyone vaccinated was immune and couldn't catch it. Claims were similarly made that we needed roughly a 60% vaccination rate to reach herd immunity. With that precedent being set, it shouldn't matter whether one person chose not to mask up or get the jab: most everyone else could do so to fully protect themselves, and those who couldn't would only be at risk if more than 40% of the population weren't onboard with the masking and vaccination protocols.
Do we know that 0.1% prevalence of "unvaccinated" AI agents won't already be terrible?
I may be out of touch, but I haven't heard about masks for measles, though it does spread through aerosol droplets so that would be a reasonable recommendation.
Personally I at least wish sick people would mask up on planes! Much more efficient than everyone else masking up or risking exposure.
Those claims disappeared rapidly when it became clear they offered some protection, and reduced severity, but not immunity.
People seem to be taking a lot more “lessons” from COVID than are realistic or beneficial. Nobody could get everything right. There couldn’t possibly be clear “right” answers, because nobody knew for sure how serious the disease could become as it propagated, evolved, and responded to mitigations. Converging on consistent shared viewpoints, coordinating responses, and working through various solutions to a new threat on that scale was just going to be a mess.
I'm in no way taking a side here on whether anyone should have chosen to get vaccinated or wear masks, only that the information at the time being pushed out from experts doesn't align with an after the fact condemnation of anyone who chose not to.
I used to work on industrial lifting crane simulation software. People used it to plan out how to perform big lift jobs to make sure they were safe. Literal "if we fuck this up, people could die" levels of responsibility. All the qualification I had was my BS in CS and two years of experience. It was lucky circumstance that I was actually quite good at math and physics, enough to discover that there were major errors in the physics model.
Not every programmer is going to encounter issues like that, but also, neither can we predict where things will end up. Not every lawyer is going to be a criminal defense lawyer. Not every doctor is going to be a brain surgeon. Not every architect is going to design skyscrapers. But they all do work that needs to be warranteed in some way.
We're already seeing people getting killed because of AI. Brian in middle management "getting to code again" is not a good enough reason.
That was exactly my point. It's one of those things where people deliberately use a word that is technically correct in a context where it doesn't, or shouldn't, hold true. Does this mean I want to stop people from "vibe coding" Flappy Bird? No, of course not, but as per your original comment, yes, there should be stricter regulations when it comes to hiring.
OpenClaw showed what an "AI Personal Assistant" should be capable of. Now it's time to get it in a form-factor businesses can safely use.
The main work he has done to enable personal agents is his army of CLIs, around 40 of them.
The harness he used, pi-mono, is also a great choice because of its extensibility. I was working on a similar project (1) for the last few months with Claude Code, and it's not really the best fit for a personal agent, and it's pretty heavy.
Since I was planning to release my project as a Cloud offering, I worked mainly on sandboxing it, which turned out to be the right choice given OpenClaw is opensource and I can plug its runtime to replace Claude Code.
I decided to release it as opensource because at this point software is free.
1: https://github.com/lobu-ai/lobu
I will say openly: I don't get it and I used to argue for crypto use cases.
A security hole in a browser is an expected invariant not being upheld, like a vulnerability letting a remote attacker control your other programs, but it isn't a bug when a user falls for an online scam. What invariants are expected by anyone of "YOLO hey computer run my life for me thx"?
> What I want is to change the world, not build a large company and teaming up with OpenAI is the fastest way to bring this to everyone.
Do not make me feel all warm and fuzzy: yeah, changing the world with Thiel's money. Try joining a union instead.
Ever since I was four, I've dreamed of doing my part to bring that about.
Whatever the origins of the term, it now seems clear it’s kind of the direction things are going.
Throughout the conversation he speculated on some truly bizarre possible futures, including an oligarchic takeover by billionaires with private armies following the collapse of the USA under Trump. What weirded me out was how oddly specific he got about all the possible futures he was speculating about that all ended with Thiel, Musk, and friends as feudal lords. Either he thinks about it a lot, or he overhears this kind of thing at the ultracapitalist soirées he's been going to.
Guess I’ll have to get a Samurai sword soon and pivot to high stakes pizza delivery.
There are a disturbing amount of parallels between Elon and L Bob Rife.
It’s really disturbing that we have oligarchs trying to eagerly create a cyberpunk dystopia.
Nothing actually bad happened in this case and probably never will. Maybe some people have their crypto or identity stolen, but probably not at a rate significantly higher than background (lots of people are using OpenClaw).
https://www.shodan.io/search?query=http.favicon.hash%3A-8055...
Indeed they are, at least 20,432 people :)
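Context for that number: Shodan's `http.favicon.hash` filter is, as commonly documented, the signed 32-bit MurmurHash3 of the base64-encoded favicon (newline-wrapped, as `base64.encodebytes` produces); the exact encoding details here are an assumption. A pure-Python sketch of how such fingerprints get computed:

```python
import base64


def murmur3_32(data: bytes, seed: int = 0) -> int:
    """Pure-Python MurmurHash3 x86 32-bit, signed (as Shodan reports it)."""
    c1, c2 = 0xCC9E2D51, 0x1B873593
    h = seed
    n = len(data) & ~3
    for i in range(0, n, 4):  # body: 4-byte little-endian blocks
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
        h = ((h << 13) | (h >> 19)) & 0xFFFFFFFF
        h = (h * 5 + 0xE6546B64) & 0xFFFFFFFF
    k = 0
    tail = data[n:]  # 0-3 leftover bytes
    for i in reversed(range(len(tail))):
        k = (k << 8) | tail[i]
    if tail:
        k = (k * c1) & 0xFFFFFFFF
        k = ((k << 15) | (k >> 17)) & 0xFFFFFFFF
        k = (k * c2) & 0xFFFFFFFF
        h ^= k
    h ^= len(data)  # finalization mix
    h ^= h >> 16
    h = (h * 0x85EBCA6B) & 0xFFFFFFFF
    h ^= h >> 13
    h = (h * 0xC2B2AE35) & 0xFFFFFFFF
    h ^= h >> 16
    return h - 0x100000000 if h >= 0x80000000 else h


def favicon_hash(icon_bytes: bytes) -> int:
    # Hash the base64 text *with* the newlines encodebytes inserts.
    return murmur3_32(base64.encodebytes(icon_bytes))
```

Fingerprinting every instance of a product this way is exactly why shipping a distinctive favicon on an insecure-by-default tool is a liability.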
So don’t feel bad. Everything on the internet is fake.
For less than the cost of one graphics card you can get enough people going that the rest of them will hop on board for free just to try and ride the wave.
Add a few LLM-generated comments that don't throw the product in your face but keep it part of the conversation, so someone else does it for you for free, and you are off to the races.
and this is why they bought Peter. i’m betting he will come to regret it.
The tech industry hasn't ever been about "building" in a pure sense, and I think we look back at previous generations with an excess of nostalgia. Many superior technologies have lost out because they were less profitable or marketed poorly.
Right place, right time. It’s too bad you missed out on some good fortune, but it’s a helpful reminder of how much of our paths are governed by luck. Thanks for sharing, and wishing you luck in the future.
Unfortunately, you just have to understand that this happens all over the place, and all you can really do is try to make your corner of the world a little better. We can’t make programmers use good security practices. We can’t make users demand secure software. We can at least try to do a better job with our own work, and educate people on why they should care.
Making users happy > perfect security day one
Erm, is this some groundbreaking revelation?
It's always been that way, unless it's in the context of superior technology with minimal UI, a la Google Search in its early years.
The technology was the killer. Technology providing the right list of results and fast.
OH and believe it or not, this continues to be the core of Google today - they suck at product design and marketing.
I feel like we are arguing semantics though. But IMO any UI that does the job that consumers want well is good UI. Just because it was simple doesn't mean it wasn't good
The future is both amazing and shitty.
Hope OpenClaw continues to evolve. It is indeed an amazing piece of work.
And I hope sama doesn't get his grubby greedy hands on OpenClaw.
I feel like we're living in one of those breathless futurist interviews from a 1994 issue of Wired mag.
I'm assuming there's a typo here, because I can't imagine a flight from LAX to SKO at all, let alone one that goes anywhere close to Honolulu. But I can't figure out what this was supposed to be.
Once my Olares One is here, will also be using local LLMs on open models.
https://one.olares.com/
The real gem inside OpenClaw is pi, the agent created by Mario Zechner. Pi is by far the best agent framework in the world: the most extensible, with the best primitives.
Armin Ronacher, creator of Flask, can go deep and make something like OpenClaw enterprise-ready.
The value of Peter is in connecting the dots, thinking from the user's perspective, and bringing a business perspective.
The trio are friends and have vibecoded VibeTunnel together.
Sam Altman, if you are reading this: get Mario and Armin today.
The feature set is pretty simple:
- Agents that can write their own tools.
- Agents that can write their own skills.
- Agents that can chat via standard chat apps.
- Agents that can install and use cli software.
- Agents that can have a bit of state on disk.
Yet I’ve known many people who have said it is difficult to use; this was a 0.01-0.1% adoption tool. There is still a huge ease-of-use gap to cross before it is adopted by 10-50% of computer users.
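The first two items on that list (agents writing their own tools and skills) reduce, at their core, to a registry the model can extend at runtime. A toy sketch, purely illustrative and not OpenClaw's actual implementation:

```python
from typing import Callable

# Toy self-extending tool registry: the model emits Python source for a
# new tool, and the harness installs it for use in later turns.
TOOLS: dict[str, Callable] = {}


def register_tool(name: str, source: str) -> None:
    """Compile model-written source and expose `name` as a callable tool.
    A real harness would sandbox this; exec() on model output is exactly
    the risk the surrounding thread is arguing about."""
    namespace: dict = {}
    exec(source, namespace)  # noqa: S102 - the whole point, and the danger
    TOOLS[name] = namespace[name]


def call_tool(name: str, *args):
    return TOOLS[name](*args)


# Pretend the model proposed this tool in its last turn:
register_tool(
    "word_count",
    "def word_count(text):\n    return len(text.split())\n",
)
print(call_tool("word_count", "one two three"))  # prints 3
```

The ease-of-use gap is mostly everything around this loop: persistence, review, and sandboxing, not the registry itself.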
do you think the agent admin ui mattered at all?
other contributors while i think of them:
- good timing around opus 4.6 as the default model? (i know he used codex, but willing to bet the majority of openclaws are opuses)
- immediate wins for nontechnical users. everyone else was busy chasing cursor/cognition or building horizontal stuff like turbopuffer or whatever. this one was straight up "hook up a good bot to telegram"
- there's been many attempts at a "personal OS" or "assistant", but no good open source ones? a lot of sketchier chinese ones, this was the first western one
Thank you, we are already fucked. I am a hypocrite, of course.
1. OpenAI is saying with this statement "You could be multimillion while having AI do all the work for you." This buy out for something vibe coded and built around another open source project is meant to keep the hype going. The project is entirely open source and OpenAI could have easily done this themselves if they weren't so worried about being directly liable for all the harms OpenClaw can do.
2. Any pretense for AI Safety concerns that had been coming from OpenAI really fall flat with this move. We've seen multiple hacks, scams, and misaligned AI action from this project that has only been used in the wild for a few months.
3. We've yet to see any moats in the AI space and this scares the big players. Models are neck and neck with one another and open-source models are not far behind. Claude Code is great, but so is OpenCode. Now Peter has used AI to program a free app for AI agents.
LLMs and AI are going to be as disruptive as Web 1 and this is OpenAI's attempt to take more control. They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released. If he can build things like this what's stopping everyone else? Better to control the most popular one than try to squash it. This is a powerful new technology and immense amounts of wealth are trying to control it, but it is so disruptive they might not be able to. It's so important to have good open source options so we can create a new Web 1.0 and not let it be made into Web 2.0
This is a great take and hasn't been spoken about nearly enough in this comment section. Spending a few million to buy out Openclaw('s creator), which is by far the most notable product made by Codex in a world where most developer mindshare is currently with Claude, is nothing for a marketing/PR stunt.
Why would he care if Sam cares about him?
If you want to bring other sources into the conversation, you could link,
or at least reference them by name upfront, right?
Whether the impact is large in magnitude or positive is irrelevant in a world where one can spin the truth and get away with it.
I think all of these comments about acquisitions or buy outs aren’t reading the blog post carefully: The post isn’t saying OpenClaw was acquired. It’s saying that Pete is joining OpenAI.
There are two sentences at the top that sum it up:
> I’m joining OpenAI to work on bringing agents to everyone. OpenClaw will move to a foundation and stay open and independent.
OpenClaw was not a good candidate to become a business because its fan base was interested in running their own thing. It’s a niche product.
I'd love to be wrong, but the blog post sounds like all the standard promises were made, and that's usually how these things go.
OpenClaw’s promise and power was that it could tread places, security-wise, that no other established enterprise company could, by not taking itself seriously and exploring what is possible with self-modifying agents in a fun way.
It will meet the same fate as Manus. Instead of Manus helping Meta make ads better, OpenClaw will help OpenAI with enterprise integrations.
[Emphasis mine.]
That's a superpower right up to the moment everyone realizes that handing out nukes isn't "promise and power".
Unless by promise and power we are talking about chaos and crime.
The project is incredible. We are seeing something important: how versatile these models are given freedom to act and communicate with each other.
At the same time, it is clearly going to put the internet at risk. Bad actors are going to use OpenClaw and its "security-wise" freedoms in nefarious ways. Curious people are going to push AIs with funds onto prepaid servers, then let them sink or swim with regard to agentic acquisition of survival resources.
It is all kinds of crazy from here.
The internet is a wild west of privacy, security, social and ethical holes an army of grifters routinely drive through. And in the case of some famous big firms, leverage and magnify at scale.
That is bad enough.
But setting up a horde of intelligent beings, so that those holes are their critical path to survival, is like pouring poison into the water supply to see what happens.
Can’t argue with “interesting”. It is that.
It appears to be more of a typical large-company (BIG) market-share protection purchase at minimal cost, using information asymmetry and timing.
BIG hires small team (SMOL) of popular source-available/OSS product P before SMOL realizes they can compete with BIG and before SMOL organizes effort toward such along with apt corporate, legal, etc protection.
At the time of purchase, neither SMOL nor BIG know yet what is possible for P, but SMOL is best positioned to realize it. BIG is concerned SMOL could develop competing offerings (in this case maybe P's momentum would attract investment, hiring to build new world-model-first AIs, etc) and once it accepts that possibility, BIG knows to act later is more expensive than to act sooner.
The longer BIG waits, the more SMOL learns and organizes. Purchasing a real company is more expensive than hiring a small team, purchasing a company with revenue/investors, is more expensive again. Purchasing a company with good legal advice is more expensive again. Purchasing a wiser, more experienced SMOL is more expensive again. BIG has to act quickly to ensure the cheapest price, and declutter future timelines of risks.
Also, the longer BIG waits, the less effective are "Jedi mind trick" gaslighting statements like "P is not a good candidate for a business", "niche", "fan base" (BIG internal memo - do not say customers), "own thing".
In reality in this case P's stickiness was clear: people allocating 1000s of dollars toward AI lured merely by P's possibilities. It was only a matter of time before investment followed course.
I've experienced this situation multiple times over the course of BrowserBox's life. Multiple "BIG" (including ones you will all know) have approached with the same kind of routine: hire, or some variations of that theme with varying degrees of legal cleverness/trickery in documents. In all cases, I rejected, because it never felt right. That's how I know what I'm telling you here.
I think when you are SMOL it's useful to remember the Parable of Zuckerberg and the Yahoos. While the situation is different, the lesson is essentially the same. Adapted from the histories by the scribe named Gemini 3 Flash:
I think it’s good PR (particularly since Anthropic’s actions against OpenCode and Clawdbot were somewhat controversial), plus Peter was able to build a hugely popular thing and would clearly be valuable to have on the team building something along the lines of Claude Cowork. I would expect these future products to be much stronger from a security standpoint.
This was already an ongoing issue prior to 3rd party tools using Claude subscriptions, there are reports of false positive automated bans going back for several months.
I have not seen or heard of this happening w/ Codex, and rather than trying to shut down 3rd party tools that want to integrate with their ecosystem they have worked with those projects to add official support.
I’m more impressed with Codex as a product in general as well. Their new desktop app is great & feels an order of magnitude better than Claude’s.
Overall HN crowd seems heavily biased in favor of Anthropic (or maybe just against OpenAI?) but IMO Anthropic needs to take a step back and reset. If they keep on the current path of just making small iterative improvements to Claude Code and Claude Desktop they are going to fall very far behind.
3 is always a result of GTM and distribution - an organization that devotes time and effort into productionizing domain-specific models and selling to their existing customers can outcompete a foundation model company which does not have experience dealing with those personas. I have personally heard of situations where F500 CISOs chose to purchase Wiz's agent over anything OpenAI or Anthropic offered for Cloud Security and Asset Discovery because they have had established relations with Wiz and they have proven their value already. It's the same way that PANW was able to establish itself in the Cloud Security space fairly early because they already established trust with DevOps and Infra teams with on-prem deployments and DCs so those buyers were open to purchasing cloud security bundles from PANW.
1 has happened all the time in the Cloud space. Not every company can invent or monetize every combination in-house because there are only so many employees and so many hours in a week.
2 was always more of an FTX and EA bubble, because EA adherents were over-represented in the initial mindshare for GenAI. Now that EA is largely dead, AI Safety and AGI in its traditional definition have disappeared - which is good. Now we can start thinking about "Safety" in the same manner we think about "Cybersecurity".
> They're as excited as they are scared, seeing a one man team build a hugely popular tool that in some ways is more capable than what they've released
I think that adds unnecessary emotion to how platform businesses operate. The reality is, a platform business will always be on the lookout to incorporate avenues to expand TAM, and despite how much engineers may wish, "buy" will always outcompete "build" because time is also a cost.
Most people ik working at these foundation model companies are thinking in terms of becoming an "AWS" type of foundational platform in our industry, and it's best to keep Nikesh Arora's principle of platformization in mind.
---
All this shows is that the thesis that most early stage VCs have been operating on for the past 2 years (the Application and Infra layer is the primary layer to concentrate on now) holds. A large number of domain-specific model and app layer startups have been funded over the past 2-3 years in stealth, but will start a publicity blitz over the next 6-8 months.
By the time you see an announcement on TechCrunch or HN, most of us operators were already working on that specific problem for the past 12-16 months. Additionally, HNers use "VC" in very broad and imprecise strokes and fail to recognize what are Growth Equity (eg. the recent Anthropic round) versus Private Equity (eg. Sailpoint's acquisition and then IPO by Thoma Bravo) versus Early Stage VC rounds (largely not announced until several months after the round unless we need to get an O1A for a founder or key employee).
Define "hugely popular" relative to the scale of users of OAI... personally, this thread is the first time I've heard of OpenClaw.
Additionally, much of the conversation I've seen was amongst practitioners and Mid/Upper Level Management who are already heavy users of AI/ML and heavy users of Executive Assistants.
There is a reason why if you aren't in a Tier 1 tech hub like SV, NYC, Beijing, Hangzhou, TLV, Bangalore, and Hyderabad you are increasingly out of the loop for a number of changes that are happening within the industry.
If you are using HN as your source of truth, you are going to be increasingly behind on shifts that are happening - I've noticed that anti-AI Ludditism is extremely strong on HN when it overlaps with EU or East Coast hours (4am-11am PT and 9pm-12am PT), and West Coast+Asia hours increasingly don't overlap as much.
I feel this is also a reflection of the fact that most Bay Area and Asia HNers are most in-person or hybrid now, thus most conversations that would have happened on HN are now occurring on private slacks, discords, or at a bar or gym.
Participation in the Zeitgeist hasn't been regional in a decade.
A lot of teams explicitly did that for OpenClaw as well. Letta and Mastra are similar but didn't have the right kind of packaging (targeted at Engineers - not decisionmakers who are not coding on a daily basis).
> Participation in the Zeitgeist hasn't been regional in a decade
I strongly disagree - there is a lot of stuff happening in stealth or under NDA, and as such a large number of practitioners on HN cannot announce what they are doing. The only way to get a pulse of what is happening requires being in person constantly with other similar decisionmakers or founders.
A lot of this only happens through impromptu conversations in person, and requires you to constantly be in that group. This info eventually disperses, but often takes weeks to months in other hubs.
As the (dubiously attributed) Picasso quote goes: "When art critics get together they talk about Form and Structure and Meaning. When artists get together they talk about where you can buy cheap turpentine." Most of HN is the former, constantly theorizing, philosophizing, often (but not always) in a negative and cynical way. This isn't conducive to discussion of methods of art. Sadly I just speak with friends working on other AI things instead.
Someone like simonw can probably get better reactions from this community but I don't bother.
I am in one of these tech hubs (Bangalore) and I have never seen any such practitioner pervasively using these "AI executive assistants". People use chatgpt and sometimes the AI extensions like copilot. Do I need to be in HSR layout to see these "number of changes"?
I think that name lasted about 24 hours, but it was long enough to spawn MoltBook.
Letta (MemGPT) has been around for years and frameworks like Mastra have been getting serious Enterprise attention for most of 2025. Memory + Tasks is not novel or new.
Is it out of the box nature that's the 'biggest' development? Am I missing something else?
If you can provide any sort of tool that can reduce mundane work for a decisionmaker with a title of Director and above, it can be extremely powerful.
Let's take the safety point. Yes, OpenClaw is infamously not exactly safe. Your interpretation is that, by hiring Peter, OpenAI must no longer care about safety. Another interpretation, though, is that offered by Peter himself, in this blog post: "My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research." To conclude from this that OpenAI has abandoned its entire safety posture seems, at the very least, premature and not robustly founded in clear fact.
Thread: https://news.ycombinator.com/item?id=47008560
Other words removed:
From the thread you linked, there's a diff of mission statements over the years[0], which reveals that "safely" (which was only added 2 years prior) was removed only because they completely rewrote the statement into a single, terse sentence.
There could be stronger evidence that OpenAI is deemphasizing safety, but this isn't it.
[0]: https://gist.github.com/simonw/e36f0e5ef4a86881d145083f759bc...
/s
As for Altman, I'm left with a similar question. For a man who routinely talks about the dangers of AI and how it poses an existential threat to humanity, he sure doesn't spend much focus on safety research and theory. Yes, they do fund these things, but the efforts pale in comparison. I'm sorry, but to claim something might kill all humans and potentially all life is a pretty big claim. I don't trust OpenAI on safety because they routinely do things in unsafe ways. Like when they released Sora allowing people to generate videos in the likeness of others. That helped it go viral. And then they implemented some safety features. A minimal attempt to refuse the generation of deepfakes is such a low safety bar. It shows where their priorities are, and it wasn't the first time nor the last.
Exited his first company for $110M, then some years of the whole huasca-and-forest thing, then started creating projects.
Clawdbot (later openclaw) was his 44th try.
And Peter created what looks a lot like a giant scam/malware-as-a-service and then just left it, without taking responsibility or making it safe.
"This guy was able to vibe code a major thing" is exactly the reason they hired him. Like it or not, so-called vibe coding is the new norm for productive software development and probably what got their attention is that this guy is more or less in the top tier of vibe coders. And laser focused on helpful agents.
The open source project, which will supposedly remain open source and able to be "easily done" by anyone else in any case, isn't the play here. The whole premise of the comment about "squashing" open source is misplaced and logically inconsistent. Per its own logic, anyone can pick up this project and continue to vibe out on it. If it falls into obscurity it's precisely because the guy doing the vibe coding was doing something personally unique.
Security (and accessibility) are reluctant minimum effort check boxes at best. However, my experience is focused on court management software, so maybe these aspects are taken more seriously in other areas of government software.
More like the same as it always has been.
https://github.com/steipete
It also probably didn't hurt that he favors Codex over Claude.
The original name of his ai assistant tool was 'clawdbot' until Anthropic C&D'ed him. All the examples and blog posts walking thru new user setup on a mac mini or VPS were assuming a claude code max account.
I know he uses many llms for his actual software dev.. - right tool for the job. But the origins of openclaw seem to me more rooted in claude code than codex.
Which does give the whole story an interesting angle when you consider the safety/alignment angle that Anthropic pledges to (publicly) and OpenAI pretty much ignores (publicly). Which is ironic, as configuring codex cli to 'full yolo mode' feels more burdensome and scary than in Claude Code. But I'm pretty sure that speaks more to eng/product decisions, and not CEO & biz strategy choices.
https://steipete.me/posts/2025/shipping-at-inference-speed
I've also seen later tweets of his that also confirms that codex is still his choice.
Peter's been running agents overnight 24/7 for almost a year using free tokens from his influencer payments to promote AI startups and multiple subscription accounts.
Alright
OpenAI needs a popular consumer tool. Until my elderly mother is asking me how to install an AI assistant like OpenClaw, the same way she was asking me how to invest in the "new blockchains" a few years ago, we have not come close to market saturation.
OpenAI knows the market exists, but they need to educate the market. What they need is to turn OpenClaw into a project that my mother can use easily.
This is true, and also true for many other areas OpenAI won't touch.
The best get rich quick scheme today (arguably not even a scheme) is to test the waters with AI in an area OpenAI would not/cannot for legal, ethical, or safety reasons.
I hate to agree with OpenAI's original "open" mission here, but if you don't do it, someone else somewhere will.
And as much as their commitment to safety is just lip service, they do have obligations as a big company with a lot of eyeballs on them to not do shady things. But you can do those shady things instead and if they work out ok, you will either have a moat or you will get bought out. If that's what you want.
I don't know if you'll achieve that at OpenAI or if it'll even be a good change for the world, but I genuinely wish you the best. Regardless of the news around OpenAI I still think it's great that a personal project got you a position at a company like that.
What we know for sure is that he is not committed to the people who trusted him or his project. Consider the project dead. He kinda fits the OpenAI mindset: those people also say the right words, use the right terms, and do what benefits them personally.
This isn't a Slay The Spire reference is it?
As per the new terms of service, the ads are already in
But secondly, personal agents can be great for OpenAI: if the user isn't even interacting with the AI and is just letting it go off autonomously, then you're basically handing your wallet to the AI, and if the model underlying that agent is OpenAI's, you're handing your wallet to them.
Imagine for a second that a load of stuff is now being done through personal agents, and suddenly OpenAI release an API where vendors can integrate directly with the OpenAI agent. If OpenAI control that API and control how people integrate with it, there's a potential there that OpenAI could become the AppStore for AI, capturing a slice of every penny spent through agents. There's massive upside to this possibility.
Props to this guy for scamming Altman this hard without writing a single line of code, or really doing anything at all other than paying for a bunch of github stars and tweets/blogposts from fellow grifters.
My guess is this guy has taken a job for maybe $1M, effectively handing over the crown jewels to Altman for nothing.
OpenAI must be laughing their heads off.
Beads and blankets.
MongoDB – ~30 B USD
Docker – Private (~2+ B USD last valuation)
Redis Ltd. – Private (~2 B USD last valuation)
Grafana Labs – Private (~6 B USD last valuation)
Confluent (Apache Kafka) – ~11 B USD
Cloudera (Apache Hadoop) – 5.3 B USD (acquired)
SUSE Linux – ~2.5 B USD
Red Hat – 34 B USD (acquired)
HashiCorp – 6.4 B USD (acquired)
lol. what?
Please dispense with the “change the world” bullshit.
I understand that it’s healthy to celebrate your personal victories but in this context with this bro going to OpenAI to make 7 figures, maaaan I don’t think this guy needs our clicks.
On top of that there’s a better than 50% chance OpenAI suffocates the open source project and the alternative will be a paid privacy nightmare.
And I’m not going to celebrate the success of multimillionaires who are quitting their passion projects to join the evil empire to “change the world” by making the lives of the working class worse and transferring more wealth to the top.
Someone in OP’s position of success has the means to make the choice to not work with a Palantir collaborator, but they chose to go for it.
The fact that I make a declining share of peanuts compared to this AI bro selling his soul to serial liar Sam Altman isn’t “about me,” it’s about “me” as in “the working class.”
This is the core of why it’s distasteful for the most excessively privileged people and their enablers to celebrate their wins, and why I feel no obligation to celebrate alongside them nor keep my distaste for them to myself.
Regular people are beyond sick and tired of tech bros like OP trying to “change the world” by shipping our jobs to data centers and shoving depression apps down our childrens’ throats. Now they want us to celebrate with them as they get paid massive salaries and stock awards to design the robots that will finally replace the last bastions of human interaction and craftsmanship.
OpenAI has two real competitors: Anthropic in the enterprise space and Google in the consumer space. Google fell far behind early on and ceded a lot of important market share to ChatGPT. They're catching up, but the runaway success of ChatGPT provides OpenAI with a huge runway among consumers.
In the enterprise space, OpenAI's partnership with Microsoft has been a gold mine. Every company on the planet has a deep relationship with Microsoft, so being able to say "hey just add this to your Microsoft plan" has been huge for OpenAI.
The thing about enterprise is the stakes are high. Every time OpenAI signals that they're not taking AI safety seriously, Anthropic pops another bottle of champagne. This is one of those moments.
Again, I doubt it matters much either way, but if OpenAI does end up blowing up, decisions like this will be in the large pile of reasons why.
Claiming Dario is the bad guy in any context is kind of a tough characterization to agree with, if even a fraction of one interview with him has been seen.
To stay on point though: OpenAI hiring OpenClaw creator does seem to lean away from a serious enterprise benefit and towards a more consumer-based tack, which is a curious business move considering the original comments perspective of OpenAI.
Honestly, Anthropic really dropped the ball here. They could have had such an easy integration and gained invaluable research data on how people actually want to use AI — testing workflows, real-world use cases, etc. Instead, OpenAI swoops in and gets all of that. Massive missed opportunity.
To get a sense of what this guy was going through listen to the first 30 mins of Lex’s recent interview with him. The cybersquatting and token/crypto bullshit he had to deal with is impressive.
The guy is a multi millionaire from selling his old software so I'm doubtful it is about the money for him at this point as much as it is the experience working with this tech on another level.
But wait. Here he comes. Hero of the hour. Sam Altman.
Let's take that wildly dangerous, lightly thought through product, and give it the backing of the leading AI lab. Let's take all that pending liability and slap it straight onto the largest private company in AI.
I think this is true for a lot of vibe coded applications. Never thought about calling it an art project but it hits home.
People can do that? I always assumed Lex was a CIA psyop to experiment with the ability to make people sleep on demand.
We have someone who vibe coded software with major security vulnerabilities. This has been reported by many folks.
We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
Then, after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any of the code they developed? Well, that's what happened here.
In this timeline, I'm not sure I find anything inspiring here. It's telling me that I should rather focus on going viral or getting lucky to get a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents. I shouldn't write secure software; instead I should write software that can go viral. Are companies hiring for virality or merit these days? What is even happening here?
So am I jealous, yes because this timeline makes no sense as a software engineer. But am I happy for the guy, yeah I also want to make lots of money someday.
This person created a bot factory. It's safe to assume that most of the engagement is coming from his own creation. This includes tweets, GitHub stars, issues and PRs, and everything else. He made a social network for bots, FFS.
He contributed to the dead internet more than any single person ever. And is being celebrated for it. Wild times.
Matt Schlicht made Moltbook, not Peter Steinberger.
https://news.ycombinator.com/item?id=15713801
A vibe coder being hired by the provider of the vibe coding tools feels like marketing to sell the idea that we should all try this because we could be the next lucky ones.
IMHO, it'd be more legitimate if a company that could sustain itself without frequent cash injections hired them because they found value in their vibe skills.
Kinda like the Apple Newton
My parents used it occasionally, and I remember them/us demonstrating it to other parents. The software was supplied on a CD-ROM, and it connected to the internet only to download the stock list and place the order.
He’s not just a “vibe coder”.
Quality of code has never had anything to do with which products are successful. I bet both YouTube's and Facebook's codebases are tangled messes.
The goal is delivering a useful product to someone, which just requires secure enough, optimized enough, efficient enough code.
Some see the security, optimization, or efficiency of the code itself as the goal. They'll be replaced.
This is made more complicated by the fact that where the balance lies depends on the people working on the code - some developers can cope with working in a much more of a mess than others. There is no objective 'right way' when you're starting out.
If you have weeks of runway left spending it refactoring code or writing tests is definitely a waste of time, but if you raise your next round or land some new paying customers you'll immediately feel that you made the wrong choices and sacrificed quality where you shouldn't have. This is just a fact of life that everyone has to live with.
In more minor markets like Europe/Australia it seems to be a lot less leetcode and a lot more (1) experience (2) degree (3) actual interview performance
Facebook PHP Source Code from August 2007: https://gist.github.com/nikcub/3833406#file-index-php
We should all try and be more like John Carmack.
It wasn't just for the sake of quality and best practices, it defined and had an impact on the product experience.
Like Doom probably wouldn't have been as successful if it was any other way.
And only people on the older end of the spectrum have seen Carmack working in his element back in the day.
The things I want people to take from a guy like John Carmack, or Jon Blow, or Lukas Pope, or Ron Gilbert, or Tim Schafer, or Warren Spector, or Sam Lake, or David Cage god forbid...is pure curiosity and pushing the boundaries to make that real.
In every case there is a mix of a deep and unusual urge to make an idea happen with an affinity towards the technicality of it.
I bring Sam Lake into this because nobody has blended FMV with gameplay the way Remedy have and pushed the boundary on it.
Visionaries are important, but they’re a small part of what makes a successful organization. The majority hinges on disciplined engineers who understand the plan, work within the architecture, and ship what’s needed.
As Victor Wooten once said: "If you’re in the rhythm section, your job is to make other people sound better." That’s what most engineering positions actually are and there’s real skill and value in doing that well.
This is just wrong. Plenty of examples of crap code causing major economic losses.
"Rick Rubin says he barely plays any instruments and has no technical ability. He just knows what he likes and dislikes and is decisive about it."
https://www.cbsnews.com/news/rick-rubin-anderson-cooper-60-m...
https://en.wikipedia.org/wiki/Rick_Rubin_production_discogra...
No, you get hired for your perceived ability to (…)
The world is full of Juliuses, which is a big reason everything sucks.
https://ploum.net/2024-12-23-julius-en.html
I have met quite a few people who are more focussed on the business than the technology, but those people tend to end up in jobs where the main problems aren't actually technical. Which, let's be honest, is the case in very many tech jobs.
It is always like this. Your ability to socialize will bring you further than any other skillset. The Kennedys for example manufactured their status by socializing. Industry is no different.
If you think this was about IP addresses, well ...
I present, our contact form.
And generational wealth and serious political power.
https://en.wikipedia.org/wiki/John_F._Fitzgerald
https://en.wikipedia.org/wiki/Joseph_P._Kennedy_Sr.
> Industry is no different.
Based on these comments, maybe some self-reflection is in order, as it seems from the 80% comment that what you mean is that 80% of people are able to adequately communicate.
In this case at least it's definitely more than that. Ever since LLMs became a thing, there has been a constant search to find it's "killer app". Given the steep rise in popularity, regardless of the problems, that is now OpenClaw. As they say, the proof's in the pudding; this guy has created something highly desirable by the many.
Talking to bots on Telegram isn't new.
Running agentic loops isn't new.
Giving AI credentials and having it interface with APIs isn't new.
Triggering AI jobs from external event queues isn't new.
Parking state between AI jobs in temp files isn't new.
Putting it all together in one product and marketing it to the right audience? New.
At the end, only time will tell how much there really is to this.
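For readers who haven't tried it, the pattern the list above describes really is that simple to wire up. Here's a minimal, purely hypothetical sketch (not OpenClaw's actual code — all names and the fake model call are invented for illustration) of an agent job that parks its state in a temp file between runs:

```python
import json
import tempfile
from pathlib import Path

# Hypothetical sketch of the pattern described above: each "agent job"
# restores state left by the previous run, does its work, and parks
# state again in a temp file. Not OpenClaw's implementation.

STATE_FILE = Path(tempfile.gettempdir()) / "agent_state.json"

def load_state() -> dict:
    """Restore whatever the previous run left behind, or start fresh."""
    if STATE_FILE.exists():
        return json.loads(STATE_FILE.read_text())
    return {"history": []}

def save_state(state: dict) -> None:
    STATE_FILE.write_text(json.dumps(state))

def fake_llm(prompt: str) -> str:
    """Stand-in for a model call; a real agent would hit an API here."""
    return f"ack: {prompt}"

def handle_message(text: str) -> str:
    """One agent job: load state, act, park state again."""
    state = load_state()
    reply = fake_llm(text)
    state["history"].append({"user": text, "agent": reply})
    save_state(state)
    return reply

if __name__ == "__main__":
    STATE_FILE.unlink(missing_ok=True)  # start clean for the demo
    # Two separate "jobs" share memory only through the temp file.
    handle_message("remind me to buy milk")
    handle_message("what did I ask?")
    print(len(load_state()["history"]))  # 2
```

Hook `handle_message` up to a Telegram webhook or an event queue and you have the skeleton of the whole product — which is rather the commenter's point: the parts are mundane, the packaging is the innovation.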
It often does, if killer app means popular app.
And I'm not talking about just any kind of assistant, because those are already existing for decades now with various degrees of competence and all kind of flavours.
I have a feeling OpenClaw et al. will only still exist if somehow all of the gaping security holes are ever able to be closed and through some sort of magic, less than 5% of the users get hacked within the next year, but I'm not sure it's even possible to close those holes, since the entire point and usefulness of such tools is to give them root access and set them completely free.
Hugely underestimated comment. That's pretty much the entire point here. Many people didn't know something with these capabilities was already possible. Or some - like me - knew of the potential, but couldn't be bothered/didn't have the time to put the bits together in a satisfactory flow (I'm currently exploring and building on nanobot[0], which is directly inspired by OpenClaw; didn't touch OC because it's in JS and I'm a Python person). Everything came together really well, which is why it's a "killer app". And now the dam has burst there will be customized takes on the concept all over the place (I'm also aware of a Rust "port", Moltis[1]), taking the idea to next levels.
[0] https://github.com/HKUDS/nanobot [1] https://github.com/moltis-org/moltis
Your boss liked Julius. People liked Julius
You're not going to convince people they have to pay more attention to the technical guy who can't string a thought together and answers in a grumpy mood.
Be more like Julius and you might get more of his laurels
Good luck with that
I've lived in China (as a foreigner) and they have a word for Juliuses. They call them the 'cha bu duo xiansheng' = the 'Mr. Almost ok'.
Story! Long ago, very long ago, I was working at a tiny Web company. Not very technical, though the designers were solid and the ops competent.
We once ended up hosting a site that came under a bit of national attention during an event that this site had news about. The link started circulating broadly, the URL mentioned on TV, and the site immediately buckled under the load.
The national visibility of the outage as well as the opportunity cost for the customer were pretty bad. Picture a bunch of devs, ops, sales and customer wrangling people, anxiously packed around the keyboard of the one terminal we managed to get logged into the server.
That, and Julius, the recently hired replacement CTO.
Julius, I still suspect, was selected by the previous CTO, who was not delighted about his circumstances, as something of a revenge. Early on, Julius scavenged the design docs I was trying to put together at the time to get the teams out of constant firefighting mode, and then started misquoting them, mispronouncing the technical terms. He did so confidently and engagingly. The salespeople liked him, at first.
The shine was starting to come off by the time that site went down. In a company that's too small for teams to pick up the slack from a Julius forever, that'll happen eventually.
So here we were, with one terminal precariously logged into the barely responding server, and a lot of national eyes on us. This was the early days of the Web. Something like Cloudflare would not exist for years.
So it fell on me. My idea was that we needed to replace the page at the widely circulated URL with a static version, and do so very, very fast. I figured that our Web servers were usually configured to serve index.html first if present, with dynamic rendering only occurring if not. So I ended up just using wget on localhost to save whatever was being dynamically generated as index.html, and let the server just serve that for the time being.
This was not perfect and the bits that required dynamic behavior were stuck frozen, but that was an acceptable trade-off. And the site instantly came back up, to the relief of everyone present.
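The real fix was a one-liner run in the docroot — something like `wget -q -O index.html http://localhost/` — relying on the server preferring a static index.html over dynamic rendering. As a runnable sketch of the same idea (all servers and paths here are simulated, not the original setup):

```python
import tempfile
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

# Simulated version of the snapshot trick from the story above.
# A stand-in "dynamic" server renders the page; snapshot_page fetches
# it once and freezes the result as a static index.html in the docroot.

class DynamicHandler(BaseHTTPRequestHandler):
    """Stand-in for the slow, dynamically rendered page."""
    def do_GET(self):
        body = b"<html>expensively rendered page</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def snapshot_page(url: str, docroot: Path) -> Path:
    """Fetch the page once and save it as a static index.html."""
    html = urllib.request.urlopen(url).read()
    target = docroot / "index.html"
    target.write_bytes(html)
    return target

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), DynamicHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    docroot = Path(tempfile.mkdtemp())
    url = f"http://127.0.0.1:{server.server_port}/"
    static = snapshot_page(url, docroot)
    print(static.read_text())  # the frozen copy the server can now serve
    server.shutdown()
```

Delete the index.html once the spike passes and the server falls back to dynamic rendering — the same rollback the story implies.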
A few weeks later, the sales folks, plus Julius, went to pitch our services to a new customer prospect. I bumped into one of them at the coffee machine right afterwards. His face said it all. It had not gone well.
Our eyes met.
And he said, with all the tiredness in the world: "He tried to sell them the 'wget optimizer'..."
1. Shut down or shutting down (e.g. team reduced by > 50% since I've been there)
2. Julius removed, endlessly seeking work, keeps getting fired, and can't find a place to call home
The meteoric rise of a Julius is the exception - sooner or later their lucky streak ends and they face the cliff of adversity towering above them, with no way to climb it and no skills to help them actually do so.
https://shs.cairn.info/revue-cites-2020-2-page-137?lang=fr
I did enjoy your link though.
IMO, all you can really do around one is try to focus on yourself. Or get away as fast as you can, depending on the situation.
There's lots of people that won't care about the code: executives, managers, customers etc. If the engineers don't care either, then who cares?
If we compare with big food companies, that's like their food formula. No one thinks it's useless - it's the source code for the product they sell. Yet nowadays we get so many engineers distancing themselves away from the code, like the software formula doesn't matter.
There are diminishing returns, but overall good code goes hand in hand with good products, it's just a different side of it.
If this were true, we wouldn't be studying Leet code and inverting binary trees to get a job.
I guess the lesson here is that unless you have a direct line with upper management to skip the line, you'll be stuck grinding algorithms for the rest of your life.
Big companies may have separate hiring SWE departments where the initial interviewers don't even know what team or role you may land in, so they have to resort to something...
The programmer who delivers useful products is probably hired by Microsoft? Or worse, Boeing. Or Toyota. Some NTSB people, or Michael Barr, are happy to tell you details about the number of dead people those products created.
Or, after that, they blame the user. It wasn't pilot error, because they didn't train the pilots to immediately turn off MCAS. And it wasn't driver error, because they didn't train drivers to lift the foot and start braking again. Who, in a power plant, is expected to read the emergency manual after an earthquake? You are responsible. There, a team lead makes ~$4000 net per month. So not poverty, but not great either.
But if you did build a core innovation in aerospace that went viral I'm sure Airbus would be interested in hiring you.
The salary would be 3K per month. And lunch coupons to buy a ham baguette.
There are only so many companies that think of themselves as safety-first. In practice, basically all companies work on things that should be safety-first.
Does your software store user data? Congrats, you are now on the hook for GDPR and a bunch of similar data handling regulations.
Does your software include a messaging component? You are now responsible for moderating abusive actors in your chat.
Does your software allow users to upload images? Now you are a potential distribution vector for CSAM.
And so on... safety isn't just for things which can cause immediate death and dismemberment
Agreed, though I think that if GDPR fines were actually being levied at the recommended 4% of global revenue, we'd start treating them more similarly to a 737 crash.
> The inconvenience and economic cost of your Discord messages leaking is not the same category of harm as your pacemaker controller failing
Sort of depends who they leak to. Your teen classmates who bully you to suicide? Your abusive ex who is trying to track you down to kill you? The 3-letter agency who is trying to rendition your family to an internment camp?
There are a lot of seemingly benign failure modes that become extremely lethal given the right circumstances. And because we acknowledge the potential lethality of something like a pacemaker failure, we have massive infrastructure dedicated to their mitigation (EMT teams, emergency external pacemakers, surgical teams who can rapidly place new leads, etc). For things society judges less important, mitigations are often few and far between
It allows you to avoid defining the point in time, and likewise the frame of the time span's endpoints.
Some languages allow you to use that type of tense, and it's somewhat of a language gap, I suppose. I have no idea which other languages or proto-languages allow that tense, but I've seen some Slavic and maybe Finnish(?) natives use it in English, too.
Maybe someone more versed in these matters has better examples?
[1] https://de.wikipedia.org/wiki/Plusquamperfekt
Maybe “hadn’t trained” is even better; it makes sense when ordering events in time. But I don’t trust LLMs an inch. They make up options for git[1], and both GCC and Clang often immediately tell me that the LLM is lying.
Cookieengineer and illichosky are right.
[1] Considering that man pages exist, it shows how useless their harmful crawlers are.
When OpenAI tells someone that suicide isn't that bad, or that some bs supplement could be the best thing to treat their cancer, or does anything else that has a negative outcome, the consequences are basically zero. That's even though any single failure mode like that probably kills a lot more people per year than Boeing.
It seems these companies know this, and know how little responsibility is placed on them, so they act accordingly.
Hard disagree. I foresee the opposite being true. I think the ability to understand and write secure, well optimized, performant code will become more and more niche and highly desired in order to fix the mess the vibe coders are going to leave behind.
This is such a bad take and flat out wrong. Your ability to deliver and maintain features is directly impacted by the quality of the code. You can ship a new slop project every day if you like, but in order for it to scale or manage real traffic and usage you need to have a good foundation. This is such a bad approach to Software engineering.
The most successful engineers are the ones who can accurately assess the trade-offs regarding those things. The things you list still may be critical for many applications and worth obsessing over.
The question becomes can we still achieve the same trade-offs without writing code by hand in those cases.
That’s an open question.
"Quality doesn't matter" people are why I'm not worried about employment. While there is value in getting features out fast, definitely, there always comes a point on your scaling journey where you have to evolve the stack structure for the purpose of getting those features out fast sustainably. That's where the quality of the engineering makes a difference.
(Anecdotally, the YouTube codebase may be locally messy, but its overall architecture is beautiful. You cannot have a system that uploads, processes, encodes, stores, and indexes massive amounts of videos every hour of every day that in the overwhelming majority of cases will be watched less than 10 times, and still make a profit, without some brilliant engineering coming in somewhere.)
Quality matters, delivery speed matters, shipping also matters, where it matters and when it matters is much harder to get right. But it's also self correcting - if you don't, the project or business die - you can only get it wrong for so much or for so long.
To only discuss on one axis is presumably why GNU Hurd have never shipped or how claude-c-compiler doesn't compile hello world.
This has been reliably going on for at least 6+ months, I thought shorts was a big priority for them, but the UX is and remains horrible.
Or, in this case, just because they need a poster boy for their product, which isn't as good as they say it is.
What people don't seem to realize is that like you pointed out there's a demand for the previous "developer relations" type of job though, and that job kind of evolved through LLM agents into something like an influencer(?) type position.
If I would take a look at influencers and what they're able to build, it's not that hardcore optimized and secured and tested program codebase, they don't have the time to acquire and hone those skills. They are the types who build little programs and little solutions for everyday use cases that other people "get inspired with".
You could argue that this is something like a teacher role, and something like the remaining social component of the human to human interface that isn't automated yet. Well, at least not until the first generation of humans grew up with robotic nannies. Then it's a different, lower threshold of acceptance.
For a programmer, that's based on them "being a 10x programmer who excels at hackerrank".
For manager types it might be "Creativity, drive, vision, whatever".
>Code is a means to an end
For a business in general.
When hiring developers, code IS the end.
It may look like that, but many of the products with bad code didn't even make it into your vibe statistics because they weren't around for long enough.
I'm not sure how this follows logically from the comment you are replying to, which states:
> We have someone who vibe coded software with major security vulnerabilities.
10x programmers aren't the ones grinding hacker-rank.
Neither are the programmers like me who actually focus on building good systems under any significant threat.
And Facebook's codebase is pretty decent for the most part; you'd probably be shocked. The benefits of moving fast and breaking things include making developer experience a priority. That's why they made Hacklang to get off PHP, and why they made React and helped make Prettier.
The code’s value is measured in its usefulness to control and extend the Facebook system. Without the system, the code is worthless. On the flip side, the system’s value is also tied to its ability to change… which is easier to do if the code is well organized, verified, and testable.
I don't think it's a good thing that the craft of software engineering is so easily devalued this way. We can quite demonstrably show that AI is not even close to replacing people in this respect.
Am I speaking out of envy or jealousy? Maybe. But I find it disappointing that we have yet more perverse incentives to hyper-accelerate delivery and externalise the consequences on to the users. It's a very unserious place to be.
Also, has anybody looked through the Openclaw source? Maybe it’s not so bad
> "Google: 90% of our engineers use the software you wrote (Homebrew), but you can’t invert a binary tree on a whiteboard so fuck off."
Competitive coding is oversold in this generation. You can log in to most of these sites and see thousands of solutions submitted for each problem. There is little incentive to reward solving a problem that a thousand other people have already solved.
To that end, it's also an intellectual equivalent of video game addiction. There is an illusion that you are engaged in an extremely valuable and productive enterprise, but if you observe carefully, nothing much productive actually gets done.
Not long ago, excessive chess obsession had similar problems: people spending whole days doing things that make them feel special and intelligent, while to any observer at a distance it's fairly obvious they are wasting time and getting nothing done.
The “Facebook/YouTube codebases are a mess so code quality doesn’t matter” line is also misleading. Those companies absolutely hire—and pay very well—engineers who obsess over security, performance, and algorithmic efficiency, because at that scale engineering quality directly translates to uptime, cost, and trust.
Yes, the visible product layers move fast and can look messy. But underneath are extremely disciplined infrastructure, security, and reliability teams. You don’t run global systems on vibe-coded foundations. People who genuinely believe correctness and efficiency don’t matter wouldn’t last long in the parts of those organizations that actually keep the lights on.
I think Airbus is riding the coattails of solid engineering done in the 80s, continuing to iterate on that platform, versus Boeing trying to iterate on a hardware platform from the 60s. Airbus benefited significantly from 20 years of engineering and technological progress. Since the original design of the A320, changes have been incremental: slightly different engines, the addition of GPS/GNS, CPDLC, CRT-to-LCD screens. Meanwhile, Boeing has attempted to take a steam-gauge design from the 60s and retrofit decades of technology improvements; critically, they attempted to add engines that significantly altered the aerodynamics of the aircraft.
Which is not the case. It's just a useless product, without any real use case, which also introduces large security bugs in your system.
Huh, if you make finished products you better start your own company.
Tell that to the guy that made brew and tried to interview at Google
Product is a means to an end.
Being good at something is a means to an end.
That end? Barter for food and shelter, medicine.
The means to do so, code or delivery of a product, are eventually all depreciated and thrown away. You eventually age into uselessness and die.
Suddenly having an epiphany that it's not about code but product, way too late in the game, HN... you're just trying to look like you've got it figured out and bring deep fucking value to humanity right as "idea to product, without an intermediary code layer" is about to ship[1]. You already missed your window.
You still don't get the change that's needed and happening due to automation; few of us want to put you on their shoulders and sing songs about you all.
Hop off the hedonic treadmill and get some help.
[1] am working on idea to binary at day job, which will flood the market with options and drown yours out
I don't object to most of what you're saying, but I take issue with this part.
This happens to be an area where lapse or neglect can be taken as a moral failure. And here you are mocking people who are concerned about it.
If someone uses AI to architect a bridge and the bridge collapses, you couldn't say that the structural integrity of the bridge wasn't the important part.
Ah, right. Write "Brew", which gets used by thousands of devs at Google every day, and then get rejected in an interview.
All of this is true and none of it is new. If your primary goal is to make lots of money then yes you should do exactly that. If you want to be a craftsman then you'll have to accept a more modest fortune and stop looking at the relative handful of growth hacker exits.
Peter was right about a lot of the nuances of coding agents and ways to build software over the last 9 months before it was obvious.
This was a short-term gain for a long term loss.
I remember in the web 3 era some team put together a CV in one page site, literally a site that you could put your linkedin, phone no and email on but pretty, bought for millions.
Was the product a success or the marketing? As the product was dead within weeks.
There's a lot of low hanging fruit in AI at the moment, you'll see a few more things like this happen.
Why? He's going to maintain it and the community is large enough. Another sci-fi idea that's slowly becoming real is that the project is maintaining itself.
OpenClaw is a bunch of projects that evolved together (vibetunnel, pi-mono, all the CLIs). It's even more interesting to see the next iterations, not only what happens to this project.
This is what US tech companies do to stay dominant. Buy and kill.
This project will die now; that's the point of buying him. For super cheap too, by the sound of it.
I mean, if I'm a company specifically in the business of selling to companies the idea that they can produce code without reading any of it? Yeah, obviously I'd hire them.
I like to think it’s the same as delegating implementation to someone else.
If you use LLMs and you do care about code quality, then great. But remember that the term vibe coding as it was coined originally meant blindly accepting coding agent suggestions without even reviewing the diffs.
Many of the people aggressively pushing AI use in code are doing so because they care more about delivering products quickly than they do about the software's performance, security, and long-term maintainability. This is why many of us are pushing back against the technology.
I can easily hire 100 sweatshop coders to fine-tune your code once I have a product that works, but the inverse will never happen.
I hate when people bring up the "luck" factor, as if you're the only one smart enough to realize it plays a huge role.
You want to make lots of money? Change your mindset, stop making excuses, and roll the dice. It won't guarantee success, but I also guarantee that nobody who did so would ever lament how unfair it was that they worked so hard while someone else succeeded through "luck", so they might as well not try.
do many people actually use openclaw (a two week old project IIUC) or is it just hyped up?
Also he made a few other products, some of which were used probably by more than a billion people.
The percentage of total tokens being handled by Openrouter is a tiny blip
This is the real danger of social media and other platforms. I know teachers in the school system; way too many kids want to grow up to be influencers and YouTubers, and try to act like them too.
At the risk of sounding like an old man yelling at the sky, this is not good for society. Key resources and infrastructure in our society are not built on viral code or YouTubers, but on the slow grind of engineering and economic development. What happens when everyone is desperately seeking attention to go viral? And I don't blame the kids; influencers by nature show a very exciting and lavish lifestyle.
That's a very superficial similarity. It's one thing for a kid to wish to be popular in their extended social circle, and a very different thing for a young adult to be convinced that they can "grind" their way to influencer fame and money.
The young adult may never have heard of or considered the extreme survivorship bias in the stories of successful influencers.
Which YouTubers are we talking about here? Hobbyists? People chasing social clout? People who like making stuff and sharing it? People pushing negative social attitudes? Context matters.
I’m talking about young adults not preparing for their future because they think they are going to become millionaires on YouTube, they focus on what is essentially a culture of grifting, with a success rate similar to winning a lottery.
I'm not sure what has to change, but the current state of things is not healthy.
If you want to make a zillion a year ask Claude to search for whatever Zuckerberg is blowing a billion on this quarter.
All of those companies are certain to exist in 12 months. Altman is flying to Dubai like every other week trying to close a hundred billion dollar gap by July with a 3rd place product and a gutted, demoralized senior staff.
Why this insinuation? The project went massively viral and was even covered in my local newspaper. I don't see any reason to doubt those numbers.
Peter is not just a random "vibe coder" and he does not need to be hired by OpenAI to achieve "success". Before this he founded and sold a company that raised €100M. It is not his first project in the space either (see VibeTunnel for instance).
OpenAI is not hiring him for his code quality. They are hiring him because he proved consistently that he had a vision in the space.
I applaud what he's done, and wish him luck trying to get this working safely at scale, but the idea that he's some visionary that has seen something the rest of the world hasn't is ludicrous.
OpenAI bought marketing and now someone else cannot buy openclaw and lock out Openai revenue from a project that is gaining momentum.
There are many of these business moves that seem like nonsense.
1. Bought for marketing.
2. Adversarial hire, i.e. hire highly skilled people before your competitors can, even if you don't have anything for them to do. Yet...
3. Acqui-hire. Buy a company when you really just want some of the staff.
4. Buy Customers. You don't care about the product and intend to migrate their customers to your system.
5. Buy competition before its a threat.
Just saying what you want might be the future for development of some kinds of software, but this use case sure seems like a very bad idea.
I very much appreciate the vision he put into practice, but feel sorry for the project being acquihired kind of.
https://www.youtube.com/watch?v=oeqPrUmVz-o&t=6
It doesn't? You'd need to know the odds for the tell. Like how many incompetent grifters are there, how many of them become hugely successful?
And that's more or less all he did: had an idea, built a prototype, showed it to the world, and talked about it. He even inspired people who are now saying "I could have done that". Well, do it, but don't just copy. Improve the idea and create something better. And then share it very early. You might get lucky.
> vibecoded without reading any of the code
Remember when years ago people said using AI for critical tasks is not an issue because there is always a human in the loop? Turns out this was all a lie. We have become slaves to the machine.
It's about what he created, not what he didn't create.
They're not acquiring the product he built, they're acquiring the product vision.
Kids and young people have known this forever at this point. Sadly.
So after almost two decades of hard work, it is not really fair to say he just vibe-coded his way into OpenAI.
Semantics and grammar jokes aside, there are not many workers remembered in history. Only the so-called absolute greatest, meanest, etc. are remembered. Nobody remembers the people who worked on the pyramids, but everyone knows some Pharaoh.
In this case they hired someone who has 'mastered' the use of their own tool(s). Like if Home Depot hired a guy who has almost perfect knowledge of each and every tool in their own portfolio.
I'm not really sure if i want to be that guy.
I am saddened that the top post is about jealousy. Do so many people feel this way? Jealousy is something that, when we feel it, we should reflect on privately and work through, because it is an emotion that leads people to write criticism like this, biased by their emotional state.
What matters is the result, not how hard you worked at it. Schools and universities have been teaching this for a long time, that what matters is just a grade, the result.
>We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
And we have a company whose product should adhere to the highest security standards possible, hiring this guy.
Well duh. I thought that was well understood.
The other option is having well-to-do parents a la Musk or Gates.
Have you tried that?
I'm pretty sure that's meant to be the general lesson of the last 20 years or so in Silicon Valley, but it's just survivorship bias in action.
You don't hear a whole lot about the quietly successful engineers who work a 9-till-5 and then go home to see their wife and kids. But you do constantly hear about the folks who made it big YOLO'ing their career and finances on the latest a startup/memecoin/vibecoded app...
The product being useful and well received by users and the market is still the ultimate test. Whether something is vibe coded or not doesn't matter.
If AI companies believe code generates itself, then the only people worth hiring are the ones who scale up sales.
This isn't right. He says very clearly in the recent Lex Fridman podcast that he looks at critical code (e.g., database code). He said he doesn't look at things like Tailwind code.
Also, Peter is quite well known in dev circles, especially in mobile development communities, for his work on PSPDFKit. It's not like he's some unknown developer who just blew up; he ran a dev tooling company for over 10 years, contributed a lot to the community, and is a great dev.
People seem to think that because we all have the same tools and because they’re increasingly agentic, that the person wielding the tool has become less relevant, or that the code itself has become less relevant.
That is just not the case, at least yet, and Peter is applying a decade plus of entrepreneurial and engineering experience.
I agree that summarizing Peter as a "vibe coder" is unfair and disingenuous. The podcast paints his career as being interesting because we went from an impressive software developer, to an entrepreneur, to taking a significant break, to kind of obsessively creating Clawdbot.
Worth a listen https://newsletter.pragmaticengineer.com/p/the-creator-of-cl...
Isn't this the actual definition of vibe coding?
I think the whole OpenClaw arc has been fun to follow, but this sudden turn away from OpenClaw and toward the author as a new micro-celebrity that ended with OpenClaw being sidelined to a foundation was not what I saw coming.
Congrats to Pete for getting such an amazing job out of this, but it does feel strange that only a few days ago he was doing the podcast circuit and telling interviewers he has no interest in joining AI labs.
I don’t think this story arc should be seen as something replicable. Many have been trying to do the same thing lately: Hyping their software across social media and even podcasts while trying to turn it into cash. Steve Yegge is the example that comes to mind with his desperate attempts to scare developers into using his Gas Town (telling devs “dude you’re going to get fired” if they don’t start using his orchestration thing). The best he got out of it was a $300K crypto pump and dump scam and a rapidly dropping reputation as a result.
Individuals who start popular movements have always been targets for hiring at energetic companies. In the past the situation has been reversed, though: Remember when the homebrew creator was rejected from Google because he didn’t pass the coding interview? (Note he later acquiesced to say that Google made the right call at the time). That time, the internet was outraged that he was not hired, even though that would have likely meant the end of homebrew.
I do think we’ll be seeing a lot of copycat attempts and associated spam promoting them (here on HN, too, sadly) much like how when people see someone get success on YouTube or TikTok you see thousands of copycats pop up that go nowhere. The people who try to copycat their way into this type of success are going to discover that it’s not as easy as it looks.
It's the same in trading: you can make stupid bets and make a lot of money; that doesn't mean you're a good trader.
Nothing to conclude from this, this kind of hype-fueled outcome has always been a part of life.
> We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
Peter was pretty open about all of this. He doesn't hide the fact. It was a personal hack that took off and went viral.
> We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
My guess, from his unwillingness to take the free pile of cash from the bags.fm grift, is that this in unlikely. I don't know that I would've been able to make the same decision.
> Then after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any if the code that they've developed? Well, this is what happened here.
Yes, I'd hire him. He's imaginative and productive and ships and documents things. I can fix the code auditing problem.
> In this timeline, I'm not sure I find anything inspiring here.
Okay?
> It's telling me that I should rather focus on getting viral/lucky to get a shot at "success".
Peter has been in the trenches for years and years, shipped and sold. He's written and released many useful tools over the years. Again, this was a project of personal love that went viral. This is not an "overnight success" situation.
> So am I jealous, yes because this timeline makes no sense as a software engineer. But am I happy for the guy, yeah I also want to make lots of money someday.
Write and release many, many useful tools. Form a community and share what you're building and your chances will greatly increase?
He built something and shared it.
People took liberties with it.
It's not about getting viral/lucky... it's about enjoying experimenting and learning.
Money follows your unique impact and imprint in these kinds of cases.
1. Are you already rich? Do you have cash in the bank to vibecode a project fulltime for many months just for fun?
2. Do you have Sam Altman's (or similar) number?
Pete, more than anyone in the OSS community IMO, exemplifies both of these qualities. He is living very much on the bleeding edge, so yes, the 10s of projects he's shipped faster than most devs can ship 1 are not as polished as if he'd created them by hand. But he's been pushing the envelope in ways that few, if any, are, and I'd argue that OpenClaw is much more the result of Pete living on that edge and understanding the trade-offs of these tools better than just about anyone.
Personally, I'm much more jealous of the fact that Pete has already had a successful exit under his belt and had the freedom to explore & learn these tools to the fullest. There is definitely a degree of luck involved with the degree to which OpenClaw took off, but that Pete discovered it is 100% earned IMO.
He distinguished between what he calls "Agentic Engineering" and "vibe coding", and claimed that the majority of the time he is not just vibe coding.
He has 80,000+ GitHub contributions in a year across 50+ projects. I’m not sure how he averages 200 commits per day by just looking at diffs from a terminal, but it’s just Superhuman - https://github.com/steipete
The field has kind of been like this for a while - people with portfolios of proven work done, showcasing yourself and your personality via blogs or vlogs makes you sort of a known quantity, versus someone with just a CV and a LinkedIn page.
This is yet another example of an area where extroverts have an advantage. You could be 10x the engineer that the creator of OpenClaw is, but that's irrelevant in this timeline if nobody has ever heard of you.
But that path was never about writing good code.
Well, here you have it: a low-effort job of wiring a few tools together with spaghetti gen-AI code, and he's a millionaire in a few months. OK, I might be mean by saying no effort; I actually don't know. But I know vibe coding won't work for more than a few weeks. Also, I think this bot is just a connector to multiple open-source libraries that connect to WhatsApp and other services.
This is the best ad to sell AI: you too can be a millionaire if you use our ChatGPT to vibe code stuff.
I think it will get a negative reaction in a few weeks when the dust settles as technical people realize it’s an ad.
Note: he might be an amazing developer but the ad still stands.
Edit: from Gemini: Publicly Embarrassing Anthropic: The timing is brutal. Anthropic’s legal team forcing a name change (from "Clawdbot" to "Moltbot" to "OpenClaw") alienated the very developer who was driving millions of users to their model. OpenAI swooping in to hire him days later frames Anthropic as "corporate lawyers" and OpenAI as "friends of the builders." It’s a perfect narrative victory.
I have a feeling that OpenAI and Anthropic both use AI to code a lot more than we think; we definitely know and hear about it at Anthropic, and I haven't heard as much about OpenAI, but it would not surprise me. I think you 100% can "vibe code" correctly. I would argue that, with the hours you save coding and debugging by hand, you should 100% read the code the AI generates. It takes little effort to have the model rewrite it to be easier for humans to read. The whole "we will rewrite it later" mentality that never comes to pass is actually possible with AI; it's one prompt away.
Then the human touch points become coming up with what to build, reviewing the eng plans of the AI, and increasingly light code review of the underlying code, focusing on overall architectural decisions and only occasionally intervening to clean things up (again with AI)
https://github.com/Giancarlos/guardrails
I didn't like how married to git hooks Beads was, so I made my own that's primarily a SQLite workhorse. I've been using it the same way I used Beads; it works just as well, with drastically less code. I added a concept called "gates" to stop the AI model from closing tasks without any testing or validation (human or otherwise), because that was another pain point for me with Beads.
I fully sync the issues to GitHub to boot.
https://github.com/Giancarlos/guardrails/issues
It works both ways: to GitHub and from GitHub. When you claim a task, it's supposed to update on GitHub too (though looking at the last one I claimed, it doesn't seem to be 100% foolproof yet).
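The "gates" idea described above can be sketched in a few lines of SQLite: a task refuses to close while any gate attached to it is still unpassed. This is a hypothetical illustration; the table and column names are invented and the real guardrails project surely differs.

```python
import sqlite3

# Minimal sketch of a gated task tracker: closing a task is blocked
# until every gate attached to it has been marked as passed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT,
                    status TEXT DEFAULT 'open');
CREATE TABLE gates (task_id INTEGER, name TEXT, passed INTEGER DEFAULT 0);
""")

def close_task(task_id):
    # Count gates that have not passed yet; refuse to close if any remain.
    unpassed = conn.execute(
        "SELECT COUNT(*) FROM gates WHERE task_id = ? AND passed = 0",
        (task_id,),
    ).fetchone()[0]
    if unpassed:
        raise RuntimeError(f"{unpassed} gate(s) still unpassed")
    conn.execute("UPDATE tasks SET status = 'closed' WHERE id = ?", (task_id,))

conn.execute("INSERT INTO tasks (id, title) VALUES (1, 'ship feature')")
conn.execute("INSERT INTO gates (task_id, name) VALUES (1, 'tests-pass')")

try:
    close_task(1)  # blocked: the 'tests-pass' gate hasn't passed
except RuntimeError as e:
    print("blocked:", e)

conn.execute("UPDATE gates SET passed = 1 WHERE task_id = 1")
close_task(1)      # now allowed
```

The point of the pattern is that the check lives in the data layer, so an agent hammering the "close" command can't skip validation.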
Hiring in tech has been broken for many, many years at this point. There's so much noise, and only more coming now with AI. To be completely honest, it's entirely random from my end when hiring. We can't review every application that comes in; it's just impossible. We do weed out some of the spam, of course, and do get to real people who actually fit the requirements, but there are so many other talented people who would easily fit the role who simply get buried under applications. It's depressing from all sides. No one should think that they aren't any good, or did something wrong, or didn't network enough... because the unfortunate truth is that getting a job in tech is a lottery. Something many don't want to admit.
Funny that you mention 'real people'. There are a number of components that sit at the core of what I'm building; it should allow you to have the time and reach to vet more (100% verified) candidates than you ever could before. I also want to reduce the explicit costs of hiring so that firms can hire more people.
Simple as that. Don't feel jealous, trying to replicate won't work, he did not know he'd be hired, he built something that he found interesting, and then realized it would be interesting to a lot more people.
The way to reach success is either to be strategically consistent in a way that maximizes your luck surface area but does not depend on it, or to be unexpectedly lucky. The latter is gambling. People win the lottery regularly; that doesn't mean you should make it your mission.
Be comfortable with not being the one to hit gold. And yes, it's ok to be jealous. Take a moment and then go back and enjoy the rest of your life.
Finally, there are a lot of companies that would likely hire you, hoping to hit gold. But you are likely filtering them out because they're not tech/large/startupy enough for you. These companies are wondering what they need to attract talent like you.
Bringing unblockable ads to the masses. Roger that.
- a heartbeat, so it was able to 'think'/work throughout the day, even if you weren't interacting with it
- a clever and simple way to retain 'memory' across sessions (though maybe Claude Code has this now)
- a 'soul' text file, which isn't necessarily innovative in itself, but the ability for the agent to edit its own configuration on the fly is pretty neat
Oh, and it's open source
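The heartbeat-plus-soul-file combination above can be sketched as a tiny loop: wake on a timer, re-read the editable "soul" file, and run one think/work cycle even with no user interaction. All names here are invented for illustration; OpenClaw's real implementation differs.

```python
import time
from pathlib import Path

def heartbeat(run_cycle, soul_path, beats=3, interval_s=0.0):
    """Run `beats` work cycles, passing the current soul text to each.

    Because the soul file is re-read on every beat, a cycle that edits
    the file changes the agent's own configuration for the next beat.
    """
    soul = Path(soul_path)
    outputs = []
    for i in range(beats):
        persona = soul.read_text() if soul.exists() else ""
        outputs.append(run_cycle(i, persona))  # run_cycle may rewrite the soul file
        time.sleep(interval_s)
    return outputs
```

In a real deployment `run_cycle` would be an LLM call and `interval_s` would be minutes, not zero; the sketch only shows why "editable config + periodic wake-up" gives the agent a kind of self-directed continuity between user messages.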
As for the 'soul' file, Claude does have CLAUDE.md and skills.md files that it can edit with config changes.
One thing I'm curious about is whether there was significant innovation around tools for interacting with websites/apps. From their wiki, they call out like 10 apps (whatsapp, teams, etc...) that openclaw can integrate with, so IDK if it just made interacting with those apps easier? Having agents use websites is notoriously a shitty experience right now.
A smaller difference would be that you can use any/all models with OpenClaw.
Using a claude code instance through a phone app is certainly not something that is easy to do, so if there's like a phone app that makes that easy, I can see that being a big differentiator.
I think this is a pretty cool example: https://github.com/mcintyre94/wisp
This is using Claude on VMs that don’t have SSH, so can’t use a regular terminal emulator. They stream responses to commands over websockets, which works perfectly with Claude’s streaming. They can run an interactive console session, but instead I built a chat UI around this non-interactive mode.
You can see how I build the Claude command here: https://github.com/mcintyre94/wisp/blob/main/Wisp/ViewModels...
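The setup described above, running a CLI in non-interactive mode and forwarding its streamed output, boils down to reading newline-delimited JSON events from a subprocess and pushing each one to a sink (in wisp, a websocket; here, a plain callback). This is a hedged sketch, not wisp's actual code: with Claude Code the command would plausibly be something like `["claude", "-p", prompt, "--output-format", "stream-json", "--verbose"]`, but the exact flags are an assumption, so the command is left as a parameter.

```python
import json
import subprocess

def stream_events(cmd, send):
    """Run `cmd`, parse each stdout line as a JSON event, forward via send()."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    assert proc.stdout is not None
    for line in proc.stdout:
        line = line.strip()
        if line:                    # ignore blank keep-alive lines
            send(json.loads(line))  # one JSON event per line, sent as it arrives
    return proc.wait()
```

Usage is just `stream_events(cmd, websocket.send)`; because events are forwarded line by line rather than after `proc.wait()`, the chat UI can render tokens as they stream.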
For me personally, I can't stand interacting with agents via a CLI and fixed-width fonts, so I built an e2e-encrypted remote interface with a lot of the nice UI features you'd expect from a first-class UI like the Claude VS Code extension (syntax highlighting, streaming, etc.). You can self-host it. It's a little no-dependencies Node server that you can just npm install (npm i -g yepanywhere).
https://github.com/kzahel/yepanywhere
It just uses Claude. I haven't tried it much, but it seems to be what you're describing.
Openclaw uses pi agent under the hood. Arguably, most of the codebase could be replaced by systemd for scheduling if you're running on a VPS, and then it's a series of prompts on top of pi agent.
In the past, people wanting to sign a juicy contract at a FAANG were told to spend hours everyday on Leetcode.
Now? Just spend tokens until you build something that gets enough traction to be seen by one of the big labs!
Gaining traction is the tough part.
It is in OAI's best interests to create a perception that flinging agentic swarm crap at the wall may result in lucrative job offers. Or to otherwise imply this is the golden path. They need their customers to consume ever more tokens per unit time. This highly contentious parallel agent swarm stuff is the perfect recipe.
Plus, employees who can inject hyped ideas are exactly the sort of efficient advertising OpenAI relies on.
It will hurt when self-proclaimed coders realize, two years later with all their savings burned on tokens, that they can't all get meaningful traction.
How to mitigate this concern?
Prompt injection is a thing, and a lot of vibe coding, Gas Town, Ralph-loop enthusiasts are vehemently ignoring the risk believing they’re getting ahead.
I wouldn’t worry and just observe the guinea pigs doing their thing. Most of them will run around expending all their energy, some will get eaten by snakes, and you’ll be able to learn a lot, wait for the environment to mature, then spend your energy, instead.
It should run on its own VPS with full root access, given API keys that have spending limits, and given no way for strangers to talk to it. I treat it as a digital assistant, a separate entity who may at some point decide to sell me out, as any human stranger might, and I share personal info under that premise.
What I am missing is distribution. It seems impossible to get traction nowadays on social media, regardless how good your product is.
Any feedback is much appreciated.
I recall listening to one of the now-vintage series; I thought it was Joe Rogan himself. But it wasn't: the voice was a bit different, but it had the pauses, the reactions, the "waaah", with the overall tone of uncovering some secret truth.
It's a fascinating societal phenomenon, coupled with the American dream, yes.
In any case, those examples do no good by setting themselves up as models that millions become obsessed with replicating. No surprise the rate of depression keeps going up.
The question is whether OpenClaw will actually stay open in the world of 'Open'Ai.
openclaw is an inevitable type of software (as are CLI agents, context-management software, new methodologies of structuring software for easier AI ingestion, etc.). Guy saw the game, built it, got it.
At this point I would not expect well-rounded software as a byproduct of huge investments and PR stunts. There will be something else after LLMs; I bet people are already working on it. But the current state of affairs with LLMs, and all the fuss around them, is far more perception-, PR-, and emotion-driven than intrinsically valuable.
OpenAI is curating ChatGPT very well, which honestly I like. Other companies, maybe except Anthropic, are not "caring" that much.
I'm extremely curious what OpenAI's offer was. The utility of more money is diminished when you're already pretty wealthy.
It probably also makes him more attractive to OpenAI et al. - he's not just some guy who's going to have all sorts of risks earning a lot of money for the first time.
And every time I reach the same conclusion: it's a WhatsApp/Telegram/etc wrapper for LLMs.
Until next time!
The two sides:
* From his POV: He said he's not interested in doing "another company" after spending 13 years building a startup. I imagine there's another aspect too, which is that OpenClaw is not in itself an inherently revenue-generating product, nor is it IP-defensible. This was my situation: my viral hit could be (and soon was) replicated by many others. I had the advantage of being "the guy who invented that cool thing", but otherwise I would be starting from scratch. It was a mind-fuck having a huge hit on my hands from one day to the next, but with no obvious direction on how to capitalize on it.
* Then from the company's POV: despite hiring thousands and thousands of employees, only a tiny handful of them ever capture any "magic." You've got an army of product managers who have never actually built or conceived of a product people love, and engineers who usually propose ideas that are ok but probably not true gold. So now here we have a guy who did actually conjure up something magical that really resonated with people. Can he do it again? Unknown, but he's already proved himself in the ideas space more than most people, so it's worth a shot for the company.
I do not believe in predetermined roles. My version is to find the thing you’re excited to do and not the outcome you’re excited to have.
This can be said about a lot of successful projects, products, and companies. I’d argue that, by all means, do try to be like Peter. Try to tinker around and make something new the world has never seen before.
He made something that excited many people, and I don’t think it’s the correct take to consider this an anomaly. It’s someone who was already known in the development community trying something new and succeeding.
Yeah, these systems are going to get absolutely rocked by exploits. The scale of damage is going to be comical, and, well, that's where we are right now.
Go get 'em, tiger. It's a brave new world. But, as with my 10 year old, I need to make sure the credit cards aren't readily available. He'd just buy $1k of robux. Who knows what sort of havoc uncorked agentic systems could bring?
One of my systems accidentally observed some AWS keys last night. Yeah. I rotated them, just in case.
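For what it's worth, before rotating I now grep agent transcripts for anything that looks like a key. A minimal sketch (the function name is mine, not from any tool mentioned here), assuming the standard AWS access key ID format of "AKIA" followed by 16 uppercase alphanumerics:

```python
import re

# AWS access key IDs follow a well-known pattern: the literal prefix
# "AKIA" followed by 16 uppercase alphanumeric characters. Scanning
# agent output for that pattern is a cheap first check before a key
# leaks any further.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return any substrings that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)
```

It won't catch secret access keys (those are just 40 base64-ish characters, too noisy to match reliably), but flagging the key ID is usually enough to know which credential to rotate.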
Filling the team with people who come up with novel and interesting ways to grab attention that could possibly create vendor lock-in is probably the goal.
I'd been having conversations with ChatGPT about OpenClaw, nothing remarkable or extraordinary. Then I started a new conversation to talk about a different aspect, and GPT assumed I wanted to talk about some old PC game.
To disambiguate, I now had to refer to OpenClaw.ai. I asked it if it had some new system directive about this, and of course it denied it. Today we learn OpenAI has hired the OpenClaw developer, and he's "turning the project over to a foundation".
The more likely scenario is that he was hired for the amazing ability to move fast and break things.
Or just plain recklessness.
What he did is incredible. He grabbed the attention of the tech community like no other, however good or bad he was at running it and making it secure.
He was curious and experimental and got lucky!
For OpenAI it’s a smart move: snatch up the creator of a viral hit and ride on its coattails for the hype.
Join us on the waitlist, mate. In the meantime, maybe we could form a whole community and try to make our own start-ups.
Meaning, these products are being created by representatives of the kind of people who carry the most privilege and are least exposed to the impact of negative decisions.
For example, Twitter did not start sanitizing location data in photos until women joined the team and indicated that such data can be used for stalking.
White rich bros do not get stalked. This problem does not exist in their universe.
This has been travelling across a bunch of platforms, but the niche-ish ones have fairly low engagement numbers in the grand scheme of things.
Hello, fellow humans