Microsoft Corp. will no longer pay revenue to OpenAI and said its partnership with the leading artificial intelligence firm will not be exclusive going forward.
What does this mean that Microsoft will no longer pay revenue to OpenAI? How did the original deal work?
Handy-Man•Apr 27, 2026
They were paying them 20% of the revenue from the hosted OpenAI products I believe?
bilbo0s•Apr 27, 2026
Does this mean they will host OpenAI products but not pay them? Or does it mean they are paying them in some other way?
Handy-Man•Apr 27, 2026
I suppose they'll continue to host until the 2030/32 window they have access to, but not share revenue when they use those models for their own products, like the bazillions of Copilots.
HarHarVeryFunny•Apr 27, 2026
It seems that the old deal was exclusivity to MSFT with revenue share, and now no exclusivity, no revenue share.
Bear in mind that MSFT have rights to OpenAI IP (as well as owning ~30% of them). The only reason they were giving revenue share was in return for exclusivity.
borski•Apr 27, 2026
This is a really common way to structure exclusivity; we did the same thing whenever customers requested it (and we couldn’t get rid of it entirely). Charge for the exclusivity explicitly.
If they wanted named exclusivity rather than general exclusivity, we would charge a somewhat smaller amount for each competitor they wanted exclusivity from. They could give up exclusivity at any time.
That was precisely how we structured our deal with Azure, back in 2014-2016 or so.
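The structure described above could be sketched roughly like this (all prices, names, and the function itself are hypothetical, purely for illustration — none of this comes from any real deal):

```python
# Hypothetical sketch of pricing exclusivity as an explicit line item.
BASE_PRICE = 100_000               # annual contract price, no exclusivity
GENERAL_EXCLUSIVITY_FEE = 50_000   # flat fee: exclusivity against everyone
PER_COMPETITOR_FEE = 10_000        # smaller fee per named competitor

def annual_price(named_competitors=(), general_exclusivity=False):
    """Exclusivity is a separate, explicit charge the customer
    can drop at any time to fall back to BASE_PRICE."""
    price = BASE_PRICE
    if general_exclusivity:
        price += GENERAL_EXCLUSIVITY_FEE
    else:
        price += PER_COMPETITOR_FEE * len(named_competitors)
    return price

print(annual_price())                                  # 100000
print(annual_price(general_exclusivity=True))          # 150000
print(annual_price(named_competitors=("AWS", "GCP")))  # 120000
```

The point is the same as in the comment: the price of exclusivity is visible on the invoice, so giving it up is a simple line-item removal rather than a renegotiation.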
deaux•Apr 27, 2026
Azure was the only non-OpenAI provider that was allowed to provide OpenAI models. The comparison here is with Anthropic whose models are on both GCP and AWS (and technically also Azure though I think that might just be billing passthrough to Anthropic).
justinclift•Apr 27, 2026
Wonder if this means Microsoft is actually going to be deploying Claude Code internally?
That might help fix some of the bugs in Teams... :)
Tried to delete this submission in favor of it, but too late.
aurareturn•Apr 27, 2026
The original "AGI" agreement was always a bit suspect and open to wild interpretations.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic can work with anyone they like but OpenAI couldn't.
Handy-Man•Apr 27, 2026
It also restricted Microsoft from "partnering" with anyone else. Wouldn't be surprised if we see more news like Amazon and Alphabet investing in Anthropic.
aurareturn•Apr 27, 2026
I don't think Microsoft ever had that restriction. They partnered with everyone already.
AFAICT they are just hedging their bets left and right still. Also feels like they are winning in the sense that despite pretty much all those products being roughly equivalent... they are still running on their cloud, Azure. So even though they seem unable to capture IP anymore, they are still managing to get paid for managing the infrastructure.
delecti•Apr 27, 2026
Are they getting paid in actual money? Or are the AI companies "paying" their infrastructure bills with IOUs/equity?
Handy-Man•Apr 27, 2026
Yeah, my bad, I was misremembering: it was about investing in others and pursuing its own "AGI" efforts. But even those conditions were updated over the last two years, hence the small investment in Anthropic last year.
I think it was a lot less restrictive; as far as I understood, the only limit was Microsoft not being allowed to launch competing Microsoft-developed LLMs.
aliljet•Apr 27, 2026
Why is this being made public?
brookst•Apr 27, 2026
It’s an agreement between a public company and a highly scrutinized private company. Several of the provisions will change what happens in the marketplace, which everyone will see.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).
Schlagbohrer•Apr 27, 2026
Also it's about OpenAI going public.
ZeroCool2u•Apr 27, 2026
Interesting side effect of this is that Google Cloud may now be the only hyperscaler that can resell all 3 of the labs' models? Maybe I'm misinterpreting this, but that would be a notable development, and I don't see why Google would allow Gemini to be resold through any of the other cloud providers.
Might really increase the utility of those GCP credits.
aurareturn•Apr 27, 2026
Might not be good for Gemini long term if Anthropic and OpenAI can and will sell in every cloud provider they can find but businesses can only use Gemini via Google Cloud.
jfoster•Apr 27, 2026
Good for Google Cloud, bad for Gemini = ??? for Google
stavros•Apr 27, 2026
How is it good for Gemini that it's not available on two out of three major cloud platforms?
aurareturn•Apr 27, 2026
It isn't. That's why I said "might not be good for Gemini".
stavros•Apr 27, 2026
Oof, I completely missed that "not", thanks.
retinaros•Apr 27, 2026
that will likely mean the end of gemini models...
_jab•Apr 27, 2026
This agreement feels so friendly towards OpenAI that it's not obvious to me why Microsoft accepted this. I guess Microsoft just realized that the previous agreement was kneecapping OpenAI so much that the investment was at risk, especially with serious competition now coming from Anthropic?
dinosor•Apr 27, 2026
> Microsoft will no longer pay a revenue share to OpenAI.
I feel this is a nice thing to have, given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this can't end up as a money-printing press, as long as OpenAI keeps delivering good models.
JumpCrisscross•Apr 27, 2026
OpenAI was also threatening to accuse "Microsoft of anticompetitive behavior during their partnership," an "effort [which] could involve seeking federal regulatory review of the terms of the contract for potential violations of antitrust law, as well as a public campaign" [1].
Does this mean Microsoft gets OpenAI's models for "free" without having to pay them a dime until 2032?
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
dzonga•Apr 27, 2026
They own 27%, but are entitled to 49% of OpenAI's profits in perpetuity (if OpenAI becomes profitable, or the government steps in).
lokar•Apr 27, 2026
Does anyone expect azure quality to improve? Has it improved at all in the last 3 years? Does leadership at MS think it needs to improve?
I doubt it
jakeydus•Apr 27, 2026
Don’t worry I’m sure there’s a few products without copilot integration still. They’ll get to them before too long.
gchamonlive•Apr 27, 2026
No, and at this point tying yourself to Azure is a strategic liability; anyone making such a decision should be held responsible for any service outage or degradation.
dkrich•Apr 27, 2026
Probably more that they are compute constrained. In his latest post, Ben Thompson talks about how Microsoft had to use their own infrastructure and displace outside users in the process, so this is probably to free up compute.
DanielHB•Apr 27, 2026
Microsoft is a major shareholder of OpenAI, they don't want their investment to go to 0. You don't just take a loss on a multiple-digit billion investment.
snowwrestler•Apr 27, 2026
I think you’re right about this deal. But it’s kind of funny to think back and realize that Microsoft actually has just written off multi-billion-dollar deals, several times in fact.
guluarte•Apr 27, 2026
I think MS wants OpenAI to fail so it can absorb it
Oras•Apr 27, 2026
MS put 10B for 50% if I remember correctly. OpenAI is worth many multiples of that.
bmitc•Apr 27, 2026
> OpenAI is worth many multiples of that.
How?
airstrike•Apr 27, 2026
"Advancing Our Amazing Bet" type post
31276•Apr 27, 2026
Pursue "new opportunities"? Microslop is dumping OpenAI and wishes it well in its new endeavors.
iewj•Apr 27, 2026
In retrospect all those OAI announcements are gonna look so cringe.
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
JumpCrisscross•Apr 27, 2026
> They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
eieiw•Apr 27, 2026
That’s funny.
Why is it Altman is facing kill shots and Dario isn’t?
JumpCrisscross•Apr 27, 2026
> Why is it Altman is facing kill shots and Dario isn’t?
Altman peaked in the zeitgeist in 2023; Dario, much less prominently, in 2024 and now '26 [1]. I'd guess around this time next year, Dario will be as hated as Altman is today.
I read this as the other way. OpenAI was desperate to dump Microsoft.
JumpCrisscross•Apr 27, 2026
> OpenAI was desperate to dump Microsoft
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
Kagi Translate was kind enough to turn this from LinkedIn Speak to English:
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
azinman2•Apr 27, 2026
This was actually really helpful. I feel like it should be done for all PR speak.
JumpCrisscross•Apr 27, 2026
It's better than the original, but still off.
"The Microsoft and OpenAI situation just got messy" is objectively wrong–it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense–it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
Being objectively correct isn't the goal of the translator, the translator can't possibly know if a statement is truthful. What the translator does is well... translate, specifically from some kind of corporate speak that is really difficult for many people including myself to understand, into something more familiar.
I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
ghostly_s•Apr 27, 2026
> The only pure fuck-up I'd call out is switching from third to first person when referring to OpenAI in the same sentence (No. 4).
"We" in this sentence refers to both parties; "they" refers to OpenAI. Not a grammatical error.
JumpCrisscross•Apr 27, 2026
> "We" in this sentence refers to both parties
Fair enough.
> "they" refers to OpenAI. Not a grammatical error
I'd say it is. It's a press release from OpenAI. The rest of the release uses the third-person "they" to refer to Microsoft. The LLM traded accuracy for a bad joke, which is something I associate with LinkedIn speak.
The fundamental problem might be that the OpenAI press release is vague. (And changing. It's changed at least once since I first commented.)
auscompgeek•Apr 27, 2026
In isolation sure. But in context with the other points it makes it look like "they" refers to Microsoft in all the dot points.
Biggest upside of this is I expect OpenAI models to be available on Bedrock, which is huge for not having to go back to all your customers with data protection agreements.
easton•Apr 27, 2026
Isn’t that an “API product”? I read this assuming the whole point of renegotiation was to let OpenAI sell raw inference via bedrock, but that still seems to be blocked except for selling to the US Government.
fengkx•Apr 27, 2026
> OpenAI can now jointly develop some products with third parties. API products developed with third parties will be exclusive to Azure. Non-API products may be served on any cloud provider.
This seems impossible.
jryio•Apr 27, 2026
> OpenAI has contracted to purchase an incremental $250B of Azure services, and Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider.
Azure is effectively OpenAI's personal compute cluster at this scale.
JumpCrisscross•Apr 27, 2026
What fraction of Azure compute does OpenAI represent? (Does the $250bn commitment have a time period? Is it legally binding?)
runako•Apr 27, 2026
Azure did $75B last quarter.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
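The arithmetic behind that can be sketched in a few lines (figures are the ones quoted in this thread; the 10-year term is an assumption, not a disclosed deal term):

```python
# Back-of-envelope: what share of Azure would OpenAI's commitment be?
azure_quarterly_revenue_bn = 75                             # figure quoted above
azure_annual_run_rate_bn = azure_quarterly_revenue_bn * 4   # ~300/year

openai_commitment_bn = 250
assumed_term_years = 10                                     # placeholder assumption
openai_annual_spend_bn = openai_commitment_bn / assumed_term_years  # 25.0

share = openai_annual_spend_bn / azure_annual_run_rate_bn
print(f"~{share:.0%} of Azure's current run rate")          # ~8%
```

Even if the spend were front-loaded at double that rate, it would still be well under a fifth of Azure's run rate, which supports the "large customer, not a personal cluster" read.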
einrealist•Apr 27, 2026
I wonder how this figure was settled on. Is it based on consumer pricing? Can't Microsoft and OpenAI just make a number up, aside from a minimum to cover operating costs? At what point is the number just a marketing ploy to make it seem huge, important, and inevitable (and too big to fail)?
m3kw9•Apr 27, 2026
Looks like MS is shafting OpenAI.
delis-thumbs-7e•Apr 27, 2026
It’s insane how they talk about AGI, like it was some scientifically qualifiable thing that is certain to happen any time now. When I become the javelin Olympic Champion, I will buy a vegan ice cream for everyone with an HN account.
hx8•Apr 27, 2026
Do the investments make sense if AGI is not less than 10 years away?
JumpCrisscross•Apr 27, 2026
> Do the investments make sense if AGI is not less than 10 years away?
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean that in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
iewj•Apr 27, 2026
What kind of ludicrous statement is this? Any monopoly with viable economics for profit with no threat of competition yields monopoly profits…
JumpCrisscross•Apr 27, 2026
> Any monopoly with viable economics for profit with no threat of competition yields monopoly profits
"With viable economics" is the point.
My "ludicrous statement" is a back-of-the-envelope test for whether an industry is nonsense. For comparison, consolidating all of the Pets.com competitors in the late 1990s would not have yielded a profitable company.
eieiw•Apr 27, 2026
Very convenient to leave out Amazon in your back of the envelope test, whose internal metrics were showing a path toward quasi-monopoly profits.
Do you argue in good faith?
There’s a difference between being too early vs being nonsense.
JumpCrisscross•Apr 27, 2026
> Very convenient to leave out Amazon in your back of the envelope test, whose internal metrics were showing a path toward quasi-monopoly profits
Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point–the original model didn't work.
> There’s a difference between being too early vs being nonsense
When answering the question "do the investments make sense," not really. You're losing your money either way.
The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)
SkyEyedGreyWyrm•Apr 27, 2026
Malcolm Harris' Palo Alto attributed the failure of many dotcom startups, and Amazon's later success in the field, (in part) to labor: dotcom-era delivery was done by highly trained, highly compensated, unionized in-house workers, while Amazon prevents unions, contracts (or contracted; I'm not up to date on this) delivery out to other companies, and runs exploitative working conditions with high turnover. The economics are very different and are a big contributor to their success.
Maxatar•Apr 27, 2026
>"...viable economics for profit..."
OP did not include this requirement in their post because doing so would make the claim trivially true.
antupis•Apr 27, 2026
Thing is, distillation is so easy that keeping smaller competitors out would also require large-scale regulatory capture.
rapind•Apr 27, 2026
Best way to achieve AGI: Redefine AGI.
2ndorderthought•Apr 27, 2026
They already did that, and AI. That's how we got into this mess.
jrflo•Apr 27, 2026
The investments don't make sense.
theplatman•Apr 27, 2026
when i realized that sama isn't that much of an ai researcher, it became clearer that this is more akin to a group delusion for hype purposes than a real possibility
iewj•Apr 27, 2026
He’s a glorified portfolio manager (questionable how good he actually is given the results vs Anthropic and how quickly they closed the valuation gap with far less money invested) + expert hype man to raise money for risky projects.
lokar•Apr 27, 2026
From the reporting I’ve read his main attributes are being a sociopath with an amazing ability to manipulate people 1:1
sourraspberry•Apr 27, 2026
You can read the leaked emails from the Musk lawsuit.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
Grand delusion, perhaps.
freejazz•Apr 27, 2026
> Ilya Sutskever genuinely believed it
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
someguyiguess•Apr 27, 2026
Any sufficiently complex LLM is indistinguishable from AGI
JumpCrisscross•Apr 27, 2026
> Any sufficiently complex LLM is indistinguishable from AGI
Isn't this a tautology? We've de facto defined AGI as a "sufficiently complex LLM."
Schlagbohrer•Apr 27, 2026
Yes! Same logic as the financials, in which the companies pass back and forth the same $200 Billion promissory note.
ohyoutravel•Apr 27, 2026
No, it’s just an example of something that’s indistinguishable from AGI. Of all the things that are or are indistinguishable from AGI, a sufficiently complex LLM is one. A sufficiently complex decision tree is probably another. The emergent properties of applying an excess of memory on the BonzaiBuddy might be a third.
izzydata•Apr 27, 2026
If we take that statement as fact, then I don't believe we are even close to an LLM being sufficiently complex.
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI, and without starting from scratch down an alternate path it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate. No amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
Investors are typically people with surplus money to invest. Progress cannot be made without trial and error. So fleecing of investors for the greater good of humanity is something I shall allow.
ambicapter•Apr 27, 2026
A "surplus of money"? So people saving for retirement have a "surplus of money"? Basically if any money is standing still, it's a legitimate tactic to just...take it, in your mind.
Other people just call it "theft".
stavros•Apr 27, 2026
At this point, AGI is either here, or perpetually two years away, depending on your definition.
greybeard69•Apr 27, 2026
Full Self-Driving 2.0
xienze•Apr 27, 2026
It's always been this way. I remember, speaking of Microsoft, when they came to my school around 2002 or so giving a talk on AI. They very confidently stated that AGI had already been "solved", we know exactly how to do it, only problem is the hardware. But they estimated that would come in about ten years...
jakeydus•Apr 27, 2026
I knew flappy bird was a bigger deal than it got credit for. Didn’t realize it was agi until just now.
lucaslazarus•Apr 27, 2026
It’s pretty much a religious eschatology at this point
rtkwe•Apr 27, 2026
It feels like they have to say/believe it because it's kind of the only thing that can justify the costs being poured into it and the cost it will need to charge eventually (barring major optimizations) to actually make money on users.
renticulous•Apr 27, 2026
Progress is generally salami slicing, just like escalation in geopolitics. Not a step function.
Russian Invasion - Salami Tactics | Yes Prime Minister
AGI is right around the corner, and we're all going to be rich, there's going to be abundance for everyone, universal high income, everyone will live in a penthouse...
...just please stop burning our warehouses and blocking our datacenters.
nikeyshon•Apr 27, 2026
Where do I sign up?
otabdeveloper4•Apr 27, 2026
> AGI
We already have several billion useless NGI's walking around just trying to keep themselves alive.
Are we sure adding more GI's is gonna help?
RobRivera•Apr 27, 2026
Make mine p p p p p p vicodin
CWwdcdk7h•Apr 27, 2026
It sounds really similar to Uber's pitch about how they were going to have a monopoly as soon as they replaced those pesky drivers with their own fleet of self-driving cars. That was supposed to be their competitive edge against other taxi apps. In the end they sold ATG at the end of 2020 :D
ambicapter•Apr 27, 2026
ATH?
murkt•Apr 27, 2026
Autonomous Thriving Hroup?
PurpleRamen•Apr 27, 2026
They redefined AGI to be an economical thing, so they can continue making up their stories. All that talk is really just business, no real science in the room there.
JumpCrisscross•Apr 27, 2026
> They redefined AGI to be an economical thing
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
binary0010•Apr 27, 2026
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity
I'm so confused why I was down voted for answering the question that was asked?
a2128•Apr 27, 2026
Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
JumpCrisscross•Apr 27, 2026
> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit
Wow. Maybe they spelled it out as aggregate gross income :P.
bena•Apr 27, 2026
So no human on Earth is intelligent by that metric.
gowld•Apr 27, 2026
Companies that have created "AGI":
Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth , Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...
wrs•Apr 27, 2026
It’s a system that generates $100 billion in profit. [0]
> some scientifically qualifiable thing that is certain to happen any time now.
If you had presented GPT 5.5 to me 2 years ago, I would have called it AGI.
BoredPositron•Apr 27, 2026
GPT 4 was 3 years ago... it's iterative enhancement.
wongarsu•Apr 27, 2026
It performs at a usable level across a wide range of tasks. I'm not sure about two years ago, but ten years ago we would have called it an AGI. As opposed to "regular AI" where you have to assemble a training set for your specific problem, then train an AI on it before you can get your answers.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and decide that that can't possibly be AGI, our definition of AGI must have been wrong
Der_Einzige•Apr 27, 2026
Just don't move the goal posts. AGI was already here the day ChatGPT came out:
... until you actually, like, use it and find out all the limitations it has.
vntok•Apr 27, 2026
How is this relevant? Human General Intelligence has a lot of limitations as well and we have managed to do lots.
ifdefdebug•Apr 27, 2026
This is like saying that talking about my financial limitations is irrelevant because Jeff Bezos also has financial limitations...
nromiun•Apr 27, 2026
If you present ELIZA to people some will think it is AGI today.
There is a reason so many scams happen with technology. It is too easy to fool people.
staticman2•Apr 27, 2026
If you didn't call GPT 3.5 AGI I do not believe you when you claim you would have called 5.5 AGI.
freejazz•Apr 27, 2026
And I've been told my job (litigation attorney) is about to be replaced for over 3 years now; it has yet to come close.
BloondAndDoom•Apr 27, 2026
People always overestimate the impact of technology because they don't understand the human aspect of many businesses. Will it eventually be replaced, or will the shape of this kind of work be completely different in the future? That's an easy yes. When is that future? That's a big unknown; in my experience this kind of stuff takes at least a decade (and possibly more in this case) to make a big impact like replacing all of X.
romaniv•Apr 27, 2026
Some people thought SHRDLU was basically AGI after seeing its demo in 1970. The hype around such systems was so strong that Hubert Dreyfus felt the need to write an entire book arguing against this viewpoint (1972 What Computers Can't Do). All this demonstrates is that we need to be careful with various claims about computer intelligence.
AntiUSAbah•Apr 27, 2026
Sure, but it was probably stuck at doing that one thing.
Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts, and so on.
I mean, let's talk about what this 'hype' was if we see a clear ceiling appear and we're 'stuck' on progress; until then, I'll reserve my judgment for judgment day.
BloondAndDoom•Apr 27, 2026
I agree with this, but they don't. And that's the thing: AGI as they refer to it is much, much, much more than what we have, and I don't know if they are ever going to get there. I'm not sure what's even there at this point, or what will justify their investments.
AndrewKemendo•Apr 27, 2026
> some scientifically qualifiable thing that is certain to happen any time now
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell's teapot?
Is there any question at this point that humans will eventually be able to fully automate any desired action?
no_wizard•Apr 27, 2026
This is all happening as I predicted. OpenAI is oversold, and their aggressive PR campaign has set them up with unrealistic expectations. I raised an eyebrow at the Microsoft deal to begin with; it seemed overvalued even if all they were trading was mostly Azure compute.
AntiUSAbah•Apr 27, 2026
We are throwing unheard-of amounts of money and unprecedented compute at AI. Progress is huge and fast, and we've barely started.
If this progress, focus, and these resources don't lead to AGI, despite us already seeing a system that was unimaginable 6 years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.
mort96•Apr 27, 2026
If I'm reading you right, your opinion is essentially: "If building bigger and bigger statistical next word predictors won't lead to artificial general intelligence, we will never see artificial general intelligence"
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
bmitc•Apr 27, 2026
Same thing happened with self-driving cars. Oh and cryptocurrencies.
benterix•Apr 27, 2026
Not sure if you're being sincere or sarcastic but some of us have lived through several AI winters now. And the fact that such a phenomenon exists is because of this terrible amount of hype the topic gets whenever any progress is made.
freejazz•Apr 27, 2026
Impossible to take any of this seriously when it constantly refers to AGI.
Schlagbohrer•Apr 27, 2026
Especially when the OpenAI definition of AGI is only in financial terms (when it becomes profitable), which can be easily manipulated.
JumpCrisscross•Apr 27, 2026
It's unclear which elements of this new deal are binding versus promises with OpenAI characteristics. "Microsoft Corp. will publish fiscal year 2026 third-quarter financial results after the close of the market on Wednesday, April 29, 2026" [1]; I'd wait for that before jumping to conclusions.
"We want to sell surveillance services to the US gov. MSFT was hesitant so we gave ourselves room to do it without them."
Schlagbohrer•Apr 27, 2026
Extremely hard to believe that MSFT would have any hesitancy about working with the US government.
Schlagbohrer•Apr 27, 2026
The AGI talk is shocking but not surprising to anyone looking at how bombastic Sam Altman's public statements are.
The circular economy section really is shocking: OpenAI committing to buy $250 billion of Azure services, while MSFT's stake in OpenAI is valued at $132 billion. Same circular nonsense as NVIDIA and OpenAI passing the same hundred billion back and forth.
ModernMech•Apr 27, 2026
Dennis: I think we made every single one of our Paddy's Dollars back, buddy.
Mac: You're damn right. Thus creating the self-sustaining economy we've been looking for.
Dennis: That's right.
Mac: How much fresh cash did we make?
Dennis: Fresh cash! Uh, well, zero. Zero if you're talking about U.S. currency. People didn't really seem interested in spending any of that.
Mac: That's okay. So, uh, when they run out of the booze, they'll come back in and they'll have to buy more Paddy's Dollars. Keepin' it moving.
Dennis: Right. That is assuming, of course, that they will come back here and drink.
Mac: They will! They will because we'll re-distribute these to the Shanties. Thus ensuring them coming back in, keeping the money moving.
Dennis: Well, no, but if we just re-distribute these, people will continue to drink for free.
Mac: Okay...
Dennis: How does this work, Mac?
Mac: The money keeps moving in a circle.
Dennis: But we don't have any money. All we have is this. ... How does this work, dude!?
Mac: I don't know. I thought you knew.
ksimukka•Apr 27, 2026
Great scene
sourraspberry•Apr 27, 2026
The disparity in coverage on this new deal is fascinating. It feels like the narrative a particular outlet is going with depends entirely on which side leaked to them first.
scottyah•Apr 27, 2026
Just some of the games sama is playing.
concinds•Apr 27, 2026
Am I crazy, or was this press release fully rewritten in the past 10 minutes? The current version is around half the length of the old one, which did not frame it as a "simplification" "grounded in flexibility" but as a deeper partnership. It also had word salad about AGI, and said Azure retained exclusivity for API products but not other products, which the new statement seems to contradict.
What was I looking at?
einsteinx2•Apr 27, 2026
I noticed the exact same thing. I read the original, went back to read it again and it’s completely changed.
3form•Apr 27, 2026
I think a stickied comment about this would be due. No idea if it's possible to call in @dang via at-name?
antonkochubey•Apr 27, 2026
They forgot the "hey ChatGPT, rewrite this to have better impact on the company stock" before submitting it
martinald•Apr 27, 2026
Really interesting. Why would Microsoft have done this deal? I'm a bit lost. Sure they get to not pay a revenue share _to_ OpenAI but surely that's limited to just OpenAI products which is probably a rounding error? Losing exclusivity seems like a big issue for them?
chasd00•Apr 27, 2026
This gives OpenAI the ability to go to AWS instead of being exclusively on Azure. I guess Azure really is hanging on by a thread.
I was under the impression that as long as GitHub doesn't support IPv6, it's a sign they still haven't finished their migration to Azure. Azure supports IPv6 just fine.
torginus•Apr 27, 2026
What? I thought Azure would always have the Sharepoint/Office/Active Directory cash cow.
isk517•Apr 27, 2026
Their engineers have been working tirelessly to make Sharepoint/Office/Active Directory as terrible as they possibly can be while still technically functional, all while continuing to raise prices. I've seen many small businesses start to choose Google Workspace instead; the cracks have formed and are large enough that Microsoft is no longer in a position where every business just goes with Office because that's what everyone uses.
1f60c•Apr 27, 2026
Wait, I thought OpenAI had to pay Microsoft until AGI was achieved or something? Am I misremembering? Is that a different thing?
ksherlock•Apr 27, 2026
Per the WSJ, they previously both had revenue-sharing agreements. MSFT will no longer send any revenue to OpenAI; OpenAI will still send revenue to MSFT until 2030 (with new caps).
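The exact rates and caps haven't been published, so the numbers below are entirely hypothetical (the 20% figure floated earlier in the thread, a made-up cap), purely to show the shape such a clause would take:

```python
def capped_share(revenue_by_year, rate=0.20, cap=1_000.0):
    """Pay `rate` of each year's revenue until cumulative payments
    reach `cap`; after that, nothing more is owed. Rate and cap are
    illustrative guesses, not the real deal terms."""
    paid = 0.0
    payments = []
    for rev in revenue_by_year:
        due = min(rev * rate, cap - paid)  # never exceed what's left under the cap
        paid += due
        payments.append(due)
    return payments

# e.g. revenue of 2000/3000/4000 against a cap of 1000:
# year 1 pays 400, year 2 pays the remaining 600, year 3 pays 0
print(capped_share([2000, 3000, 4000]))
```

The point of a cap from OpenAI's side is the year-3 behavior: once it's hit, every marginal dollar of revenue is theirs entirely.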
staminade•Apr 27, 2026
My understanding was that that was in relation to IP licensing. Microsoft got access to anything OpenAI built unless they declared they had developed AGI. This new article apparently unlinks revenue sharing from technology progress, but it's unclear to me whether it changes the IP situation if OpenAI (claim to) have achieved AGI.
Nadella had OpenAI by the short and curlies early on. But all I've seen from him in the last couple of years is continuously acquiescing to OpenAI's demands. I wonder why he's so weak and doesn't exert more control over the situation? At one point Microsoft owned 49% of OpenAI but now it's down to 27%?
PunchyHamster•Apr 27, 2026
Why would they acquire more when the company is still not making a profit? To be left holding a bigger bag?
dijit•Apr 27, 2026
Everything is personal preference, and perhaps I am more fiscally conservative because I grew up in poverty.
But if I own 49% of a company and that company has more hype than product, hasn't found its market yet but is valued at trillions?
I'm going to sell percentages of that to build my war chest for things that actually hit my bottom line.
The "moonshot" has for all intents and purposes been achieved based on the valuation, and at that valuation: OpenAI has to completely crush all competition... basically just to meet its current valuations.
It would be a really fiscally irresponsible move not to hedge your bets.
Not that it matters but we did something similar with the donated bitcoin on my project. When bitcoin hit a "new record high" we sold half. Then held the remainder until it hit a "new record high" again.
Sure, we could have 'maxxed profit!'; but ultimately it did its job, it was an effective donation/investment that had reasonably maximal returns.
(that said, I do not believe in crypto as an investment opportunity, it's merely the hand I was dealt by it being donated).
freediddy•Apr 27, 2026
Microsoft didn't sell anything. OpenAI created more shares and sold those to investors, so Microsoft's stake is getting diluted.
And Microsoft only paid $10B for that stake in the most recognizable AI name brand in the world. They don't need to "hedge their bets"; it's already a humongous win.
Why let Altman continue to call the shots and decrease Microsoft's ownership stake and ability to dictate how OpenAI helps Microsoft and not the other way around?
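The 49% → 27% drop is pure dilution arithmetic: Microsoft's share count stays fixed while the total grows. With made-up share counts chosen only to illustrate:

```python
# Hypothetical share counts, purely to show how dilution works.
msft_shares = 49      # Microsoft holds 49 of 100 shares (49%)
total_shares = 100
newly_issued = 81     # new shares sold to other investors

new_total = total_shares + newly_issued
stake = msft_shares / new_total
print(f"{stake:.0%}")  # 27% -- nothing was sold, yet the stake shrank
```

Microsoft's absolute holding is unchanged; only the denominator moved.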
tonyedgecombe•Apr 27, 2026
About the same as they wasted on Nokia.
zozbot234•Apr 27, 2026
> They don't need to "hedge their bets" it's already a humongous win.
That's a flawed argument. Why wouldn't you want to hedge a risky bet, and one that's even quite highly correlated to Microsoft's own industry sector?
theplatman•Apr 27, 2026
do we know whether Microsoft could have been selling secondary shares as part of various funding rounds?
my impression is that many of these "investments" are structured IOUs for circular deals based on compute resources in exchange for LLM usage
solumunus•Apr 27, 2026
They haven't sold anything; they've been diluted.
saaaaaam•Apr 27, 2026
I don’t understand the “record high” point. How did you decide when a “record high” had been reached in a volatile market? Because at $1 the record high might be $2 until it reaches $3 a week or month later. How did you determine where to slice on “record highs”?
Genuine question because I feel like I’m maybe missing something!
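Not GP, but the usual mechanical answer: a "record high" is any price strictly above the running all-time high seen so far, and the rule fires the moment that happens. A toy sketch (illustrative only, not GP's actual rule; a real version would likely require exceeding the prior high by some margin so it doesn't fire on every tick of a fresh run-up):

```python
def sell_half_at_new_high(prices, holdings=1.0):
    """Each time the price sets a new all-time high relative to
    everything seen so far, sell half of current holdings."""
    high = float("-inf")
    cash = 0.0
    for price in prices:
        if price > high:            # a new record high, by definition
            high = price
            cash += holdings / 2 * price
            holdings /= 2
    return cash, holdings

# highs at 1, 2, and 4 trigger three half-sales
print(sell_half_at_new_high([1, 0.5, 2, 1.5, 4]))  # (1.5, 0.125)
```

So "at $1 the record high might be $2 a month later" isn't a problem: you don't try to predict the peak, you just react whenever the prior maximum is exceeded, and accept that you'll be selling into every leg of a long rally.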
jhk482001•Apr 27, 2026
So AWS can finally use OpenAI models, and not only the OSS versions.
eranation•Apr 27, 2026
So, silly question, does this mean I will be able to get OpenAI models via Bedrock soon?
cdrnsf•Apr 27, 2026
OpenAI's logo is actually a depiction of their financial connections.
I think this is good for OpenAI. They're no longer stuck with just Microsoft. It was an advantage that Anthropic could work with anyone they liked while OpenAI couldn't.
https://blogs.microsoft.com/blog/2025/11/18/microsoft-nvidia...
https://azure.microsoft.com/en-us/blog/deepseek-r1-is-now-av...
https://ai.azure.com/
AFAICT they are just hedging their bets left and right still. Also feels like they are winning in the sense that despite pretty much all those products being roughly equivalent... they are still running on their cloud, Azure. So even though they seem unable to capture IP anymore, they are still managing to get paid for managing the infrastructure.
I imagine the thinking was that it’s better to just post it clearly than to have rumors and leaks and speculations that could hurt both companies (“should I risk using GCP for OpenAI models when it’s obviously against the MS / OpenAI agreement?”).
Might really increase the utility of those GCP credits.
I feel this looks like a nice thing to have, given they remain the primary cloud provider. If Azure improves its overall quality, I don't see why this wouldn't end up as a money printing press, as long as OpenAI keeps bringing good models.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
And on top of that, OpenAI still has to pay Microsoft a share of their revenue made on AWS/Google/anywhere until 2030?
And Microsoft owns 27% of OpenAI, period?
That's a damn good deal for Microsoft. Likely the investment that will keep Microsoft's stock relevant for years.
I doubt it
How?
They did not need to go so hard on the hype - Anthropic hasn’t in relative terms and is generating pretty comparable revenues at present.
OpenAI bet on consumers; Anthropic on enterprise. That will necessitate a louder marketing strategy for the former.
Why is it Altman is facing kill shots and Dario isn’t?
Altman peaked in the zeitgeist in 2023; Dario, much less prominently, in 2024 and now '26 [1]. I'd guess around this time next year, Dario will be as hated as Altman is today.
[1] https://trends.google.com/explore?q=altman%2C%20Dario&date=t...
Yes. Microsoft was "considering legal action against its partner OpenAI and Amazon over a $50 billion deal that could violate its exclusive cloud agreement with the ChatGPT maker" [1].
[1] https://www.reuters.com/technology/microsoft-weighs-legal-ac...
The Microsoft and OpenAI situation just got messy.
We had to rewrite the contract because the old one wasn't working for anyone. Basically, we’re trying to make it look like we’re still friends while we both start seeing other people. Here is what’s actually happening:
1. Microsoft is still the main guy, but if they can't keep up with the tech, OpenAI is moving out. OpenAI can now sell their stuff on any cloud provider they want.
2. Microsoft keeps the keys to the tech until 2032, but they don't have the exclusive rights anymore.
3. Microsoft is done giving OpenAI a cut of their sales.
4. OpenAI still has to pay Microsoft back until 2030, but we put a ceiling on it so they don't go totally broke.
5. Microsoft is still just a big shareholder hoping the stock goes up.
We’re calling this "simplifying," but really we’re just trying to build massive power plants and chips without killing each other yet. We’re still stuck together for now.
"The Microsoft and OpenAI situation just got messy" is objectively wrong; it has been messy for months [1]. Nos. 1 through 3 are fine, though "if they can't keep up with the tech, OpenAI is moving out" parrots OpenAI's party line. No. 4 doesn't make sense: it starts out with "we" referring to OpenAI in the first person but ends by referring to them in the third person, "they." No. 5 is reductive when phrased with "just."
It would seem the translator took corporate PR speak and translated it into something between the LinkedIn and short-form blogger dialects.
[1] https://www.wsj.com/tech/ai/openai-and-microsoft-tensions-ar...
I don't expect the translation to take OpenAI's statements and make them truthful or to investigate their veracity, but I genuinely could not understand OpenAI's press release as they have worded it. The translation at least makes it easier to understand what OpenAI's view of the situation is.
"We" in this sentence refers to both parties; "they" refers to OpenAI. Not a grammatical error.
Fair enough.
> "they" refers to OpenAI. Not a grammatical error
I'd say it is. It's a press release from OpenAI. The rest of the release uses the third-person "they" to refer to Microsoft. The LLM traded accuracy for a bad joke, which is something I associate with LinkedIn speak.
The fundamental problem might be that the OpenAI press release is vague. (And changing. It's changed at least once since I first commented.)
That's kagi? Cool, I'll check it out more!
This seems impossible.
Azure is effectively OpenAI's personal compute cluster at this scale.
That article doesn't give a timeframe, but most of these use 10 years as a placeholder. I would also imagine it's not a requirement for them to spend it evenly over the 10 years, so could be back-loaded.
OpenAI is a large customer, but this is not making Azure their personal cluster.
They can. If one consolidated the AI industry into a single monopoly, it would probably be profitable. That doesn't mean in its current state it can't succumb to ruinous competition. But the AGI talk seems to be aimed more at retail investors and philosopher podcasters than at institutional capital.
"With viable economics" is the point.
My "ludicrous statement" is a back-of-the-envelope test for whether an industry is nonsense. For comparison, consolidating all of the Pets.com competitors in the late 1990s would not have yielded a profitable company.
Do you argue in good faith?
There’s a difference between being too early vs being nonsense.
Not in the 1990s. The American e-commerce industry was structurally unprofitable prior to the dot-com crash, an event Amazon (and eBay) responded to by fundamentally changing their businesses. Amazon bet on fulfillment. eBay bet on payments. Both represented a vertical integration that illustrates the point: the original model didn't work.
> There’s a difference between being too early vs being nonsense
When answering the question "do the investments make sense," not really. You're losing your money either way.
The American AI industry appears to have "viable economics for profit" without AGI. That doesn't guarantee anyone will earn them. But it's not a meaningless conclusion. (Though I'd personally frame it as a hypothesis I'm leaning towards.)
OP did not include this requirement in their post because doing so would make the claim trivially true.
At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but if his thinking trickled down into the company's culture, which given his role and how long he was there I would say seems likely, I don't think it's all hype.
Grand delusion, perhaps.
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
Isn't this a tautology? We've de facto defined AGI as a "sufficiently complex LLM."
However, I don't think it is even true. LLMs may not even be on the right track to achieving AGI, and without starting from scratch down an alternate path, it may never happen.
LLMs to me seem like a complicated database lookup. Storage and retrieval of information is just a single piece of intelligence. There must be more to intelligence than a statistical model of the probable next piece of data. Where is the self-learning without intervention by a human? Where is the output that wasn't asked for?
At any rate, no amount of hype is going to get me to believe AGI is going to happen soon. I'll believe it when I see it.
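For what it's worth, the "statistical model of the probable next piece of data" caricature can literally be built in a few lines as a bigram model; real LLMs learn vastly richer representations, but the sampling loop at the bottom is the same shape (toy corpus, illustrative only):

```python
import random
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus...
corpus = "the cat sat on the mat and the cat slept".split()
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

# ...then "predict" by sampling the next word from those counts.
def next_word(word):
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts)[0]

print(next_word("the"))  # "cat" (seen twice) or "mat" (seen once)
```

The self-learning and unprompted-output objections are about everything this sketch lacks: the counts never update from the model's own outputs, and nothing happens until you call it.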
Other people just call it "theft".
Russian Invasion - Salami Tactics | Yes Prime Minister
https://www.youtube.com/watch?v=yg-UqIIvang
...just please stop burning our warehouses and blocking our datacenters.
We already have several billion useless NGI's walking around just trying to keep themselves alive.
Are we sure adding more GI's is gonna help?
Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.
From: https://openai.com/charter/
Wow. Maybe they spelled it out as aggregate gross income :P.
Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...
[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
If you had presented GPT 5.5 to me two years ago, I would have called it AGI.
Now our idea of what qualifies as AGI has shifted substantially. We keep looking at what we have and deciding that it can't possibly be AGI, so our definition of AGI must have been wrong.
https://www.noemamag.com/artificial-general-intelligence-is-...
There is a reason so many scams happen with technology. It is too easy to fool people.
Neural networks are solving huge issues left and right. Google's NN-based weather model is so good you can run it on consumer hardware. AlphaFold solved protein folding. LLMs can talk to you in a hundred languages and grasp tasks, concepts, and so on.
I mean, let's talk about what this "hype" was if a clear ceiling appears and we get "stuck" on progress, but until then I'll save my judgment for judgment day.
Your position is a tautology, given there is no (and likely never will be) collectively agreed-upon definition of AGI. If that is true, then nobody will ever achieve anything like AGI, because it's as made-up a concept as unicorns and fairies.
Is your position that AGI is in the same ontological category as unicorns and Thor and Russell’s teapot?
Is there any question at this point that humans will be able to fully automate any desired action in the future?
If this progress, focus, and these resources don't lead to AGI, despite us already seeing a system that was unimaginable six years ago, we will never see AGI.
And if you look at Boston Dynamics, Unitree, and Generalist's progress on robotics, that's also CRAZY.
I don't know, maybe AGI is possible but there's more to intelligence than statistical next word prediction?
[1] https://news.microsoft.com/source/2026/04/08/microsoft-annou...
https://news.ycombinator.com/item?id=47616242
[1] https://github.com/orgs/community/discussions/10539
They still run their own platform.
https://thenewstack.io/github-will-prioritize-migrating-to-a...
https://blogs.microsoft.com/blog/2026/04/27/the-next-phase-o...