I'm all for naming things in honor of Rosalind Franklin, but this seems like incredibly misplaced hubris instead.
peyton•Apr 16, 2026
> GPT‑Rosalind is now available … for qualified customers …
It’s kind of gross to make money off her name (if that’s what’s happening) posthumously. It’s a complicated story anyway. IIRC her sister referred to it as “the Cult of Rosalind” when people were cashing in on books about her.
bombcar•Apr 16, 2026
I'd rather AI companies make up names, or name their products things like "Clod", than use my name (if they were to ask) - because no matter how good it looks today, eventually it'll become some form of laughingstock.
Sanzig•Apr 17, 2026
Claude is most likely a nod to Claude Shannon, father of information theory and an early AI pioneer.
bombcar•Apr 17, 2026
The real hubris will be to name a model Turing, or Alan if you're a bit more discreet.
Cynddl•Apr 16, 2026
Is it just me, or do they very carefully avoid reporting performance on GPT-5.4 Pro, reporting only the default GPT-5.4? They also very carefully left Anthropic models out of their comparison.
I went back to the BixBench benchmark which they mentioned. I couldn't find official results for Anthropic models, but I found a project taking Opus 4.6 from 65.3% to 92.0% (which would be above GPT-Rosalind) with nearly 200 carefully crafted skills [1]. There also appear to be competing models with scores on par with this tuned GPT.
BixBench seems like a really interesting/useful idea, but most of the value for a layperson (like me) is comparing the results of different models on the benchmark. From what I can find, there is no centralised, up-to-date set of model results. Shame.
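For context on how large that jump is, a back-of-the-envelope sketch (the 65.3% and 92.0% figures are the ones quoted above; the error-rate framing and the function name are my own, not from the benchmark):

```python
# Back-of-the-envelope: compare BixBench accuracies by the fraction of
# remaining errors eliminated, rather than raw percentage points.
def error_rate_reduction(base: float, improved: float) -> float:
    """Fraction of errors eliminated when accuracy goes base% -> improved%."""
    return (improved - base) / (100.0 - base)

# Figures from the comment above (Opus 4.6 before/after the skills project).
opus_base, opus_tuned = 65.3, 92.0
print(f"skills eliminate {error_rate_reduction(opus_base, opus_tuned):.0%} of remaining errors")
```

On that framing, going from 65.3% to 92.0% removes roughly three quarters of the model's remaining errors, which is why such a jump would be hard for an untuned model to match.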
modeless•Apr 16, 2026
The voiceover in the promo video on this page seems to be AI generated, with some weird artifacts. Right at the beginning it sounds like it says "cormbiying structure daya retrieval and lirrachure search".
an0malous•Apr 17, 2026
“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” — Sam Altman, August 2025
For me too, it was around that time last year, with GPT-5, Claude Sonnet 4.5 and then Gemini 3 that I started feeling that these models are clearly becoming great at reasoning. I'm not at all opposed to saying that they are around PhD-level on at least some domains.
kmaitreys•Apr 17, 2026
I think there's a big difference between sounding like someone and being someone. The models are indeed excellent at pretending.
0123456789ABCDE•Apr 17, 2026
Exactly. This is what the whole RL thing is optimizing for, even if that's not the intent.
jostmey•Apr 17, 2026
The real issue isn’t finding therapies but getting them tested in clinical trials
XenophileJKO•Apr 17, 2026
I would argue that as long as you still have failed trials, there is room to improve trial vetting.
tonfreed•Apr 17, 2026
Who's at fault when it suggests feeding someone cyanide?
falcor84•Apr 17, 2026
> We want to make these capabilities available to the scientists and research organizations best positioned to advance human health, while maintaining strong safeguards against biological misuse. The Life Sciences model is launching through a trusted-access deployment structure for qualified Enterprise customers in the U.S. to start, with controls around eligibility, access management, and organizational governance.
I'm absolutely ok with a legitimate lab scientist conducting biochemical research getting suggestions about substances that are generally considered dangerous but might be appropriate for their study, and it'll be up to the scientist to discern whether it is indeed appropriate to use.
huslage•Apr 17, 2026
I work for a life sciences company. It will be a long time before anyone trusts a generative model to do the actual science when mathematically provable models are as good as they are today. There is room for AI in the field, but it's not in the science directly.
oofbey•Apr 17, 2026
What would be a good use of AI? Writing code to do the modeling?
shwn2989•Apr 17, 2026
I prefer GPT-5 Pro, which I found to be expert at coding and reasoning.
[1] https://github.com/jaechang-hits/SciAgent-Skills