One thing for sure is that while Claude is currently taking the #1 spot in mentions, it carries a lot of negative sentiment due to API pricing policies and frequent server downtime. On the other hand, the runner-up, GPT-5.5, actually seems to have more positive feedback.
Personally, my experience with Codex wasn't as good as with Claude Code (Codex freezes on Windows more often than you'd expect), so this is a bit surprising.
That said, GPT, which codes more defensively, is definitely better in terms of sheer code-writing capability. However, GPT actually has quite a few issues with text corruption when generating in Korean or Chinese—something English-speaking users probably don't notice.
In terms of model capabilities, when given the same agent.md (CLAUDE.md) file, I think GPT is better at writing code, while Claude is better at writing text during code reviews.
Looking at the bottom right, Qwen and DeepSeek are open-source, so they are largely mentioned in the context of guarding against vendor lock-in, which drives positive sentiment. Considering that Hacker News occasionally shows negative sentiment toward China, the fact that they are viewed this positively—unlike US models—shows that being open-source is a massive advantage in itself.
Anyway, one thing for sure is that Gemini is pretty much unusable.
awesome_dude•May 3, 2026
> Anyway, one thing for sure is that Gemini is pretty much unusable
Ha! I find that Gemini is quite useful - if only because I am forced to use it (on my personal projects) because it's the only one that has unlimited interaction for "free"
It has its limitations, yes, but so does Claude (which I am leaning on too heavily at work at the moment)
2ndorderthought•May 3, 2026
I like your analysis but I think the open models are genuinely well received not only because of vendor lock in or being open source.
They are cheaper! All signals point to them staying cheaper because they are built more sustainably. Also, some of the latest entries can run on 1 GPU! Literally available at your desktop where there can be no service interruptions. Not even network latency. People are one and few shotting little games for 0 dollars because they bought a GPU to play video games this year. To me that's an unbeatable value. Once the tooling catches up and a few more model releases, it could change everything completely.
dgacmu•May 3, 2026
I had a surprisingly positive experience with Gemini optimizing some mathy MPS code. It did far better than claude.
Of course, when I tried it on something else it rewrote every line in the file for no good reason, applied changes directly when I told it just to plan, etc.
So maybe it has one strength.
chewz•May 3, 2026
Gemini is actually really good for code review, critique, and other tasks. It just cannot be allowed to write code itself.
sgc•May 3, 2026
I think it's decidedly preliminary to compare models using the same .md file, since they respond quite differently to the same input. I try to narrow to the top 2-3 and then refine inputs for each one. For me it's unfortunately not much better than an intuitive process of trial and error.
Gemini is not at all unusable. It is quite usable for the tasks it excels at - to the point that it is the top pick for many tasks and I spend more money there than elsewhere. On the other hand, it responds quite differently from the other major models - claude and gpt are similar to each other, while gemini requires a different approach. In my opinion, people who think gemini is worthless have not learned how to prompt it correctly. Again, it's intuitive, learned by watching concrete response differences from small input changes, but if I had to summarize, it shows its google books / google scholar roots.
I have started experimenting with qwen more than deepseek, but I have not had good results yet. Given the good press I presume I will learn how to interact with it for better results.
Curious if others have similar experiences in comparing models usefully, or if most don't bother with this, or do something else? I mainly use models for highly focused specialty tasks, so this fine tuning makes the difference between usable and unusable. I don't yet have the luxury of defining my preferred workflow and finding the tool for the task. Everything just breaks almost immediately if I try to shoehorn into my preferred flow.
uxcolumbo•May 3, 2026
What are your prompting and general tips for using Gemini effectively?
And what use cases do you think it’s best suited for?
petesergeant•May 3, 2026
Yeah, I think we are pretty past an idea of "better" and are at the point where it needs qualification as "better at". "Claude writes, Codex reviews, and Gemini doesn't get installed" is my go-to, although I go to Gemini whenever I want an advanced graphical calculator, or data extraction of any type.
devmor•May 3, 2026
Mostly my experience, but “Gemini crunches data” would be my replacement there.
If I have a task that requires parsing through swathes of irregular data that traditional ml would choke on (or require an intermediate training step ala bigquery), I have gotten much better results from Gemini than the other two.
dentemple•May 3, 2026
"Gemini researches" has been my go-to for awhile (although GPT seems to have gotten better recently in this category?).
Essentially, I use it when I truly only need an "Advanced Google" to find lots of document or website references based on only some partial understanding of "X". I don't like having it do anything with those things. Only when I need to find those things.
Claude, especially, seems to absolutely hate doing research when there are major ambiguities in your question. It's the only one of the major models that keeps playing 20 questions with me when I neither know nor care what the answers to those questions are.
pryanshu89•May 3, 2026
I know it's subjective, but I tried different models with my OpenRouter subscription and the VSCode Roocode plugin. I evaluated them based on cost and code quality. I liked gemini-3-flash-preview.
It's really a cost-effective model.
Jabbles•May 2, 2026
Please fix your graph so the names of the models are readable
yunusabd•May 2, 2026
Sorry about that, the embedded graph from Sheets doesn't let me do that. I think I'll have to fetch the data and render the graph myself.
In the meantime, you can hover or tap the columns to see the full model names.
marcuskaz•May 2, 2026
Also, the stacked graph only allows you to quickly see total mentions, really hard to compare negative or positive sentiment across models at a glance.
yunusabd•May 2, 2026
Yep, a toggle to scale all columns to the same height could solve this. I'll look into it when I do the custom graph.
Edit: Done
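For the curious, that "same height" toggle boils down to converting raw sentiment counts into per-model percentages. A minimal sketch, assuming a hypothetical counts structure (not the site's actual data format):

    # Convert raw sentiment counts per model into percentages so every
    # column sums to 100 and sentiment shares become comparable.
    counts = {  # hypothetical example data, not the site's real numbers
        "Claude Opus": {"positive": 40, "neutral": 25, "negative": 35},
        "GPT-5.5": {"positive": 30, "neutral": 15, "negative": 10},
    }

    def normalize(counts):
        scaled = {}
        for model, c in counts.items():
            total = sum(c.values()) or 1  # guard against empty rows
            scaled[model] = {k: 100 * v / total for k, v in c.items()}
        return scaled

    print(normalize(counts))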
marcuskaz•May 3, 2026
Much better, nice update!
smeej•May 2, 2026
Came here to offer this feedback. If I can't see the name of the model, nothing else in the chart really matters to me. I even tried going to the Google Sheet.
It's way too important a piece of information not to have it visible.
yunusabd•May 3, 2026
Thanks, I replaced it with a custom graph, should be easier to read now.
yunusabd•May 3, 2026
Thanks for the comment, should be fixed now.
yakkomajuri•May 2, 2026
"Prompts an LLM" -> which LLM?
I saw you're using Gemini for the sentiment rating (which I guess you picked because it's not often mentioned and thus "neutral"? lol)
But would be interesting to get more details overall
yunusabd•May 2, 2026
It's actually ChatGPT at the moment for the first filtering step, for no other reason than having a code snippet ready that I could point Cursor at (I know, so 2025). The Gemini call is using batch processing, so it's handled differently.
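A minimal sketch of what that first filtering step could look like, assuming the OpenAI Python SDK; the prompt and model name are placeholders, not the author's actual code:

    # First-pass filter: ask an LLM whether a comment mentions an LLM
    # coding model at all, before running the costlier sentiment step.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def mentions_coding_model(comment_text: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Answer YES or NO: does this comment mention "
                            "an LLM coding model (Claude, GPT, Gemini, ...)?"},
                {"role": "user", "content": comment_text},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")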
ranger_danger•May 2, 2026
Just FYI, this article seems to define "state of the art" as "popular", as measured by "total mentions and user sentiment", without any bearing on the technical abilities or actual usage of the model.
mellosouls•May 2, 2026
That's pretty much exactly what the title says.
The technical abilities and usage are derived from the commenters' usage reflections.
yunusabd•May 2, 2026
Calling it sota might be a bit provocative, but what actually is the "state of the art"? We have benchmarks, but those are getting increasingly gamed and don't necessarily reflect the actual performance of a model, see Opus 4.7. So I think it's useful to have real world data from actual users as an additional data point.
swyx•May 3, 2026
and assuming all mentions are coding-model mentions just because it's on HN
brooksc•May 2, 2026
It'd be interesting to also graph this over time to see how sentiment changes from when a model is released to today.
yunusabd•May 3, 2026
Yes! Going forward I'm definitely doing that, once there is enough data. Might even backfill the data more into the past. I just want to stabilize the methodology before burning more tokens.
And it's probably a good idea to create a list of model release dates, so older comments can't accidentally map to models that weren't released yet.
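That guard could be a simple lookup table; a sketch with made-up dates:

    # Drop any (comment, model) pairing where the comment predates the
    # model's release. The dates below are illustrative, not real.
    from datetime import date

    RELEASE_DATES = {
        "claude-opus-4.7": date(2026, 3, 1),
        "gpt-5.5": date(2026, 1, 15),
    }

    def valid_mention(model: str, comment_date: date) -> bool:
        released = RELEASE_DATES.get(model)
        return released is not None and comment_date >= released

    assert not valid_mention("gpt-5.5", date(2025, 12, 31))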
itsnewme•May 3, 2026
Sentiment probably shifts a lot between release day and a few weeks in, once people hit real edge cases. Would be interesting to see that curve per model.
pbgcp2026•May 2, 2026
So, it's a webpage with 3 paragraphs and a simple chart. It has:
1) terrible color scheme - fine, I switch to reader mode
2) shitloads of JS - fine, NoScript works, page breaks
3) fancy "design" with a simple graph but unreadable X-axis labels - fine, I can use screen zoom for that... only to see "Claude O..." three times. LOL, are we playing a guessing game?
4) ... "LxxxLxxx - Learn languages with YouTube!"
2ndorderthought•May 3, 2026
Interesting to see the positive sentiment around kimi2.6, qwen3.6, and deepseek relative to the negative. I hope the trend of people appreciating open models continues. They aren't household names yet, but it's a higher percentage than I thought it would be. Especially on HN, where we are all talking about businesses.
I am upset because now anthropic, openai, meta, etc will continue their smear campaigns here. But I am also happy because it will make HN less useful when they do.
Everything is a give and take I guess. Excited to see where the equilibrium sits
SilverElfin•May 3, 2026
Is it just “smear campaigns”? Don’t get me wrong - I don’t want big tech or big AI monopolies and appreciate the open weight models. But it’s also true that Chinese companies are basically stealing through distillation and also that they censor to align to CCP rules. They’re problematic in a different way.
What I want is more fully open models where everything is shared. Data, training algorithms, weights. That way we can figure out if we should trust it.
2ndorderthought•May 3, 2026
They are all stealing from each other, just like how they all stole from us. Grok supposedly admitted to distilling from OpenAI, for instance.
I think it's also unfair to say their success is solely due to stealing data. They are contributing a lot of advances to the literature about what they are doing. The proof is in the results: we have 27B models you can vibe-code with, not 1T+ ones.
It's murky sure. But there are smear campaigns about how people can't trust China too. There's some truth to that too but we can't trust the US either so local models are an interesting way for China to offer us some level of sovereignty.
gobdovan•May 3, 2026
Before harnesses, I'd fix the methodology/claims. A saner methodology would be to look for comments that compare two models, say 'gpt5.5>opus4.7', and infer context ('ctx:frontend', for example). Under your current methodology, 'opus 4.6 was very smart, opus 4.7 is a disappointing upgrade to 4.6' would make normal aspect-based sentiment analysis consider 4.6 smarter than 4.7. But considering you have <300 mentions total, you'd probably be better off scraping some other websites as well. I'd also take the SotA claim out completely and downgrade the mentions to measuring something like visibility rather than performance.
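That comparative approach could aggregate extracted pairs into simple win rates; a sketch, where the (winner, loser, context) tuples are made-up examples and the comment-to-tuple extraction itself would be an LLM call:

    # Rank models by win rate over pairwise comparisons pulled from comments.
    from collections import defaultdict

    comparisons = [  # hypothetical extractions: first model beat the second
        ("gpt-5.5", "opus-4.7", "ctx:frontend"),
        ("opus-4.7", "gpt-5.5", "ctx:code-review"),
        ("gpt-5.5", "opus-4.7", "ctx:frontend"),
    ]

    wins, games = defaultdict(int), defaultdict(int)
    for winner, loser, _ctx in comparisons:
        wins[winner] += 1
        games[winner] += 1
        games[loser] += 1

    ranking = sorted(games, key=lambda m: wins[m] / games[m], reverse=True)
    print(ranking)  # ['gpt-5.5', 'opus-4.7']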
yunusabd•May 3, 2026
That's fair, my immediate concern would be that there would be very few comments comparing any two models, so the data would be very anecdotal.
The context would be really nice to have, but reading the comments myself, it often just isn't very clear what exactly users are building or which programming language they are using.
I think analyzing more comments is promising. If you get enough data, you can generalize across use cases and get more meaningful ratings. The obvious lever is including more posts, although it might hit diminishing returns. I'll play around with it.
For the context, I want to try giving Gemini a "scratch pad", where it can note down strengths and weaknesses per model that it finds in the comments. Something like "some users say that model x is good for writing tests". Then on each run, I let it update the scratch pad and publish the results as more of a qualitative analysis.
For the wording, I'd like to keep a certain amount of click bait, sorry ;)
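The scratch pad described above could be as simple as a JSON file of per-model notes merged on every run; a minimal sketch, where the storage format and merge logic are assumptions:

    # Persist per-model observations between runs and merge in whatever
    # the LLM extracts from each new batch of comments.
    import json
    from pathlib import Path

    PAD = Path("scratchpad.json")

    def load_pad() -> dict:
        return json.loads(PAD.read_text()) if PAD.exists() else {}

    def update_pad(new_notes: dict) -> dict:
        pad = load_pad()
        for model, notes in new_notes.items():
            pad[model] = sorted(set(pad.get(model, [])) | set(notes))
        PAD.write_text(json.dumps(pad, indent=2))
        return pad

    # e.g. notes extracted on this run:
    update_pad({"model-x": ["good for writing tests"]})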
idivett•May 3, 2026
Thanks for doing the hard work. I've bookmarked this, hoping it'll come in handy when new models are released.
If you're taking feature requests, I've a few.
- Show combined measurements per model maker, like all Claude models vs. OpenAI, DeepSeek, and so on.
- Another toggle to remove the neutral section?
chillfox•May 3, 2026
Surely "Claude Opus 4.7" and "Claude Opus Latest" should be the same, right?
yunusabd•May 3, 2026
Yeah, so often people just mention "Opus" or "GPT" without a version, and those get mapped to the "-latest" suffix.
I thought I'd keep these as a rating for model families rather than specific models. But tbh it's probably better to remove them, too confusing.
Hari2028•May 3, 2026
How noisy is the sentiment classification? Feels like that could skew results a lot
yunusabd•May 3, 2026
From the comments that I've checked manually it's pretty good. You can go to the "User Ratings" tab in the Google Sheet and check some comments to get an idea.
Frannky•May 3, 2026
I am looking for a good alternative to Claude Code + Opus that is not Codex. I tried switching back to Opus 4.6. The attitude of 4.7 is what is most problematic: it's difficult to enforce checking stuff before answering, and it assumes it knows better than me and reality. Plus all the latest shenanigans they pulled. Pretty disgusted that I am still using them.
Frannky•May 3, 2026
I forgot to add the tendency to not own problems and fix them immediately, instead deflecting and saying it shouldn't be done now, it's not its responsibility, etc. Just terrible.
alxhslm•May 3, 2026
100% this! So often it complains failing unit tests are not its fault.
I suspect companies are deploying bots to shift sentiment around their products. I find metrics like this to be largely useless vs. actually just trying stuff out.
cheesecakegood•May 3, 2026
It's extra interesting because I think the model people should be talking about is actually not DeepSeek V4 Pro, but the Flash version. When accounting for cache hits, the input price (per OpenRouter) is effectively only 6 cents per million tokens (3 vs 14 cents hit/miss), and 28 cents on output. That's really good efficiency, and it's not a sale price like they are doing with V4 Pro, it's the normal price.
It's actually pretty difficult to find a good comparison model, because there isn't one. Again, for a 14/28-cent in/out model, ignoring cache: it scores just below GPT 5.4 Mini-xhigh (75/450) and Gemini 3 Flash (50/300) in intelligence. It's similar to Gemma 4 31B in some metrics (13/38), including cost, so it's not completely unheard of, but it's pretty notable that virtually everything else in the same region of most benchmarks will cost at least 5 times more (much, much more in very output-heavy contexts).
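For anyone checking the arithmetic: the blended input price is a weighted average of the hit and miss prices, and a roughly 73% cache-hit rate is what reproduces the "effectively 6 cents" figure; that rate is implied by the numbers above, not a published one:

    # Blended input price (cents per million tokens) as a weighted average
    # of cache-hit and cache-miss prices.
    def effective_price(hit_price, miss_price, hit_rate):
        return hit_rate * hit_price + (1 - hit_rate) * miss_price

    print(effective_price(3, 14, 8 / 11))  # ~6.0 cents per million tokens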
esperent•May 3, 2026
It's well priced but does that have much relevance for "state of the art coding models", specifically?
I wouldn't use Gemini 3 Flash or GPT 5.4 mini for anything except the most trivial work, although both are useful for basic exploratory work.
So I'm using a heavy model for the bulk of the work, and its cost so far outweighs the light model's that the light-model cost is effectively irrelevant.
julianlam•May 3, 2026
It's so interesting to see the wild pendulum swings of LLM sentiment here.
If one likes a model then it's capable of one-shotting entire apps.
Otherwise it's "only suitable for the most trivial tasks".
Never in between.
esperent•May 3, 2026
You're confusing "different people with different opinions" with "wild pendulum swings".
Personally my opinion in this regard is highly consistent over time.
julianlam•May 3, 2026
Interesting that Gemma 4 didn't crack the top 10.
I've been experimenting with the 26B-A4B model with some surprisingly good results (both in inference speed and code quality — 15 tok/s, flying along!), vs my last few experiments with Devstral 24B. Not sure whether I can fit that 35B Qwen model everybody's so keen on, on my 32GB unified RAM.
However I think I may be in the minority of HN commenters exploring models for local inference.
asnelt•May 3, 2026
Can you elaborate on your setup? What harness are you using with Gemma 4 on your 32GB machine?
tokkkie•May 3, 2026
more users = more complaints.
negativity just means popularity.
kimi...?
skeptrune•May 3, 2026
What a win it is for open source that qwen and kimi show up on this at all.
gertlabs•May 3, 2026
This is awesome data! I've been wanting to measure how closely hype aligns to our results at https://gertlabs.com/rankings
Subjectively, it seemed like DeepSeek V4 Pro had the highest hype/performance ratio (meaning high hype for lower performance). Whereas MiMo V2.5 Pro didn't get much attention despite being the top dog in the open weights world, not even an honorable mention in your chart :( ...
yunusabd•May 3, 2026
There is one mention of Mimo V2.5 Pro in the data by... you! In the UserRatings tab in the sheet, if you want to have a look.
Searching for it on HN shows very few results, that's why it's not showing up in the analysis yet. But it might in the future, once it gains traction.
I'll keep an eye on it, thanks for bringing it up!
> Quota exceeded for quota metric 'Read requests' and limit 'Read requests per minute per user' of service 'sheets.googleapis.com' for consumer 'project_number:849324395320'.
maybe cache this thing my guy you're just doing a bunch of reads
---
constructive suggestions
- you have a pretty cheap process here, and HN exposes historical posts by date. perhaps worth running this back the last 2 years to reconstruct a history of sentiment?
- introduce alternative sorts around the net positive/negative sentiments and absolute positive sentiments, similar to State of JS (https://stateofjs.com) - you'll see the gpt outperformance more
- matching of Opus 4.7 and Opus Latest seems sus?
yunusabd•May 3, 2026
Didn't expect it to get hammered like that, just added caching for the sheets request. Thanks, my guy ;)
Backfilling it further is definitely in the cards, I just want to stabilize the methodology first.
If a comment just mentions Opus without being more specific and in the absence of relevant context clues, it gets mapped to Opus Latest. So it's saying more about the model family than a specific version. Tbh I'll probably remove all "-latest" data points going forward, as I mentioned in another comment.
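The caching mentioned above can be as little as remembering the last response for a few minutes; a sketch, where the fetch function and TTL are assumptions:

    # Serve a cached copy of the Sheets response instead of hitting
    # sheets.googleapis.com on every page load.
    import time

    _cache = {"data": None, "fetched_at": 0.0}
    TTL_SECONDS = 300  # five minutes, chosen arbitrarily here

    def get_sheet_data(fetch):  # `fetch` performs the actual API read
        now = time.time()
        if _cache["data"] is None or now - _cache["fetched_at"] > TTL_SECONDS:
            _cache["data"] = fetch()
            _cache["fetched_at"] = now
        return _cache["data"]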
m0rde•May 3, 2026
> If a comment just mentions Opus without being more specific and in the absence of relevant context clues, it gets mapped to Opus Latest
Consider keeping this data point but instead calling it something like "Opus Unspecified". Let the user decide how to interpret it.
misbau•May 3, 2026
I find using both very helpful; in most cases I use Claude to build 70-80% of what I need and finish it off with Codex.
input_sh•May 3, 2026
Terrible metric that tells absolutely nothing about what's state-of-the-art. You might as well call this list the most astroturfed models on HN.
nonameiguess•May 3, 2026
Something that has been interesting to me for my entire life was the geek/jock cultural split in the US that seemed to crescendo in the 80s with the rise of popular nerd films and then the 90s when software started taking over the world. Being a pretty athletic kid who lettered in four sports, won a state championship, but also won math tournaments and spelling bees, it felt artificial to me. Plenty of high-level athletes have always been into video games, anime, and comic books, and are just as smart as people who can't run without tripping themselves and never learned to throw or catch any kind of ball.
Now it seems like it's come full circle from the other direction, too. We always had fandom elements in computing nerd culture. Editor wars. Language wars. Framework wars. Now that software tooling has become nearly human-like, mercurial, unpredictable, inconsistent in performance and experience from week to week, software developers have turned into sports scouts and ESPN talking heads, going so far as to make continually updating live power rankings the way commentators try to predict in season which team is looking most like they'll win the championship that year. You're in the position talent evaluators were in roughly the late 90s, relying mostly on eye test and rough proxy measures of raw potential. Simon Willison applies the pelican test the way draft combines put athletes through shuttle drills and test vertical leap to try and predict how well they'll do in real gameplay.
It leaves me wondering when we'll have the Bill James style analytics breakthrough in software talent evaluation or if such a thing is even possible. At least with athletes, practice can make them better and injury and age can make them worse, but you can't just silently swap out an entirely different mind and body under the same name and face. You guys are trying to assess the performance of constantly moving targets that can and do change capabilities and characteristics on a daily basis.