Working with The Associated Press to provide fresh results for the Gemini app

86 points 63 comments 9 hours ago
331c8c71

We've come all the way from the decentralization and rebelliousness of the early internet to a landscape that is becoming suffocatingly sterile (=lifeless).

I'm much more excited about eventual emergence of underground homebrew models without any guardrails...

umvi

> I'm much more excited about eventual emergence of underground homebrew models without any guardrails

Not if AI gatekeepers and interest groups have anything to say about it. AI without guardrails could be classified as a "weapon" and made illegal, such that we are only allowed to use models produced by regulated entities that meet certain "safety standards" (like how medical software has to be approved by the FDA).

Edit: oh, I guess "underground" could be interpreted in a way that these models are still produced and distributed (but secretly, illegally, etc)

srid

Yes. What do you think of Grok AI's real-time information being sourced from posts by individual users (similar to the early internet), as opposed to established media (like AP)?

smithcoin

FYI: if you want to turn this off in Workspace, you'll need to go here https://admin.google.com/ac/managedsettings/47208553126 and here https://admin.google.com/ac/managedsettings/793154499678.

[deleted]
sebmellen

The hero we needed

heavyarms

There's not a lot of detail in the announcement but I assume this is some kind of RAG system. I wonder if it will cover some short time period (past week, past month?) or if they are trying to cover the whole time period since the knowledge cutoff of the current model.

urbandw311er

My guess is that they’ll just stuff a few daily headlines into the prompt so that queries about current affairs have some context, rather than re-training the model. Total guess obviously.

PhilippGille

RAG isn't re-training. You can have vector embeddings of all AP news in a vector DB, then when prompted, find related news via similarity search, and add the most similar (and thus related) ones to the context.

Here's some simple example code in Go, for RAG with 5000 arXiv paper abstracts: https://github.com/philippgille/chromem-go/tree/v0.7.0/examp... (full disclosure it's using a simple vector DB I wrote)
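For readers who want to see the shape of that flow without any particular library, here is a minimal, self-contained sketch in Go: a toy in-memory corpus, cosine similarity over hand-made embeddings, and top-k retrieval stuffed into a prompt. The article texts, the embeddings, and the topK helper are all invented for illustration; a real system would call an embedding model and a proper vector DB like the one linked above.

    package main

    import (
        "fmt"
        "math"
        "sort"
    )

    // Article is one news item with a precomputed embedding.
    // In a real system the embedding would come from an embedding model;
    // here it is just a hand-made placeholder vector.
    type Article struct {
        Title     string
        Body      string
        Embedding []float64
    }

    // cosine returns the cosine similarity between two vectors.
    func cosine(a, b []float64) float64 {
        var dot, na, nb float64
        for i := range a {
            dot += a[i] * b[i]
            na += a[i] * a[i]
            nb += b[i] * b[i]
        }
        if na == 0 || nb == 0 {
            return 0
        }
        return dot / (math.Sqrt(na) * math.Sqrt(nb))
    }

    // topK returns the k articles most similar to the query embedding.
    func topK(query []float64, articles []Article, k int) []Article {
        sorted := append([]Article(nil), articles...)
        sort.Slice(sorted, func(i, j int) bool {
            return cosine(query, sorted[i].Embedding) > cosine(query, sorted[j].Embedding)
        })
        if k > len(sorted) {
            k = len(sorted)
        }
        return sorted[:k]
    }

    func main() {
        // Hypothetical corpus; embeddings are 3-d stand-ins for illustration.
        corpus := []Article{
            {"Fed holds rates", "The central bank kept rates unchanged...", []float64{0.9, 0.1, 0.0}},
            {"Storm hits coast", "A major storm made landfall...", []float64{0.1, 0.8, 0.1}},
            {"Markets rally", "Stocks rose after the rate decision...", []float64{0.8, 0.2, 0.1}},
        }

        // Embedding of the user's question (would come from the same embedding model).
        queryEmbedding := []float64{0.85, 0.15, 0.05}

        // Retrieve the most relevant articles and stuff them into the prompt.
        prompt := "Answer using only the context below.\n\nContext:\n"
        for _, a := range topK(queryEmbedding, corpus, 2) {
            prompt += "- " + a.Title + ": " + a.Body + "\n"
        }
        prompt += "\nQuestion: What did the central bank do with interest rates?"

        fmt.Println(prompt) // this prompt would then be sent to the LLM
    }

No retraining is involved: only the retrieved snippets change from query to query, while the model itself stays fixed.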

itsibitzi

As someone who works in the news industry I find it pretty sad that we've just capitulated to big tech on this one. There are countless examples of AI summaries getting things catastrophically wrong, but I guess Google has long since decided that pushing AI was more important than accurate or relevant results, as can also be seen with their search results that simply omit parts of your query.

I can only hope this data is being incorporated in some way that makes hallucinations less likely.

nerdjon

Unfortunately this has just been the reality over the last couple of years. People just ignore the hallucination problem (or try to say it isn't a big deal). And yet we have seen, time and time again, examples of these models being given something, told to summarize it, and still hallucinating important details. So you can't even make the argument that the data it was given is flawed or something.

These models will interject information from their training whether it is relevant or not. This is just due to the nature of how these models work.

Anyone trying to argue that it doesn't happen that often or anything is missing the key problem. Sure it may be right most of the time, but all that does is build a false sense of security and eventually you stop double checking or clicking through to a source. Whether it is a search result, manipulating data, or whatever.

This is made infinitely worse when these summaries are one and done: a single user is going to see the output, and no one else will see it to fact-check. It isn't like an article being wrong, where everyone reads the same article, can comment that something is wrong, and it gets updated, and so on and so forth. That feedback loop is non-existent with these models.

umvi

> Anyone trying to argue that it doesn't happen that often or anything is missing the key problem. Sure it may be right most of the time, but all that does is build a false sense of security and eventually you stop double checking or clicking through to a source. Whether it is a search result, manipulating data, or whatever.

Same problem existed before AI summaries.

"Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know."

– Michael Crichton (1942-2008)

extr

Hallucinations are not a big problem with SOTA models at this point, especially grounded against an actual news article.

scarface_74

The examples that have made news were with iOS. iOS doesn’t really do a summary of the content. It just tries to do a summary of the headline.

The on-device model that it uses is also literally 1% the size of large models like Gemini.

paxys

The news industry capitulated to big tech the moment it got reliant on big tech for the majority of its revenue. The entire media landscape today is the direct result of that.

asdff

Take it a step further back, and you will see that the media landscape capitulated to Big Anything a long time ago. Probably generations ago, if we consider people like William Randolph Hearst and other newspaper men.

[deleted]
dismalaf

> I can only hope this data is being incorporated in some way that makes hallucinations less likely.

The key word is "real-time". LLMs can't be trained in real time, so it's obviously going to call an API that pulls up and reads from AP news, just like their search engine does.
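A minimal sketch of what such a call-out could look like, assuming nothing about the real Google/AP pipeline (the feed URL, the JSON shape, and the helper names below are all placeholders): fetch the latest items from a news endpoint and prepend them to the prompt so the model answers from fresh text rather than stale training data.

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Headline is the minimal shape we assume a news feed returns.
    type Headline struct {
        Title       string    `json:"title"`
        Summary     string    `json:"summary"`
        PublishedAt time.Time `json:"published_at"`
    }

    // fetchLatest pulls recent items from a news feed endpoint.
    // The URL is a placeholder; the actual integration is not public.
    func fetchLatest(feedURL string) ([]Headline, error) {
        resp, err := http.Get(feedURL)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        var items []Headline
        if err := json.Unmarshal(body, &items); err != nil {
            return nil, err
        }
        return items, nil
    }

    // buildGroundedPrompt prepends fresh headlines to the user's question so the
    // model answers from retrieved text rather than its training data.
    func buildGroundedPrompt(question string, items []Headline) string {
        prompt := "Use only the following recent reporting to answer.\n\n"
        for _, h := range items {
            prompt += fmt.Sprintf("[%s] %s: %s\n", h.PublishedAt.Format("2006-01-02"), h.Title, h.Summary)
        }
        return prompt + "\nQuestion: " + question
    }

    func main() {
        // Placeholder feed URL, for illustration only.
        items, err := fetchLatest("https://example.com/news/latest.json")
        if err != nil {
            fmt.Println("fetch failed:", err)
            return
        }
        fmt.Println(buildGroundedPrompt("What happened in the markets today?", items))
    }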

notatoad

I don't think you can assume that - "real time" in this context could just mean they feed every article into their training system as soon as it's published.

iamjackg

That seems more unlikely to me -- training is not free and takes a long time, so it would not result in "[enhancing] the usefulness of results displayed in the Gemini app" and it being "particularly helpful to our users looking for up-to-date information."

Fine-tuning, which is cheaper and faster, has been proven not to be a good way to "teach" models new facts.

I think what's most likely here is that Gemini will have access to a form of RAG based on a database of AP articles that gets updated in real-time as new articles are published.
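A sketch of the write path that comment describes, assuming nothing about Google's actual infrastructure: each newly published article is embedded and appended to an in-memory store, so retrieval immediately sees it without any retraining. The Store type and the dummy embed function are invented for illustration.

    package main

    import (
        "fmt"
        "sync"
    )

    // Doc is one stored article with its embedding.
    type Doc struct {
        ID        string
        Text      string
        Embedding []float64
    }

    // Store is a tiny in-memory vector store that can be appended to as new
    // articles are published, so retrieval always sees the latest reporting.
    type Store struct {
        mu   sync.RWMutex
        docs []Doc
    }

    // Add indexes a freshly published article. embed stands in for a call to an
    // embedding model; its details are not part of the announcement.
    func (s *Store) Add(id, text string, embed func(string) []float64) {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.docs = append(s.docs, Doc{ID: id, Text: text, Embedding: embed(text)})
    }

    // Len reports how many articles are currently searchable.
    func (s *Store) Len() int {
        s.mu.RLock()
        defer s.mu.RUnlock()
        return len(s.docs)
    }

    func main() {
        // Dummy embedding function purely for illustration.
        embed := func(text string) []float64 { return []float64{float64(len(text)), 1, 0} }

        store := &Store{}
        // Each new wire article is indexed as soon as it arrives; no retraining.
        store.Add("ap-001", "Breaking: central bank holds interest rates steady.", embed)
        store.Add("ap-002", "Storm makes landfall on the coast overnight.", embed)

        fmt.Println("searchable articles:", store.Len())
    }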

CuriouslyC

They can't deploy that fast, and people want to pin model versions, so it's not feasible anyhow.

summerlight

If there's any company that can afford "real-time LLM training" at this moment, I'm 100% sure they will win this AI race, since they probably have at least ~10x the compute of their competitors. Of course, no one can do that right now.

dismalaf

Have you ever asked an LLM what time it is? It takes months to train them...

tomrod

But it can be trained to access basic, limited APIs to get current information.

dismalaf

Yes, which is literally what I suggested it does in my original comment.

tomrod

<3 thanks for calling out what I missed. I didn't realize you were supporting an earlier comment on the chain.

asdff

Uhh, has your head been in the sand? Look at the average output of your industry without AI. It gets things wrong. It misleads. It hallucinates. It has incentives that fundamentally differ from what the readership seeks in news. The fact that your industry took so readily to the technology to output ever more garbage says it all about the state of the industry vs any condemnation of the fundamental technology.

onlyrealcuzzo

Gemini is the leading model with the lowest hallucination rate: https://www.visualcapitalist.com/ranked-ai-models-with-the-l...

I would expect that number to go down from 1.3% to below 1% over the course of the year.

There's always a chance what you're reading is wrong - due to purposeful deception, negligence, or accident.

Realistically, hardly anything is 100% accurate besides math.

itsibitzi

I think people really don't understand the effort, care and risk that goes into producing quality reporting.

I work with investigative reporters on stories that take many months to produce. Every time we receive a leak there is an extensive process of proving public interest before we can even start looking at the material. Once we can look at it, we have to be extremely careful with everything we note down to make sure that our work isn't seen as prejudiced if legal discovery happens. We're constantly going back and forth with our editorial legal team to make sure what we're saying is fair and accurate. And in the end, the people we're reporting on are given a chance to refute any of the facts we're about to present. Any mistakes can result in legal action that can ruin the lives of reporters and shut down companies.

Now, imagine I were to go to a reporter who has spent 6 months working on a story about, for example, how a high-profile celebrity sexually assaulted multiple women, how the royal family hides its wealth and is exempt from laws, or how multinational corporations use legal loopholes to avoid paying taxes, and said, "oh, 1% of people reading this will likely be given some totally made-up details".

Given that stories often have more than a million impressions, this would leave tens of thousands of people with potentially libellous "hallucinations".

It simply should not be allowed.

LLMs have their place, for sure, but presenting the news is not it.

tokioyoyo

Although I agree with every single sentence you've said, we've seen in the past decade how only a very small percentage of people actually care about the content of the news. Everyone just discusses and gets their information from the headlines, so this is a natural consequence of "let's just summarize it to a couple of sentences since nobody reads it anyways".

simonw

The Gemini models themselves may score well on this, but Google's feature implementations are a whole other thing. AI Overviews frequently take untrustworthy search results (like a fan fiction plot outline for Encanto 2) and turn those into confidently incorrect answers. https://simonwillison.net/2024/Dec/29/encanto-2/

kccqzy

And doesn't bringing in The Associated Press solve this problem? No need for the AI to decide what is trustworthy or not. For the vast majority of people everything The Associated Press publishes is trustworthy.

stusmall

1.3% isn't great. I'd rather just go, and pay, directly to trusted news sources. Everyone has a different tolerance for falsehoods and different priorities, I guess.

jonas21

What's the error rate for human journalists? Based on my experience, I'd guess it's much higher than 1.3%.

stusmall

As others have already pointed out, feeding in these new articles isn't magically going to make the models any more accurate. The hallucinations are going to be on top of any errors in the data sources.

I'm not replying to point that out, I think others have done a better job. It's mostly that this conversation made me think of this classic Babbage quote that I've always enjoyed.

"On two occasions I have been asked, – 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question"

marsavar

Except when that happens, a clarification is almost always added at the bottom of the article ("This article was amended on [date]. An earlier version said xxx" or some variation thereof). You're not gonna get a second push notification from an AI summary saying "Oopsies, the previous notification was wrong". Once it's out, it's out, and that sort of damage is difficult to repair.

[deleted]
onlyrealcuzzo

Yes, but that's going to be on top of the ~1.3% hallucination rate (granted, there's always some very small chance it hallucinates the truth when the article had it wrong - but that's basically not worth considering).

marsavar

Anything other than 0% is borderline immoral. Imagine sending a push notification to somebody's phone with a completely made-up headline summary. Even if it happens once in a hundred times, that's too much. Things like that slowly but surely erode trust and make it harder and harder to trust anything that's generated by AI, especially when it comes to news, where trustworthiness is essential, and probably the main reason people pay for news. See for example https://www.bbc.co.uk/news/articles/cge93de21n0o

anomaly_

This is a ridiculous standard. News headlines at the moment would have an error rate wildly above 1.3%. The issue in the articles about Apple having trouble with LLM headlines is that the on-device model is weak and is trying to compress too much into too few characters. I'd guess the chance of Gemini incorrectly summarising an article is almost 0%.

scarface_74

Have you ever read a news article on a subject where you have expertise and knew it was inaccurate? The news is probably more inaccurate than you think.

I bet you think the news is accurate all other times. It’s called “Gell-Mann Amnesia”

baq

You’d have to pay quite a bit to get journalists to answer your questions specifically.

The whole isn’t about generating news articles, it’s about getting the model up to date on facts so it can synthesize a newspaper for you. I’d say it’s a way to get journalists to be journalists again instead of clickbait composers - as long as the model doesn’t inject clickbait there itself. I don’t trust Google to not do it sometime, but they aren’t doing it now and the infrastructure is being made for others to consume when Gemini suffers from inevitable enshittification.

stusmall

> You’d have to pay quite a bit to get journalists to answer your questions specifically.

This isn't what I meant. I pay directly for subscriptions/donations to news organizations that employ journalists who do this original reporting. I don't want a middleman that just messes it up. This goes for LLMs and for free news sites that don't do much more than summarize original reporting. I've seen more than a few cases where they inject opinions, mess up facts, or put the focus on what was originally a small side point in the article.

peanutz454

> There's always a chance what you're reading is wrong - due to purposeful deception, negligence, or accident.

I am quite certain my personal hallucination rate is more than 1.3%. Obviously we want our machines to be better than us, but my doctor once said folic acid is not a vitamin.

micromacrofoot

It's responsibility laundering — AI can say whatever they want and they can shrug it off by saying bots are sometimes unreliable

sandspar

Journalism is dying, killed by its own excesses and by the internet. Google is offering it life support. The other option is death.

contagiousflow

And what are we going to feed into the models without the journalism?

asdff

Aggregate the direct primary inputs. Weather sensors. Wildfire cameras. Police scanners. Court proceedings. Changes to ordinances. New LLC filings. Bankruptcies. Birth records. Death records. The whole corpus of society that is automatically logged and used as the primary data from which people then perform research or develop journalism.

That is what you siphon up. And as output you can Mad Lib an article, just like those johnny-on-the-spot AP reporters do anyhow, filling in a skeleton article about a death or an attack or a banquet or an award show with the relevant input concerning the event. The LLM isn't even used for finding this input, just for adjusting the boilerplate, perhaps to tailor the news specifically to the reader's own inclinations based on engagement with other articles collected via fingerprinting.

baq

Exactly the right question. Journalism has been given a lifeline, a way out of the attention economy.

great_tankard

You might want to look into why "journalism is dying" and whether Google (and Facebook) had anything to do with it.

Mainsail

Theoretical question: what is replacing it? Is it this? Is it something else? Nothing? Curious to hear people's thoughts on this.

mattlondon

Potentially social media to a certain degree, at least for raw "news" of what is happening.

Of course that will be shit, but there we are.

airstrike

I'd wager 95% of what we call journalism today could simply disappear with no replacement and the world would be better off.

spankalee

"killed by its own excesses"?

sandspar

Doubling and tripling and quadrupling down on behaviors that most consumers wish they'd stop doing. You used to work at Google so you must be familiar with how groupthink operates.

xnx

The timing of this announcement is surely meant to contrast with OpenAI, which is currently in court being sued by The New York Times.

throw7

I wonder what the byline will look like. I'm sure their current crop of beat reporters is enthusiastic about these developments.

sharpshadow

Is there an option to get the news items in a feed immediately, as they come in?

nxobject

I'm surprised I'd never asked that question before, since the AP and other syndicates began as teletype wire feeds. What do modern newsrooms use as the modern replacement of the AP "wire"?

eichi

One of the CEOs was really competitive and has been one of the few legacy assets still contributing to the current Google; the other legacy assets are pools of competitive people who hadn't found the best place to show their ability. The current Google is just a target for people with good profiles.

bluSCALE4

Google, what we really want are ads.

bangaroo

wow! this sure is great! gemini has worked so great up until this point - for example, i learned that a man who died in 1850 is one of three private owners of the airbus a340-600 last week! i'm so glad gemini exists and i absolutely cannot wait to experience a world wherein people get news from it.
