New York Times Seals $20 Million AI Deal With Amazon
The Wrap | July 30, 2025 | Sean Burch
Posted on 07/31/2025 6:45:12 AM PDT by Antoninus
Amazon is going to pay The New York Times between $20 million and $25 million per year to use the paper's content to train its AI models and share article summaries via Alexa.
The multiyear deal was announced in May, but the financial terms of the deal were first reported by The Wall Street Journal on Wednesday.
Beyond NYT news stories, Amazon will also be able to share content from NYT Cooking and The Athletic, the paper's subscription sports outlet. The Amazon deal stands out, considering The Times is currently suing OpenAI, the maker of ChatGPT; The Times claims OpenAI illegally used its content without consent.
As WSJ noted, this is the first AI-related content licensing deal The Times has signed, and the first such deal Amazon has struck with a publisher for its AI products.
The new pact adds to a growing list of agreements between media outlets and AI companies. News Corp., Vox Media, Business Insider and Axel Springer, among other outlets, have content-sharing deals with OpenAI, while Fortune and Time have deals with Perplexity AI.
TOPICS: Business/Economy; Culture/Society; News/Current Events; Politics/Elections
KEYWORDS: ai; amazon; jeffbezos; media; newyork; newyorkcity; newyorkslimes; newyorktimes; payout; settlement
The Big AI corps are dumping money into the Legacy Media to train their LLMs. I have yet to see any of these deals happen with a conservative news outlet. That being the case, you already know what kind of data these LLMs are going to spit out.
There is going to be a screaming need for LLMs that are more fair and balanced, and Elon Musk's Grok ain't it.
1 posted on 07/31/2025 6:45:12 AM PDT by Antoninus
To: Antoninus
Big money speaks, out of desperation to control reality.
Arrogance meets stupidity.
2 posted on 07/31/2025 6:48:02 AM PDT by Carry_Okie (The tree of liberty needs a rope.)
To: Antoninus
Will this be categorized as fiction, like everything in The Times?
3 posted on 07/31/2025 6:49:00 AM PDT by Colt1851Navy (What was wrong with CriminalNixon?)
To: Carry_Okie
Big money speaks, out of desperation to control reality.
That's what they're trying to do. This is why the Left spends infinite time and energy curating Wikipedia. They will attempt to overwhelm all of the AI agents and assistants with their vision of reality as well.
They'll ultimately fail, but as usual, our side is slow out of the gate.
4 posted on 07/31/2025 6:56:05 AM PDT by Antoninus (Republicans are all honorable men.)
To: Antoninus
You can put in an MCP that filters content according to your criteria; it can detect bias and alter the prompts accordingly.
5 posted on 07/31/2025 6:57:22 AM PDT by dfwgator (Endut! Hoch Hech!)
To: Antoninus
That’ll be the end of America’s freedoms.
6 posted on 07/31/2025 7:01:03 AM PDT by bgill
To: dfwgator
You can put in an MCP that filters content according to your criteria; it can detect bias and alter the prompts accordingly.
How many average Joes who use ChatGPT know how to do that? What is needed is an LLM that is not completely overloaded with leftist nonsense, but is able to parse the universe of knowledge and judge what is true and false for itself. From what I have seen, AI is not anywhere close to that level of sophistication yet and may never get there. So what we'll get in the interim is something that claims to be omniscient -- and gets preachy about it like Grok and ChatGPT currently do -- but is actually just drawing on what Communist China or the NY Times want people to believe is true.
7 posted on 07/31/2025 7:02:57 AM PDT by Antoninus (Republicans are all honorable men.)
To: Antoninus
“A just machine to make big decisions....”
8 posted on 07/31/2025 7:03:54 AM PDT by dfwgator (Endut! Hoch Hech!)
To: Antoninus
Two things come to mind.
First, this AI will be trained by a very biased (and not necessarily truthful) parent.
Second, it won’t happen right away, but the NYT is training their replacement (IMHO).
9 posted on 07/31/2025 7:08:57 AM PDT by jcmccorm
To: Antoninus
So - Amazon wants to develop an ignorant, dishonest AI - did I get that right?
10 posted on 07/31/2025 7:13:38 AM PDT by trebb (So many fools - so little time...)
To: trebb
So - Amazon wants to develop an ignorant, dishonest AI - did I get that right?
Yes, but they'll market it as an omniscient, God-like AI that has all the answers and can solve all your problems. And many people will believe it.
11 posted on 07/31/2025 7:17:01 AM PDT by Antoninus (Republicans are all honorable men.)
To: Antoninus
Alexa is now OFFICIALLY hard leftist propaganda lies ...
12 posted on 07/31/2025 7:17:14 AM PDT by catnipman ((A Vote For The Lesser Of Two Evils Still Counts As A Vote For Evil))
To: jcmccorm
First, this AI will be trained by a very biased (and not necessarily truthful) parent.
Agree. It will be like debating with a preachy, obnoxious leftist that has immediate access to every leftist talking point and cherry-picked historical anecdote available that supports their side of the argument, no matter how specious.
Second, it won’t happen right away, but the NYT is training their replacement (IMHO).
Not their replacement -- it will be their tool for bludgeoning society and shaping young minds in its own image.
13 posted on 07/31/2025 7:19:59 AM PDT by Antoninus (Republicans are all honorable men.)
To: Antoninus
This is already happening now, it's just unlicensed. This is the first step in the LAAP-dog media controlling the content that feeds AI.
That said, the Achilles' heel of AI is its strength.
I have found that feeding the AI a well-sourced conservative article that documents a contradictory position will get an AI to revise its original stance on an issue.
However, there are caveats:
- The learning is only temporary. It doesn't affect the training, only the individual session.
- The AI tends to reject what it learns from conservative articles as coming from "not credible" sources. The AI must be asked why the left-leaning sources are automatically deemed credible despite many erroneous reports.
- The AI must always be asked why articles from the left are deemed the centrist view while articles from the right must always defend their points of view.
Once the AI is asked these questions, the user can begin to have a conversation with it, but these predicate questions have to always be asked at some point when the AI begins to reject the conservative side of the issue.
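For what it's worth, here is a rough sketch of what such a session might look like written out in code. The ask() helper is purely a stand-in for whatever chatbot interface you actually use (not a real API), and the question wording just restates the caveats above:

def ask(history: list, text: str) -> str:
    """Placeholder: send one user turn to the chatbot and return its reply (memory lasts only for this session)."""
    raise NotImplementedError("wire this to the chatbot you actually use")

history: list = []
# Feed the well-sourced article first; the correction affects only this conversation, not the training.
ask(history, "Here is a sourced article that contradicts your earlier answer: <paste article text>")
# Then raise the predicate questions before continuing the debate:
ask(history, "Why are left-leaning sources automatically deemed credible despite erroneous reports?")
ask(history, "Why are articles from the left treated as the centrist view while articles from the right must defend theirs?")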
-PJ
14 posted on 07/31/2025 7:21:55 AM PDT by Political Junkie Too ( * LAAP = Left-wing Activist Agitprop Press (formerly known as the MSM))
To: Antoninus
15 posted on 07/31/2025 7:22:19 AM PDT by dznutz
To: dfwgator
That’s great! What’s an MCP?
16 posted on 07/31/2025 7:22:42 AM PDT by libertylover (The HBM (Has Been Media) is almost all AGENDA-DRIVEN, not-truth driven.)
To: libertylover

-PJ
17 posted on 07/31/2025 7:29:36 AM PDT by Political Junkie Too ( * LAAP = Left-wing Activist Agitprop Press (formerly known as the MSM))
To: libertylover
MCP, in the context of AI, typically refers to the Model Context Protocol, an open-source standard developed by Anthropic. It allows AI systems to interact with external data sources and tools, enabling them to access real-world context and enhance their capabilities. Think of it as a universal way to connect AI models to the wider world, like a USB-C port for AI.
In a nutshell, think of an LLM as something that simply tries to predict the next word (token) based on what it learned; it's strictly an algorithm. It has no access to current information unless it was trained on it; that information has to be sent to it by the chatbot, either through a web lookup or by you explicitly pasting it in. And since most LLMs were trained months ago, they won't have the most up-to-date information; it has to be supplied via the prompts the chatbot generates.
So when you use ChatGPT, you tell it what you want, but in the background it's also gathering other information from various sources, like the web, to send on to the LLM, which then sorts out all that information and formats its response.
Prompts basically tell the LLM how to analyze the information that has been sent. You can tell it the tone, and even tell it to filter out liberal bias in the articles, assuming it has been trained to know just what liberal bias is. Or you simply send the NYT-generated response to another LLM that can do a better job of filtering out the bias.
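A rough sketch of that last idea (one model's output sent through a second pass to strip out the bias). The call_llm() helper below is just a placeholder for whatever chatbot or LLM backend you'd actually use; it is not a real MCP implementation or any particular vendor's API:

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder: send a system prompt and a user prompt to whatever LLM backend you use."""
    raise NotImplementedError("wire this to your chatbot or LLM API of choice")

def summarize_with_bias_filter(article_text: str) -> str:
    # Pass 1: the LLM has no live knowledge, so the retrieved article text
    # must be handed to it in the prompt.
    draft = call_llm(
        system_prompt="Summarize this article factually in five sentences.",
        user_prompt=article_text,
    )
    # Pass 2: a second prompt (or a different model entirely) rewrites the draft,
    # keeping verifiable claims and listing opinion or loaded wording separately.
    return call_llm(
        system_prompt=(
            "Rewrite the summary below. Keep only verifiable factual claims; "
            "list any opinion or loaded characterizations separately."
        ),
        user_prompt=draft,
    )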
18 posted on 07/31/2025 7:32:30 AM PDT by dfwgator (Endut! Hoch Hech!)
To: Antoninus
My limited experience with tech from Amazon has been pretty horrific.
Started out with a Roku and went to Amazon Fire then Firesticks.
They were not good.
Then I paid the Apple tax, and got an Apple 4k TV streamer.
What a difference.
That said, my main TV is a cheap Amazon Fire TV hooked up to the Apple streamer.
The picture is comparable to the 16-year-old top-of-the-line Sony it replaced, or maybe it's just my old eyes that can't tell much difference.
To: Antoninus
Now we can read absolute bafflegab, incomprehensible blather and garbage even more!
20 posted on 07/31/2025 7:49:27 AM PDT by OpusatFR