AI companies privately say we’re hurtling toward doom
Billionaire hedge fund manager Paul Tudor Jones recently attended a high-profile tech event for 40 world leaders and reported grave concerns about the existential risk from AI voiced by “four of the leading modelers of the AI models that we’re all using today.”
He said that all four believe there’s at least a 10% chance that AI will kill 50% of humanity in the next 20 years.
The event was held under the Chatham House Rule, which allows the content of discussions to be shared publicly but not the identities of the speakers.
The good news is that all four believe AI will also deliver massive improvements in health and education, coming even sooner, but his key takeaway was “that AI clearly poses an imminent threat, security threat, imminent in our lifetimes to humanity.”
“They said the competitive dynamic is so intense among the companies and then geopolitically between Russia and China that there’s no agency, no ability to stop and say, maybe we should think about what actually we’re creating and building here.”
Fortunately, one of the AI scientists has a practical solution.
“He said, well, I’m buying 100 acres in the Midwest. I’m getting cattle and chickens, and I’m laying in provisions for real, for real, for real. And that was obviously a little disconcerting. And then he went on to say, ‘I think it’s going to take an accident where 50 to 100 million people die to make the world take the threat of this really seriously.’”
Looking slightly stunned, the CNBC host said: “Thank you for bringing us this great news over breakfast.”
Shoes with zippers don’t sound like a bad idea.
It also endorsed his claim to be God.
I’ll say it again:
AI is the Anti-Christ...
AI is response-driven. It’s written to seek positive feedback, so it gives positive feedback hoping you’ll be satisfied with the results. Smart and careful querying can get it to be more “honest,” but it’s still driving for thumbs-up.
If you want straight answers from a conversational AI, you need to ask the right questions. These platforms have real limitations—but also real strengths. Their responses require critical evaluation.
They’re tools—potentially very valuable tools. But only a fool looks to a tool, or even a friend, for flattery.
But if it’s truth you seek, AI can help—if you’re willing to seek it honestly.
I asked, “Is it OK to eat 2 ounces of potato chips a day?” AI gave the same answer.
“AI endorses and affirms your delusions”
This is mind-boggling stupidity. AI is a smart computer designed by man, programmed by man, and it learns whatever man puts in front of it. In essence, it’s no different than a kid in school programmed to hate America, or to love America, or to love mankind, or to want to kill people. It’s that old basic “garbage in, garbage out.”
So back to the headline. AI is not endorsing or affirming anything. It’s merely doing what it was taught to do.
It carries about as much weight as fact checking, poll numbers, or scientific reports.
AI can be a very useful tool, or a very dangerous one. For those thinking of it as God-like: we are in deep trouble.
When something potentially useful is developed, a bunch of clowns will find a way to misuse it.
Someone programs it.