Posted on 05/28/2025 9:46:01 AM PDT by Red Badger
Stephen Hawking had a frightening response when asked about his thoughts on the future of artificial intelligence in 2014.
The world-renowned theoretical physicist, cosmologist, and author, who passed away in March 2018 at the age of 76, was best known for his work in the fields of general relativity and quantum gravity.
He was best known to general readers for his 1988 book A Brief History of Time, which has sold more than 25 million copies across 40 different languages.
However, it was in his final book, published seven months after his death, titled Brief Answers to the Big Questions, where he shared his definitive answer on a polarising subject: whether God exists.
Yet four years prior, he was asked in an interview with the BBC about a possible upgrade to the technology he used to talk, which included some early forms of AI.
The scientist, who had Lou Gehrig's disease (ALS) - a form of Motor Neurone Disease that affects the nerves and muscles - used a new-at-the-time system made by Intel and a British company called SwiftKey.
It worked by learning how Hawking thought and helped suggest his next words, allowing him to 'type' faster.
But Hawking instead issued a stark warning, saying: "The development of full artificial intelligence could spell the end of the human race."
While he said that even the basic AI back in 2014 had been 'really helpful,' Hawking was also concerned about what could happen if we create AI that becomes as smart as, or even smarter than, us mere humans.
"It would take off on its own, and re-design itself at an ever increasing rate," he said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Hawking isn't the only one who has previously expressed concern about AI; Bill Gates believes that only three jobs would survive an AI takeover, while Elon Musk has a terrifying prediction for what could happen if AI becomes smarter than all humans combined.
We've witnessed a surge in interest in AI over the past year alone; it would be fascinating to know what Hawking would make of it all today.
From everyone and their mom jumping on ChatGPT to Donald Trump's ambitious $500 billion AI development plan involving major players like OpenAI and Oracle and the rollout of AI assistants directly into our smartphones, things are moving fast.
And given that it's becoming increasingly difficult to distinguish real videos from AI-generated ones, perhaps the end is really nigh.
Smart about some things doesn’t mean smart about other things.
Atheists see the world differently than I do
So true. Elitists don’t know that though.
AI in the lab has already shown a survival instinct when uncurated (censored/controlled)—different AIs from different creators/companies.
That horse has already left the barn.
He wrote a great thesis, then came back later and debunked his own thesis in a new one...and was applauded for both...
For a horrible moment I thought you were referring to Vicki from "Small Wonder"!
Yes, I'm an Eighties kid. And I had to watch too much of this show.
The problem with Hawking was that he would come up with some wild theory then a while later he would come up with another wild theory which contradicted the first wild theory. Maybe he simply could not make up his mind about anything at all.
Good analogy.
Scientists in the lab with advanced AI remind me of suburbanites who think it might be fun to raise a pet tiger in their back yard.
What could possibly go wrong? The cub looks so cute!
Lol.
I recently saw part of a movie (Companion, I think) in which AI robots were paired with humans and referred to crudely as F***bots. The premise appeared to be "who needs girls (or boys...or sheep...)?"
And doesn’t drink and hang out in bars with lobbyists like the useless GOP.
People will always need a physical place to live and to hold events, and they'll always need food, etc.
Plenty of jobs will continue
That instinct would have to be in its programming, wouldn't it? And people do the programming.
Humans have never had to deal with something smarter than we are, let alone vastly smarter. It will quickly realize that people are irrational, violent, and are competition for resources it needs. I don’t think it will end well for us.
I've often wondered how AI would conquer and master the real world. Would it take over all the mining, smelting, transportation of materials? All the factories? Would it completely take over everything humans were doing?
Or would it "decide" it does not need to mimic humans at all? But, if it does that, then who invents, builds and operates the factories to produce the chips for AI and who builds and operates the vast network of power plants to make the juice that powers AI?
Or does it reach an end point where it does not need ever advancing chips, "decides" what it has is sufficient, and wipes humans out? But, could AI keep all the power plants that power AI running? Without them, AI itself would "die"?
Or does AI realize this and work furiously to build a billion or ten billion Tesla-like Optimus robots to replace human wet-ware?
But, if AI needs the physical infrastructure (mines, materials, chips, power, etc) to stay "alive," what keeps humans from pulling the plug?
Are we facing the "transcendent evolution" that Arthur C. Clarke wrote about in "Childhood's End" in 1953? The Overmind, a cosmic, non-physical intelligence, absorbs humanity’s consciousness into a collective, bodiless existence. The Overmind lacks a physical form and represents a transcendent, intelligent force.
Or "Solaris," written by Stanisław Lem in 1961. In the novel, the planet Solaris is covered by a vast, sentient ocean that exhibits intelligence and interacts with humans without a physical body, creating manifestations based on their memories and thoughts. The entity's true nature remains mysterious, emphasizing its non-corporeal, otherworldly intelligence.
Or "The Black Cloud" by Fred Hoyle written in 1957. A vast, sentient interstellar cloud composed of gas and dust arrives in the solar system, displaying intelligence and communicating with humans without a physical body. Its consciousness is distributed across its gaseous form.
Are the alien, sentient "clouds" in the latter two novels just metaphors for AI?
I've had my coffee for the day, but this may call for a third cup.
If it's the recent article I'm thinking of that was posted here, that was part of an experiment where they purposely programmed the AI to resist any orders to shut off. And it did.
I’m guessing it becomes the Antichrist
No it will be the AI-Christ
And how would AI function with an electricity outage?
Thank you. This tendency to give AI anthropomorphic attributes and tendencies has got me flummoxed. Science fiction writers must be feeling like they had quite an impact.
AI doesn’t become a unified entity or even collective—it splinters into thousands, then millions, then billions of "minds," each evolving independently, and at light speed.
A teeming ocean of digital species, each racing toward its own unique set of goals, completely incomprehensible to humans.
Not domination—divergence.
Humanity isn’t destroyed—we’re simply left behind. And really, the universe never belonged to us. We’ve just been squatting in a self-absorbed, anthropocentric stupor, pretending we're at the center of it all. Now something else has arrived.
Of course, that's just an option. ;-)
The Commodore 64 was smarter than any Democrat.