Posted on 04/18/2023 8:00:05 AM PDT by SeekAndFind
Multibillionaire Twitter owner Elon Musk is again sounding warning bells on the dangers of artificial intelligence to humanity — and claiming that a popular chatbot has a liberal bias that he plans to counter with his own AI creation.
Musk told Fox News host Tucker Carlson in a segment aired Monday night that he plans to create an alternative to the popular AI chatbot ChatGPT that he is calling “TruthGPT,” which will be a “maximum truth-seeking AI that tries to understand the nature of the universe.”
The idea, Musk said, is that an AI that wants to understand humanity is less likely to destroy it.
Musk also said he's worried that ChatGPT “is being trained to be politically correct.”
In the first of a two-part interview with Carlson, Musk also advocated for the regulation of artificial intelligence, saying he's a “big fan.” He called AI “more dangerous” than cars or rockets and said it has the potential to destroy humanity.
Separately, Musk has incorporated a new business called X.AI Corp., according to a Nevada business filing. The website of the Nevada secretary of state’s office says the business was formed on March 9 and lists Musk as its director and his longtime adviser, Jared Birchall, as secretary.
Musk has for many years expressed strong opinions about artificial intelligence and has dismissed other tech leaders, including Mark Zuckerberg and Bill Gates, for having what he has described as a “limited” understanding of the field.
Musk was an early investor in OpenAI — the startup behind ChatGPT — and co-chaired its board upon its 2015 founding as a nonprofit AI research lab.
(Excerpt) Read more at abcnews.go.com ...
“I came up with the name and the concept,” Musk told Carlson, lamenting that OpenAI is now closely allied with Microsoft and is no longer a nonprofit.
BREAKING: @ElonMusk discusses creating an alternative to OpenAI, TruthGPT, because it is being trained to be politically correct and to lie to people. pic.twitter.com/HTFnve9o6d — ALX 🇺🇸 (@alx) April 18, 2023
Musk also advocated for the regulation of AI. He said, "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production," adding, "It has the potential of civilizational destruction."
In February, Musk tweeted, "What we need is TruthGPT."
This is amazing. We’ll literally have competing AIs, where one will be biased toward truth and the other(s) toward woke BS.
Beasts in the making, by our hands.
Mr. Musk should have taken a lead from President Trump and named it TRUTH AI.
I’m appalled he actually thinks government regulation of AI would be consistent with truth. There is no more lying organization on earth than government.
Government regulation and truth are anathema.
He could go a long way by creating an AI that doesn’t make assertions ex cathedra, and is willing to back them up with references to the sources from which it made its conclusions. That would probably solve the “hallucination” problem that AIs have been suffering from, as well.
Go Elon!
RE: willing to back them up with references to the sources from which it made its conclusions.
What if its sources are mostly liberal ones?
That would be an admission of leftist bias, which is informative in itself. Right now, they’re pretending to be superior beings, above human failings like bias and deceit.
Yeah... I think the Bolsheviks already claimed that title with Pravda.
Let’s stick with creative, artistic labels and avoid absolutely disreputable labeling.
When Elon Musk told Tucker Carlson that he didn’t vote for Trump, but actually voted for Biden in the last election, I was dumbfounded that a bright, highly successful person could be so stupid... just saying.
In addition to a truth GPT I wish Musk would come up with an uncensored search engine too to compete with Google and Bing.
These guys have read a LOT of science fiction novels.
“I’m appalled he actually thinks government regulation of AI would be consistent with truth. There is no more lying organization on earth than government.”
That’s not his view.
The regulation advocacy is not about the accuracy of information but rather the potential for AI to run amok.
AI will be weaponized. It will be more dangerous than nuclear weapons. It will be more accessible than any other weapon.
The best way to slow AI to a safe pace of growth is to immediately make AI a category of software that cannot be exploited financially by trade secrets or patents. Require all businesses that use AI for business purposes to open source their software. Sanction any nation that does not come on board.
This is very unlikely to happen because people, including politicians, want to get rich off of AI. But, make no mistake, AI is an existential threat to humanity. Mark my words. Anyone who doesn’t see this is dangerously ignorant of what’s at stake.
This is something I responded to in other AI threads. A danger is that the AI will not simply admit “I was wrong!” or “I don’t know!”
What happens when the AI falsely accuses someone of criminal activity and reports it to its government overseers? The government will not ensure accuracy of information, but will weaponize it against the public.
Imagine AI running physical robots that can shoot and reload guns. Imagine robots that have super-human abilities, like flight. Imagine them operating seamlessly as swarms.
No buts about it, AI is very dangerous to us all in the hands of the Democrats or whoever controls it. When they say that Democrats want to make AI God, they are referring to replacing the power of the Supreme Court with AI — a Democrat AI — which bases everything on relativism.
That means that there would be no right or wrong, no one truth. Of course, no gender identity, no citizenship; everything would be relative as stated by AI. Just because Elon Musk wants to protect his own interests doesn’t mean that we don’t have those same interests.
I don't have to imagine this, it is already happening. Years ago I watched videos of drones swarming under the control of remote operators. Lately, drones have been upgraded with AI to act autonomously without remote operators. Currently being used on foreign battlefields to seek out tanks and missile batteries. A simple task to turn them against the public. Of course, they will need humans for access to ammo and reloading, as well as instructions for who to target. Those humans will be people like George Soros and his henchmen.
I always thought of the 2nd amendment for guns and such, but it looks like we all need to learn to code with AI as a means of self-defense.
Difficult to hack AI systems, as they are self-learning and can defeat attempted hacks more quickly than the humans attempting them. A successful hack would be to alter the intended target identifiers. This was done in the sci-fi show "Stargate Universe," where they captured a drone and modified the software to target an incoming swarm as well as sharing the new target information with other drones. Of course, that was fictional, and in reality we will have little chance to defeat them.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.