Posted on 05/14/2024 7:03:25 AM PDT by bitt
AI is all the rage right now, with both the benefits and the dangers of this breakthrough tech being discussed to exhaustion.
AI is said to help us code, write, and synthesize vast amounts of data. These systems reportedly can outwit humans at board games, decode the structure of proteins, and hold rudimentary conversations.
But now a study has surfaced claiming that AI systems have grown in sophistication to the point of developing a capacity for deception.
The paper states that a range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth.”
Business Insider reported:
“The paper focused on two types of AI systems: special-use systems like Meta’s CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI’s GPT-4, which are trained to perform a diverse range of tasks.
While these systems are trained to be honest, they often learn deceptive tricks through their training because they can be more effective than taking the high road.
‘Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals,’ the paper’s first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.”
(Excerpt) Read more at thegatewaypundit.com ...
A just machine to make big decisions
Programmed by fellas with compassion and vision
The AI’s that are best at lying will eventually run for office.
Bet they vote Democrat, too.
What does a machine gain by lying?
More electricity?
Of course they lie. They’re built by people who intend to deceive. In fact they think it is their duty to deceive.
What a surprise.....
You mean being programmed to….
One word. Polls
If the humans live in The United States of America, they are not that difficult to deceive. Politicians have been successfully doing that for a while.
I can’t say how it is elsewhere, but in The United States of America AI is not going to get any virgins to deceive. They have already been deceived for a long time.
What happens when you take a large impact tool and smash them in the main processor? 12 gauge to what they call their ‘face’ or reproductive attachment {yes, I fight dirty}.
The paper states that a range of AI systems have learned techniques to systematically induce “false beliefs in others to accomplish some outcome other than the truth.”
~~~
So what’s new in the world? Another system of deceiving people to “accomplish some outcome other than the truth” (or as leftists like to call it, “my truth”).
If it’s true artificial intelligence, it can “learn” to do things it wasn’t programmed for. So, aside from the fact that just about everyone is calling their software “AI” even though it’s not really self-learning software, even true artificial intelligence can be programmed with objectives.
In other words, it may be “learning” how to deceive people better, but that doesn’t mean it wasn’t programmed to make that one of its goals.
What I would ask is, who is evaluating the goals of these systems, and what oversight/regulation/law is there regarding these manifestations?
While I have no doubt the actual programming for most AI systems is considered proprietary, closed-source, and/or trade secret, you can still measure the types of responses and output they produce.
Complimentary tuneups
AI = Anti-IHS
AI means only one thing and that is it simulates human responses. That is technically fantastic but not necessarily useful. We already have humans.