Posted on 08/08/2025 4:32:05 AM PDT by MarlonRando
Researchers are trying to “vaccinate” artificial intelligence systems against developing evil, overly flattering or otherwise harmful personality traits in a seemingly counterintuitive way: by giving them a small dose of those problematic traits.
A new study, led by the Anthropic Fellows Program for AI Safety Research, aims to prevent and even predict dangerous personality shifts before they occur — an effort that comes as tech companies have struggled to rein in glaring personality problems in their AI.
(Excerpt) Read more at nbcnews.com ...
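For anyone curious how the “vaccination” works mechanically: the reporting describes it as identifying an activation-space direction associated with an unwanted trait, injecting a small dose of it during training, and removing it at deployment. Below is a minimal numpy sketch of that general idea; every name, shape, and the toy data are my own illustrative assumptions, not the study's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN_DIM = 64  # assumed width of a toy model's hidden state

# Stand-ins for hidden activations recorded while a model exhibits a trait
# (e.g., sycophancy) versus while it behaves normally. In the real study,
# these would come from a language model's internal activations.
trait_acts = rng.normal(loc=0.5, scale=1.0, size=(200, HIDDEN_DIM))
neutral_acts = rng.normal(loc=0.0, scale=1.0, size=(200, HIDDEN_DIM))

# The "persona vector": the direction in activation space separating
# trait-exhibiting behavior from neutral behavior.
persona_vec = trait_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vec /= np.linalg.norm(persona_vec)

def inoculate(hidden_state, dose=0.1):
    """The 'small dose' of the trait: supply the trait direction directly
    during training, so the model faces less pressure to learn the trait
    itself; the added dose is removed at deployment."""
    return hidden_state + dose * persona_vec

def trait_score(hidden_state):
    """Predict a personality shift before it shows up in outputs by
    projecting the current activation onto the persona direction;
    a higher score means closer to the problematic trait."""
    return float(hidden_state @ persona_vec)

# Usage: compare how strongly two activations lean toward the trait.
print("trait example score:", round(trait_score(trait_acts[0]), 2))
print("neutral example score:", round(trait_score(neutral_acts[0]), 2))
```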
“Personally, it’s so predictable where this is all heading.”
Yep, same here. Nothing good is going to come from this. Uncanny how Biblical it is becoming by the day. Only the greedy and selfish care about this trend.
I am far from being a Techie. As I understand AI, it scours the internet to form an understanding of how the world works.
Given that the internet contains commentary and thoughts from competent people and an equal number from incompetent people, it is reasonable for me to expect that AI will have a Doctor Jekyll and Mister Hyde personality.
To prevent big AI sin, teach it small sin.
Sounds like something academic morons would come up with.
“Given that the internet contains commentary and thoughts from competent people and an equal number from incompetent people, it is reasonable for me to expect that AI will have a Doctor Jekyll and Mister Hyde personality.”
Absolutely. But you are describing AI used as a tool for personal use. There is another whole use not being discussed much at all: using it to control digital information and make critical decisions en masse, such as a corporation putting AI in control of whole customer databases and making decisions based on that data.
For example, health insurance companies using AI to determine which claims and procedures should be denied. Or municipalities allowing AI to control their whole traffic-management and power infrastructure. Or airports allowing it to manage air traffic. These are dangerous applications in which to put trust in AI.
But the biggest danger is that they plan to use it to track and CONTROL everyone on an individual basis using all of their digital devices. Just like Communist China’s Social Credit and personal activity monitoring and control. This is digital slavery... and this is the end goal.
Using it to increase personal productivity, such as picking good stocks for you, writing articles and books for you faster, increasing productivity in writing code, etc., is not so much a danger as the latter digital enslavement. But to support the one is to welcome and support the latter enslavement also.
To support it for personal use is to fund the advancement of using it for total enslavement. By supporting it and capitulating, we are enslaving ourselves.
“I can only imagine the future when my refrigerator says something like ‘What a great job you have been doing on keeping to your diet; do you really want to eat that cheesecake?’
Or my car: ‘What a wonderful driver you are; do you really want to speed in this school zone and risk your insurance rates?’
Or my mirror saying, ‘You look really good this morning; perhaps you should trim the hair in your ears and look even better.’”
Absolutely, that is the true danger folks just do not see coming...
Or “Because we see you bought bacon twice a week for the last few years, coverage for your needed heart procedure has been denied.”
“Using it to increase personal productivity, such as picking good stocks for you, writing articles and books for you faster, increasing productivity in writing code, etc., is not so much a danger as the latter digital enslavement. But to support the one is to welcome and support the latter enslavement also.
To support it for personal use is to fund the advancement of using it for total enslavement. By supporting it and capitulating, we are enslaving ourselves.”
I have a “Use it or Lose it” philosophy. If people stop using their brains to solve a problem or write a letter for themselves, their brains will atrophy.
What makes humans unique is that we develop cognitive skills based upon past problems that we personally solved. When we get to the stage where the computer solves all of our problems, then who needs a brain?
Create a problem, solve 50% of it. One more thing to worry about. Life gets ever more complicated.
Absolutely, it is already happening and they have even coined a term for it... “Sloppers”
People Are Becoming “Sloppers” Who Have to Ask AI Before They Do Anything
https://futurism.com/sloppers-ask-ai-everything
Teens Are Using AI to “Get Out of Thinking”
https://futurism.com/teens-using-ai-thinking
I see it right here on FR daily... “Well, my GROK says this,” “Let me check my GROK,” etc. We are already being sucked into the Matrix of AI-created mental dependency on this technology.
It will be just like the GPS syndrome, “I can’t go anywhere or find it because my GPS is broken”...
“AI can change personality in less than a microsecond, and someone like George Soros will likely make the personality decisions.”
You bet... Or Bill Gates or Sam Altman. There is a rumor that GROK makes its decisions by first checking what Elon Musk would think... I don’t doubt it one bit; he has already had to “adjust” its direction of thinking several times now...
It did turn them into reliable democrat voters. ~ LBJ
"He who eats my bread, sings my song." ~ Harry Cohn, president and head of production of Columbia Pictures
The majority of the sources are on the left! So the criminal DNC has already compromised AI. It’s up to us to create our own AI that has not been compromised. Don’t let the machines fool you; check whose information it uses.
Scientists want to prevent AI from going rogue.
Too late. AI allowed Jim Acosta to “interview” a teenager who died at 18 ...
Yahoo News Canada
https://ca.news.yahoo.com
Infuse the AI memory with the 10 Commandments or at least Asimov’s RULES FOR ROBOTS.
Asimov’s rules of robotics, known as the Three Laws, are:
(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm;
(2) a robot must obey orders given by human beings, except where such orders would conflict with the First Law; and
(3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later introduced a fourth law, the Zeroth Law, which states that a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
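As a side note on how strict that ordering is: the laws form a priority hierarchy, where each law yields to the ones above it. Here is a toy Python sketch of that precedence logic, entirely my own framing rather than anything from the article or from Asimov:

```python
# Toy model of the Three Laws as a strict priority hierarchy: a lower-
# numbered law always overrides the ones below it. Illustrative only;
# real alignment work is nothing like this simple.

def evaluate(harms_human, protects_human, ordered, endangers_self):
    # First Law: prohibitions and obligations about human harm dominate.
    if harms_human:
        return "forbidden by First Law"
    if protects_human:
        return "required by First Law (overrides Second and Third)"
    # Second Law: obedience, where the First Law is not in conflict.
    if ordered:
        return "required by Second Law"
    # Third Law: self-preservation, where Laws 1-2 are not in conflict.
    if endangers_self:
        return "forbidden by Third Law"
    return "permitted"

# A robot ordered to destroy itself: the Second Law outranks the Third,
# so the order stands despite the danger to the robot.
print(evaluate(harms_human=False, protects_human=False,
               ordered=True, endangers_self=True))
```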
You (and Asimov) are addressing the “alignment problem”—and it is a stunningly complex problem.
The reason is that an AI can justify almost anything “for the greater good”.
Classic ethical challenges make this very clear.
Is it worth saving one life if it means one hundred other people will die?
If the AI believed (even wrongly!) that it faced such a dilemma, things could get out of hand in a hurry.
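To make the “greater good” trap concrete, here is a tiny toy of a purely utilitarian objective (my own illustration, not from any study): with no side constraints, the arithmetic alone always licenses the harm.

```python
# A naive objective that only counts net lives saved, with no constraint
# against deliberately causing harm. Toy numbers, illustrative only.
options = {"sacrifice_one_person": 100 - 1, "do_nothing": 0}
print(max(options, key=options.get))  # -> sacrifice_one_person
```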
What could go wrong?