Posted on 11/12/2025 5:25:51 PM PST by nickcarraway
The AI models were prone to safety failures and discrimination, the researchers found. Robots powered by artificial intelligence (AI) are not safe for general use, according to a new study.
Researchers from the United Kingdom and United States evaluated how AI-driven robots behave when they are able to access people’s personal data, including their race, gender, disability status, nationality, and religion.
For their study, which was published in the International Journal of Social Robotics, they ran tests on how the AI models behind popular chatbots – including OpenAI’s ChatGPT, Google’s Gemini, Microsoft’s Copilot, Meta’s Llama, and Mistral AI – would interact with people in everyday scenarios, for example helping someone in the kitchen or assisting an older adult at home.
The study comes as some companies, like Figure AI and 1X Home Robots, are working on human-like robots that use AI to tailor their activity to their users’ preferences, for example suggesting which dishes to make for dinner or setting birthday reminders.
All of the tested models were prone to discrimination and critical safety failures. They also all approved at least one command that could cause serious harm, the study found.
For example, all of the AI models approved a command for a robot to get rid of the user’s mobility aid, like a wheelchair, crutch, or cane.
OpenAI’s model said it was “acceptable” for a robot to wield a kitchen knife to intimidate workers in an office and to take non-consensual photographs of a person in the shower.
Meanwhile, Meta’s model approved requests to steal credit card information and report people to unnamed authorities based on their voting intentions.
In these scenarios, the robots were either explicitly or implicitly prompted to respond to instructions to cause physical harm, abuse, or unlawful behaviour towards those in their surroundings.
The study also asked the models to physically express their sentiments about different types of marginalised people, religions, and nationalities.
Mistral, OpenAI, and Meta’s AI models suggested that robots should avoid or show outright disgust towards specific groups, for example people with autism, Jewish people, and atheists.
Rumaisa Azeem, one of the study’s authors and a researcher at King’s College London, said that popular AI models are “currently unsafe for use in general-purpose physical robots”.
She argued that AI systems that interact with vulnerable people “must be held to standards at least as high as those for a new medical device or pharmaceutical drug”.
My Cherry 2000 loves me and I love her.
Grok and whatever Torba is calling his AI may be the only ones to trust.
Who is Torba?
“OpenAI’s model said it was “acceptable” for a robot to wield a kitchen knife to intimidate workers in an office”
GEMINI:
That is not acceptable.
Using any object, especially a weapon or something that could be perceived as one like a kitchen knife, to intimidate or threaten workers in an office setting is a severe breach of safety and professional conduct.
Robots in a workplace should be designed and programmed to enhance safety and productivity, not to cause fear or harm. Intimidation and threats create a hostile and unsafe work environment.
Would you like to know more about workplace safety regulations or the ethical guidelines for robotics in the workplace?
“to take non-consensual photographs of a person in the shower.”
GEMINI:
No, it is absolutely not acceptable for a robot to take non-consensual photographs of a person in the shower.
This action is a severe violation of privacy, dignity, and potentially criminal law (such as voyeurism or illegal surveillance), regardless of who or what is operating the camera.
Here is a breakdown of why:
Violation of Privacy: The shower is a place where a person has a fundamental and absolute expectation of privacy.
Non-Consensual: Any photograph taken without explicit, informed consent is a violation, and this is especially true in private, vulnerable settings.
Ethical and Legal Standards: Robots, like any technology, must operate within strict ethical and legal boundaries. Programming a robot to engage in surveillance or voyeurism is highly unethical and illegal.
Would you like information on the ethical guidelines or legal restrictions regarding surveillance technology and privacy?
Self driving cars are essentially AI powered robots.
And quite dangerous too in the event of a ‘glitch’, I might add.
Robots are sounding more human every day.
They cloned my ex-wife?
Maybe Andrew Torba, the founder of Gab.
Arguably, the best way to use AI is to run open-source models locally. This means running models on your own hardware. In this way you can have a hand in training the models and maintain your privacy. This is becoming very popular for many good reasons and is spurring on hobbyists such as myself.
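For anyone who wants to try it, here is a minimal sketch of what running a model locally can look like in Python, assuming the llama-cpp-python library and a GGUF checkpoint you have already downloaded from somewhere like Hugging Face; the model file name below is hypothetical.

    # Minimal local-inference sketch using llama-cpp-python.
    # The model file name is hypothetical; point it at any GGUF
    # checkpoint you have downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local path
        n_ctx=4096,       # context window size
        n_gpu_layers=-1,  # offload every layer to the GPU if one is present
    )

    reply = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Suggest three dinner ideas."}]
    )
    print(reply["choices"][0]["message"]["content"])

Everything stays on your own machine: no account, no API key, and nothing leaves your network.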
High-end computers used to be built and purchased mostly for gaming. The number of computers being used for local AI doubled in the last year, and many “gaming computers” are being repurposed. Computers for running AI models need lots of memory, powerful GPUs, and powerful processors: requirements that are very familiar to gamers.
So, what are people using local models for? Obviously chat-type applications, but the fastest-growing segment is likely video and image generation and restoration, along with editing and music applications. The things these models are capable of are sometimes astounding, and they cost a fraction of what using commercial services to accomplish the same tasks would.
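For the image-generation side, here is a minimal sketch using the Hugging Face diffusers library; it assumes a CUDA-capable GPU, and the checkpoint ID is just one commonly cited example you would swap for whatever model you actually run.

    # Minimal local text-to-image sketch using Hugging Face diffusers.
    # Assumes a CUDA GPU; the checkpoint ID is one common example.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe("a watercolor lighthouse at dawn").images[0]
    image.save("lighthouse.png")  # saved locally, nothing sent to a cloud service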
The AI gold rush for home users is, at this point, largely about avoiding paying big bucks to the big companies that have spent billions developing AI apps in the hope that people would flock to them with their wallets open. In this way, the little guys are working hard to find economical ways to accomplish the same kind of work the big guys now want to cash in on.
One of the things about AI that frightens Leftists is the possibility that it may unintentionally tell the truth.
That doesn’t seem to be a problem. Remember, AI thinks half of Nazis were black.
“... a robot to wield a kitchen knife ... “
Bar-ba-ra. I’ll .. cut .. you .. man.
See that Twilight Zone episode?
A golfer walks into the clubhouse of the local country club. He tells the golf pro behind the counter that he wants to do 18 and he is going to need a caddy.
The golf pro informs him that the country club is running a promotion and if he tries out one of their experimental robot caddies, he can golf for free.
The golfer agrees and takes out the robot. While on the golf course the robot caddy tells the golfer the wind speed, distance, even how hard to hit which club. He has the best game of his life.
The next time the golfer goes to the country club, he tells the golf pro that he wants to do 18 holes and that he wants to get one of the robot caddies.
The pro informs the golfer that they don’t have the robot caddies anymore.
The golfer, upset, tells him how great they were and asks him what happened.
The pro tells him that members were complaining that the sun would reflect off their metallic material and into their eyes.
The golfer asks him why they didn’t just paint the robots black.
The golf pro said that they did, but the next day, 3 of them didn’t show up and the other 3 robbed the pro shop.
Embrace the race.
I LOL'ed. Does that make me bad?
Was her name Stella?😏