Posted on 01/04/2025 8:48:51 AM PST by BenLurkin
A two-hour conversation with an artificial intelligence (AI) model is all it takes to make an accurate replica of someone's personality, researchers have discovered.
In a new study published Nov. 15 to the preprint database arXiv, researchers from Google and Stanford University created "simulation agents" (essentially, AI replicas) of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.
...
To create the simulation agents, the researchers conducted in-depth interviews that covered participants' life stories, values and opinions on societal issues. This enabled the AI to capture nuances that typical surveys or demographic data might miss, the researchers explained. Most importantly, the interview format gave participants the freedom to highlight what they found most important to them personally.
Although the AI agents closely mirrored their human counterparts in many areas, their accuracy varied across tasks. They performed particularly well at replicating responses to personality surveys and predicting social attitudes but were less accurate at predicting behavior in interactive games involving economic decision-making. The researchers noted that AI typically struggles with tasks that involve social dynamics and contextual nuance.
They also acknowledged the potential for the technology to be abused. AI and "deepfake" technologies are already being used by malicious actors to deceive, impersonate, abuse and manipulate other people online. Simulation agents can also be misused, the researchers said.
However, they said the technology could let us study aspects of human behavior in ways that were previously impractical, by providing a highly controlled test environment without the ethical, logistical or interpersonal challenges of working with humans.
(Excerpt) Read more at livescience.com ...
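For context on the accuracy figure debated in the replies below: studies like this typically score a replica by comparing its survey answers item-by-item against the participant's own, sometimes normalized by how consistently the participant answers the same survey twice. The Python sketch below is purely illustrative; the numbers, function names, and normalization step are assumptions for the sake of example, not the study's actual code or data.

```python
# Illustrative sketch only -- not the study's code. Shows one way a
# "simulation agent" could be scored against the person it replicates.

def agreement(a: list[int], b: list[int]) -> float:
    """Fraction of survey items on which two answer sets match exactly."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Hypothetical answers (1-5 Likert scale) to the same 10-item survey.
human_take_1 = [4, 2, 5, 3, 3, 1, 4, 5, 2, 3]   # participant, first sitting
human_take_2 = [4, 2, 4, 3, 3, 1, 4, 5, 2, 2]   # same person, weeks later
agent_answers = [4, 2, 5, 3, 2, 1, 4, 4, 2, 2]  # AI replica's predictions

raw = agreement(agent_answers, human_take_1)         # agent vs. human
consistency = agreement(human_take_2, human_take_1)  # human vs. themselves

# A headline number like "85%" is often normalized: a replica can't be
# expected to match a person better than the person matches themselves.
normalized = raw / consistency

print(f"raw accuracy:        {raw:.0%}")          # 70%
print(f"self-consistency:    {consistency:.0%}")  # 80%
print(f"normalized accuracy: {normalized:.0%}")   # 88%
```

Under that normalization, an "85% accurate" replica matched its participant roughly as often as the participant matched their own earlier answers; the excerpt above doesn't specify the method, so treat this as one plausible reading.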
“study aspects of human behavior in ways that were previously impractical, by providing a highly controlled test environment without the ethical, logistical or interpersonal challenges of working with humans.”
Ah yes. Torture the robots. They won’t mind.
I wonder how long it has to watch you on a Zoom call? Uh oh.
Hey, I got a question. If they were able to model you, they could ask their model questions that you would never answer personally, right?
They could put a bunch of you together, introduce a stimulus then see what the group would do, right?
Totally unrelated, but did you all hear that data center owners are now building their own nuclear plants to power their data centers? Apparently, for some reason, we can build nuke plants again. Weird huh?
There would be many who would object to replicating my personality. Might be against the Geneva Convention.
No wonder AI displays episodes of psychosis and frequently hallucinates.
What does “85%” accuracy mean?
Some studies claim our DNA is 95% the same as chimpanzees.
That 15% of differences can cover a lot.
Good question. I think not, but who knows?
lol
There’s gonna be some crazy robots running around one day :)
I suspect they could, although accuracy might take a hit depending on how much other data they had.
I believe that they could extrapolate or interpolate information and the accuracy would be based on how many data points they would have. So first, have a long conversation with AI, then buy a person’s data profile, throw in their medical records, internet... ahem.. habits, and all social media activity and I think you could have a superb model with high accuracy for any questions asked.
XD XD. Good point.
I think for executing red flag operations 85% would be close enough for them.
Just think of the benefits. Have AI duplicate a person suspected of a felony, such as the January 6th riot, then use that copied personality to give a guilty plea and it will save the taxpayers lotsa money.
Seriously, I can see some cop outfit doing something like that. Having AI monitor an hours long interview with a suspect, then ask AI if he’s guilty. And leftard courts will likely use it as evidence.
In the case of Chuck Schumer, it would take about 5 minutes.
Next they will mail in votes for these AI creatures from the Google lagoon.
“I believe that they could extrapolate or interpolate information and the accuracy would be based on how many data points they would have. So first, have a long conversation with AI, then buy a person's data profile, throw in their medical records, internet... ahem.. habits, and all social media activity and I think you could have a superb model with high accuracy for any questions asked.”
In an authoritarian/totalitarian system, that would be plenty to separate out “friends” from “enemies”. A much more efficient and useful tool than just grabbing all members of a group to supply slave labor, which was what Stalin did.
It is a great tool for dictators and tyrants.
“there’s a good reason there’s only one of me” applies. a REALLY good reason. trust me.
Captain Kirk proved you could beat this by muttering “Mind your own business, Mr. Spock. I’m sick of your half-breed interference, do you hear?” over and over while they do the quiz. Try it!
Concur 100%.
Back when Stalin did it, people had to collect and put all this info together. It would take them decades to find a guy. They dealt with the most serious cases first, then finally got around to the guy who wrote a letter 20 years before and gave him his tenner.
Imagine what Stalin could have done with an AI data center with its own nuke plant next door. What took the Soviets 60 years to do to their country could be done in a couple of days to everyone on earth.
AI is insanely dangerous in the wrong hands.
Hold my beer!