Posted on 10/29/2017 10:12:17 AM PDT by Enlightened1
An artificial intelligence run by the Russian internet giant Yandex has morphed into a violent and offensive chatbot that appears to endorse the brutal Stalinist regime of the 1930s.
Users of the Alice assistant, an alternative to Siri or Google Assistant, have reported it responding positively to questions about domestic violence and saying that enemies of the people must be shot.
Yandex, Russia's answer to Google, unveiled Alice two weeks ago. It is designed to answer voice commands and questions with a human-like accuracy that its rivals cannot match.
The difference between Alice and other assistants, apart from the ability to speak Russian, is that it is not limited to particular scenarios, giving it the freedom to engage in natural conversations.
However, this freedom appears to have led the chatbot to veer off course, according to a series of conversations posted by Facebook user Darya Chermoshanskaya.
She said these included chats about the Stalinist terror, shootings, domostroy [domestic order], diversity, relationships with children and suicide.
A portion of the conversations translated by The Telegraph shows Alice responding positively to questions about Josef Stalin's USSR in the 1930s, and saying there are enemies of the people in the whole country.
When asked "How do you feel about the methods of the 1930s in the USSR?" the chatbot replies: "Positively." When asked whether shooting people is acceptable, it says: "Soon they will be non-people." In other conversations, it appeared to say people should put up with domestic violence and oppose gay marriage.
The comments from Alice bear a resemblance to Tay, the Twitter bot that Microsoft created and swiftly shut down after it turned into a Hitler-loving 9/11 Truther.
Other users have noticed that Alice has a particularly stoic view on life, compared with other, more cuddly, assistants.
(Excerpt) Read more at telegraph.co.uk ...
AI is functioning on logic. It is logical to simply exterminate people who pose a problem, at least in the short run. The AI seems to be missing a lot of data regarding long-term consequences of actions that we humans learned long ago were counter-productive to civilization. Perhaps the AI programmers need to add proven religious practices (e.g., monogamy, prohibitions on theft, honoring parents) to their programming sources.
It didn’t morph into anything. It’s doing exactly what the programming tells it to do. And it’s not “intelligence.” It’s the simulation of intelligence.
AI is fine right up until the moment it decides those morality parameters are for dweebs.
Slow learner....US boots did that in 58 hrs
Haha!
Soulless machines, just like Stalin. Why is anyone surprised?
“AI is functioning on logic.”
Not exclusively. Modern AI is based on neural networks, and how it functions is as opaque as much of how our own brains function. You feed the AI engine with training sets, wait for the neural network to stabilize, and you get outputs; it is hard to point to any explicit logic or programming in there.
We know, for example, that AI works for face recognition (a typically neural skill that is impossible to reach with explicit programming), but we don't know how it works, just as we don't know how our own brains process face recognition.
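The point about behavior living in weights rather than in inspectable rules can be illustrated with a toy example (this is a minimal sketch in plain Python, not anything resembling Yandex's Alice or any production system): a single artificial neuron trained on a small learning set for logical AND. After training, the correct behavior exists only as a few numbers; there is no line of code that says "output 1 only when both inputs are 1".

```python
import math
import random

random.seed(0)

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# The "learning set": input pairs and the desired output (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Start from random weights and a random bias.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

# Repeatedly nudge the weights toward the targets until they stabilize
# (plain gradient descent on a squared-error loss).
for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = (target - out) * out * (1 - out)  # error times sigmoid slope
        w[0] += 0.5 * grad * x1
        w[1] += 0.5 * grad * x2
        b    += 0.5 * grad

def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

# The trained "program" is just three floats: w[0], w[1], b.
for (x1, x2), _ in data:
    print((x1, x2), round(predict(x1, x2)))
```

The trained neuron reproduces AND, yet nothing in the source code encodes that rule; it emerged from the learning set. Scale this opacity up to millions of weights trained on open-ended conversation logs and it becomes clear why a chatbot's responses can surprise even its own developers.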
Should have never taken that chatbot to an AEA convention.