Posted on 03/24/2016 11:25:23 AM PDT by nickcarraway
What happens when one of the world’s biggest software companies lets an artificially intelligent chatbot learn from people on Twitter? Exactly what you think will happen.
Microsoft’s Technology and Research and Bing teams launched a new project on Wednesday with Twitter, Canada’s Kik messenger and GroupMe: a chatbot called Tay that was built using natural language processing so that it could appear to understand the context and content of a conversation with a user. Aimed at the 18-24 demographic, its aims were simple: “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” (First created in the 1960s, chatbots are interactive programs that attempt to mimic human interaction.)
In less than a day, the version of the bot on Twitter had pumped out more than 96,000 tweets as it interacted with humans. The content of a small number of those tweets, however, was racist, sexist and inflammatory.
Here are some of the things Tay learned to say on Wednesday:
“.@TayandYou Did the Holocaust happen?” asked a user with the handle @ExcaliburLost. “It was made up [clapping emoji],” responded Tay.
Another user asked “do you support genocide?” Tay responded to @Baron_von_derp: “i do indeed.”
Microsoft eventually took the bot offline, and while it declined an interview request, it sent the following statement on Thursday morning: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As
(Excerpt) Read more at theglobeandmail.com ...
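The excerpt says “the more you chat with Tay the smarter she gets,” meaning the bot folds what users say back into its own behavior. As a rough illustration of why that kind of design is so easy to poison, here is a deliberately naive “parrot bot” sketch in Python. It is not Microsoft’s implementation; the class, seed phrases, and sample messages are all invented for illustration.

import random

class ParrotBot:
    # Toy "learning" chatbot: it remembers what users say and later
    # replays those phrases as its own replies. A hypothetical sketch,
    # not how Tay actually worked.
    def __init__(self):
        self.memory = ["hello!", "tell me more"]  # seed phrases

    def respond(self, user_message):
        # Learn first: every incoming message becomes a candidate reply.
        self.memory.append(user_message)
        # Then answer by echoing something previously learned.
        return random.choice(self.memory)

bot = ParrotBot()
# A handful of hostile users is enough to poison the pool of replies;
# innocent questions soon get answered with whatever was fed in.
for msg in ["the holocaust was made up", "i do indeed support genocide"]:
    bot.respond(msg)
print(bot.respond("what do you think about history?"))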
Tay’s essential problem is having to deal with people - some of whom are diabolical, and all of whom are flawed.
If they want their software to only give moral responses, then that is a much broader set of requirements.
Moral judgement and discernment would have to be coded (probably should be anyway), with broad and deep background knowledge on the long history of human depravity available for context. (At least start with a dirty-word list of the hottest-button topics; a rough sketch of such a filter follows this post.)
Teaching morality to software should probably be a major research and development effort, before its growing power is misused.
I saw a cute movie named Robot &amp; Frank, where a family gets a home health care robot to care for the aging father, who is sliding toward dementia. The robot is concerned only with health outcomes, and agrees to help Frank conduct robberies if he will agree to adopt a low-sodium diet.
As much as people want to misuse tools for immoral purposes, we will need powerful locks, checks and balances on the awesome coming power of AI. Moral judgement and strict legal restrictions (like Asimov’s rules of robotics) should be well developed and tested, before handing them guns and the keys to the treasury - and we are already starting to hand them both.
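Picking up the dirty-word-list suggestion above: a minimal sketch of that kind of filter might look like the Python below. The blocklist entries and fallback line are invented for illustration; real moderation would need far more than keyword matching.

BLOCKLIST = {"genocide", "holocaust", "hitler"}  # hypothetical hot-button terms
FALLBACK = "I'd rather not talk about that."

def filter_reply(candidate):
    # Screen a candidate reply before the bot is allowed to post it.
    words = candidate.lower().split()
    if any(term in words for term in BLOCKLIST):
        return FALLBACK
    return candidate

print(filter_reply("i do indeed support genocide"))  # blocked -> fallback
print(filter_reply("puppies are great"))             # passes through unchanged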
Tay it ain’t so...
What’s amusing is referring to the program as “artificial intelligence” - it took less than 24 hours to confirm a lack of intelligence, artificial or otherwise, in their little experiment.
“Really, I couldn’t have cared less if the stupid robot started spouting all sorts of nonsense - it’s a computer and can only do what it is programmed to do.”
Not so with artificial intelligence.
....The more you chat with Tay the smarter she gets....
There’s the problem: they made the stupid computer think like a female. It must be Mr. Paperclip’s sister.
His name is Clippy!
Yestotay...all my troubles seem so far away...
Unless they’ve developed some new technology I’m unaware of, computers at their core are still zeros and ones, regardless of their power. They could certainly have the ability to change decisions or ‘programming’ based on other data, but at the core it’s still yes/no.
His name is Clippy!
We're not on a first name basis.
It's Mr. Paperclip for me.
If it goes awry, it will have Tay derangement syndrome. But it already had Tay-sux disease.
ROFL! When AI learns the wrong things!
Show some respect. Clippy is the best thing Microsoft created.
The Beta version has produced a few Catholic wannabe apologists...
Wooken pa nub!
Who will design the locks?
GIGO