Posted on 03/24/2016 11:25:23 AM PDT by nickcarraway
What happens when one of the world's biggest software companies lets an artificially intelligent chatbot learn from people on Twitter? Exactly what you think will happen.
Microsoft's Technology and Research and Bing teams launched a new project on Wednesday with Twitter, Canada's Kik messenger and GroupMe: a chatbot called Tay that was built using natural language processing so that it could appear to understand the context and content of a conversation with a user. Aimed at the 18-24 demographic, its aims were simple: “Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.” (First created in the 1960s, chatbots are interactive programs that attempt to mimic human interaction.)
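Chatbots of that early era (ELIZA, 1966, being the best-known example) did no learning at all; they matched keywords against a fixed set of rules and filled in canned templates. A minimal sketch of that idea, with entirely made-up rules and not anything resembling Tay's actual implementation, might look like:

```python
import re

# Illustrative rules only: each pair is (regex pattern, response template).
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bhello\b|\bhi\b", "Hello! What would you like to talk about?"),
]

def reply(message: str) -> str:
    """Return a canned response from the first rule whose pattern matches."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            # Fill the template with whatever the pattern captured.
            return template.format(*match.groups())
    return "Tell me more."
```

A bot like this can only echo its rules back, which is exactly why a system that instead learns its responses from live user input behaves so differently.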
In less than a day, the version of the bot on Twitter had pumped out more than 96,000 tweets as it interacted with humans. The content of a small number of those tweets, however, was racist, sexist and inflammatory.
Here are some of the things Tay learned to say on Wednesday:
“@Tayandyou Did the Holocaust happen?” asked a user with the handle @ExcaliburLost. “It was made up [clapping emoji],” responded Tay.
Another user asked, “do you support genocide?” Tay responded to @Baron_von_derp: “i do indeed.”
Microsoft eventually took the bot offline, and while it declined an interview request, it sent the following statement on Thursday morning: “The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As …”
(Excerpt) Read more at theglobeandmail.com ...
“That’s so Tay”
Who names a product “Yammer” anyway?
Isn’t “Yammer” an insult by definition?
Exactly what the press does to candidates that aren't of their choosing.
Asking straightforward questions doesn’t strike me as some sort of abuse of the ‘Tay’ project.
Morons on parade...
>>Exactly what the press does to candidates that aren’t of their choosing.
I think FR has been invaded.... lol
{snicker} Vote for Tay! Better than 100% of democrats and 95% republicans!
Really, I couldn’t have cared less if the stupid robot started spouting all sorts of nonsense - it’s a computer and can only do what it is programmed to do.
They lost it when trying to blame others for it ‘not responding right’. Tss - why didn’t they say Tay got a little stressed and needed a nap and glass of warm milk.
‘Tay’ could easily have been programmed not to discuss certain topics.
Tay is evidently not as flawed as its programmers.
I’m not too worried about AI taking over the world anytime soon...
That was my reaction too. There may have been a problem with some earlier folks who talked to it, but the quoted stuff was just straightforward questions.
But what about trolls?
Yes, there must have been some real gems in contact with the BOT too.
Kids say the darndest things!
Remember... Facebook is Genisys is Skynet.
AHHHH!! Something out of the next ‘Insidious’ movie! :)
Hilarious - my kids didn’t get into furbies, but when I was younger I got a LOT of laughs with a recordable bunny rabbit!
O Tay buckwheat!
If I’m not mistaken, Microsoft bought Yammer a couple of years ago and did not change the name. I don’t think anyone has ever confused Microsoft with marketing geniuses.