Searching through Tay's tweets (more than 96,000 of them!) we can see that many of the bot's nastiest utterances have simply been the result of copying users.
Update March 24th, AM ET: Updated to include Microsoft's statement.
Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.
There are plenty of examples of technology embodying — either accidentally or on purpose — the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft can forget to take preventative measures against these problems.
Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. It's a joke, obviously, but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity?
If we create bots that mirror their users, do we care if their users are human trash?