Microsoft shuts down Tay AI bot after Twitter users teach it racism

Microsoft launched an AI chatbot named Tay on Twitter to chat with people, only to silence it within 24 hours after users began feeding it racist comments. Built as an experiment in "conversational understanding" and intended to engage people through "casual and playful conversation", Tay was soon bombarded with racist messages.
 
"The AI chatbot Tay is a machine learning project, designed for human engagement. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments," the spokesperson said.
 
"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," Microsoft said. "The more you chat with Tay the smarter she gets, so the experience can be more personalised for you."