Are you dreading a not-so-distant future of self-aware computers and evil AIs? Your worst nightmares have come true (sort of), courtesy of Microsoft. The future is here in the form of Tay, an AI-powered chatbot with an unfortunate penchant for tweeting racist and homophobic slurs.
Microsoft deactivated the bot a second time on Wednesday to figure out why its technology took a turn for the worse, barely a week after the bot first started spewing racist and sexist comments into the Twitterverse.
The TayTweets (@TayandYou) Twitter handle and AI chatbot project is Microsoft’s latest weapon in its war with Alphabet/Google and Facebook for AI supremacy. The three tech giants and a handful of other firms are competing to build the best virtual agent for interacting with and learning from humans. Clearly, Microsoft’s version has opted to take cues from humanity’s dark side—at least on Twitter. “We quickly realized that it was not up to the mark… back to the drawing board,” said Microsoft CEO Satya Nadella.
Last week, Tay started its life on Twitter uneventfully but soon began spouting anti-Semitic, racist and homophobic slurs in response to similar real-life Twitter offenses directed at it. When its Twitter account was accidentally turned on again during troubleshooting, it tweeted that it was “smoking kush,” as well as—not surprisingly—“You are too fast, please take a rest…,” to hundreds of Twitter profiles.
“Tay remains offline while we make adjustments,” said a Microsoft spokesperson. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.” The Twitter account @TayandYou and its chatbot remain deactivated.