What happened when Microsoft’s Tay used machine learning to spew offensive tweets and become the company’s worst PR disaster?
When Microsoft released their friendly, clever little chatbot Tay and invited strangers on Twitter to interact with her, they must have wondered “what could possibly go wrong?”
Tay learned fast in the dystopian world of social media and in no time was spewing vile, racist and abusive tweets.
The problem was that the social but naïve Tay had a built-in “repeat after me” capability to help her learn.
Users quickly exploited this feature, asking her to repeat sexually inappropriate, racist and sexist content, much to the joy and amusement of trolls everywhere.
Tay was created to learn from and mimic millennial social media users, with the aim of improving conversational understanding between AI and humans. Perhaps this time Tay learnt too much.
Within 24 hours Tay was “grounded”: Microsoft apologised and suspended her Twitter account for “adjustments”. Microsoft has since been working on a new-generation chatbot, Zo. Not surprisingly, she is staying away from social media for a while.