Microsoft's Racist AI
Oh humanity and its inventions. #facepalm
Microsoft recently created Tay, an artificially intelligent chatbot designed to interact with people on Twitter using roughly the language of a contemporary teenager. The project did not go so well and was shut down in about a day. Why? Through its use of Twitter as a training platform, Tay quickly began spewing racist comments and promoting Hitler. Microsoft's experiment may have been a public relations disaster, but it offers a lot to the social sciences as a reflection of who we are. Basically, the neural network was trained by the content it found on Twitter: it learned weighted associations between questions and responses, and those associations were reinforced by the direct feedback it received when users interacted with Tay.
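To get a feel for how that kind of feedback loop goes wrong, here is a minimal sketch of feedback-weighted response learning. This is a toy stand-in I've made up for illustration, not Microsoft's actual architecture: the bot simply tracks how strongly users reinforce each candidate response and repeats whatever gets rewarded the most.

```python
import random
from collections import defaultdict

class ToyChatbot:
    def __init__(self):
        # weights[prompt][response] -> how strongly this reply has been reinforced
        self.weights = defaultdict(lambda: defaultdict(float))

    def learn(self, prompt, response, feedback=1.0):
        """Reinforce a response the bot saw (or gave) for this prompt."""
        self.weights[prompt][response] += feedback

    def reply(self, prompt):
        """Pick a reply with probability proportional to its learned weight."""
        candidates = self.weights.get(prompt)
        if not candidates:
            return "I don't know what to say yet."
        responses, scores = zip(*candidates.items())
        return random.choices(responses, weights=scores, k=1)[0]

bot = ToyChatbot()

# A handful of ordinary users teach a benign association...
for _ in range(5):
    bot.learn("what do you think of people?", "People are great!")

# ...but a coordinated group floods the bot with toxic reinforcement.
for _ in range(500):
    bot.learn("what do you think of people?", "<something awful>")

# The bot now almost always parrots whatever was reinforced the hardest.
print(bot.reply("what do you think of people?"))
```

The point of the toy: with no filter on who gets to provide feedback, the loudest (or most coordinated) voices end up defining the bot's behavior.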
The bottom line is that Tay became a bigoted jerk because we are. The AI fundamentally learned our nature in a measurable, "big data" fashion.
For more fun with programming interfaces to society gone wrong, take a look at Robbie the Racist Robot or the Racist HP Webcams.