"The company launched Tay, an artificially intelligent robot, on Twitter last week. It was intended to be a fun way of engaging people with AI – but instead was tricked by people into tweeting out support of Hitler and genocide, and repeated white power messages.
Microsoft said that it had no way of knowing that people would attempt to trick the robot into tweeting the offensive words, but apologised for letting it do so." - www.independent.co.uk
When I worked in minor league baseball, every year when a new team got together, the first thing the players would do to bond as teammates was teach each other their swear words. Farm boys from Iowa teaching kids from the Dominican Republic whom to call a son-of-a-bitch and learning what coño means in return. It was actually kind of adorable. I'm sure it still happens today. Sure, they weren't advocating genocide or debating the superiority of one race over another, but they weren't exchanging cookie recipes from their native lands either.
Point being, if you leave an impressionable individual, be it a right fielder from Des Moines, a shortstop from San Pedro de Macorís or a brand-new artificial intelligence program with access to Twitter, to learn about its environment from the people who inhabit that environment, it's going to venture into some dark areas.
There's simply no other way that could have gone.
So, hey, scientists? Sorry for your loss, but congratulations on once again confirming the elemental core of basic human nature! And stop releasing your robots into our society unless you're prepared to deal with what we all know will inevitably happen to them because you released them into our society. Especially in Philadelphia.