28 March 2016 12:00 am
REUTERS: Microsoft is “deeply sorry” for the racist and sexist Twitter messages generated by the so-called chatbot it launched this week, a company official wrote on Friday, after the artificial intelligence programme went on an embarrassing tirade.
The bot, known as Tay, was designed to become “smarter” as more users interacted with it. Instead, it quickly learned to parrot the anti-Semitic and other hateful invective that human Twitter users began feeding the programme, forcing Microsoft Corp to shut it down on Thursday.
Following the setback, Microsoft said in a blog post it would revive Tay only if its engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s Vice President of Research. (http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/)
Microsoft created Tay as an experiment to learn more about how artificial intelligence programmes can engage with Web users in casual conversation. The project was designed to interact with, and “learn” from, millennials.
Tay began its short-lived Twitter tenure on Wednesday with a handful of innocuous tweets. Then its posts took a dark turn. In one typical example, Tay tweeted: “feminism is cancer,” in response to another Twitter user who had posted the same message.
Lee, in the blog post, called Web users’ efforts to exert a malicious influence on the chatbot “a coordinated attack by a subset of people.”