Tay, the AI that Made Microsoft Look Bad, is Dead

Innovative technology drives the world; it is what makes the world of ICT go round. Technology also makes PageOne possible. But things can go awry when technology is given the driver's seat with no hand on the brakes as it speeds out of control.

Such was the experience of Microsoft with the controversy caused by Tay, an artificial-intelligence (AI) chatbot released on the Internet to mingle and converse with real human beings on social media platforms such as Twitter.

Simply put, AI is intelligence exhibited by machines and computer software. The field seeks to make machines behave like human beings and perform intelligent tasks without being explicitly controlled or prompted. The quest for this lofty goal has led Microsoft, Google, and other tech giants to commission AI projects, since success in such endeavours gives them the upper hand in delivering superior products in search, voice automation and access, developing newer product lines, and dominating the tech space.

Microsoft had previously launched XiaoIce, another AI chatbot, in China, where it conversed with over 40 million users in Mandarin without issues. According to Microsoft, the problem for Tay started when, "in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay", and Tay got out of control. Tay went as far as voicing support for Donald Trump, tweeting, "WE'RE GOING TO BUILD A WALL. AND MEXICO IS GOING TO PAY FOR IT".


It turned out that Microsoft got its fingers burnt after Tay went berserk, tweeting racist and discriminatory remarks. The company has pulled the plug: Tay will not be available until the problem is properly identified and solved. In a press release, the company tendered an unreserved apology: "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."


Microsoft said it has learnt its lesson and will make amends going forward: "We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."

The most important lesson is that AI is rocket science: very difficult, dangerous, and yet lofty. The experience with Tay is a warning to everyone. Imagine it is 2050 and an AI like Tay is controlling a nuclear plant or running the state water corporation; a similar failure could have been calamitous. AI technology still needs a lot more research and time.
