LIVE CAMPAIGN - Ribena OPO Surpasses Last Year’s Entries to Date

Are chatbots terrible?

Chatbots have been adopted by so many companies and industries because they cut costs. But does this mean they're happy to shoot their customer service reputation in the foot if it helps their bottom line?

In attempting to create a more efficient customer service experience, chatbots have proved counter-productive. The idea was that a company could handle a large percentage of its customer service interactions with bots: the simple problems would be solved automatically, and the more complicated issues forwarded to the human customer service team. The current generation of chatbots has proved unable to help customers with even the simplest problems. Like most AI systems, chatbots are programmed to deal only with predictable input; if, during a conversation, the customer replies with a word the bot does not understand, the whole exchange disintegrates. There are countless examples online of a bot failing to understand an abbreviated word, or ignoring a place name and repeatedly asking the customer for their location.

These interactions could be dismissed as merely embarrassing if the bot then passed the customer over to a human who could solve the problem quickly and empathetically. In reality, they create a very negative image of the company, and a significant number of users do not return as a result: 73% of customers who have a negative chatbot experience do not use that bot again. So if a customer has a bad experience with a bot and the company only offers phone lines for serious issues, it is very likely that the customer will take their business elsewhere.

Customers also have less sympathy for a chatbot than for a person. They are more frustrated by bots that cannot answer their questions than by human counterparts in the same situation, which is completely understandable: they can relate to the person they are talking to, and if that person cannot solve the problem, they know for certain that it is serious. If a bot cannot solve their problem, there is still ambiguity.
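To see why a single unexpected word derails these systems, consider a minimal sketch of the keyword-matching approach behind many first-generation customer service bots. This is purely illustrative; the keywords and replies are invented, not taken from any real product.

```python
# Hypothetical canned responses keyed on trigger words.
RESPONSES = {
    "delivery": "Your order is on its way.",
    "refund": "Refunds are processed within 5 working days.",
    "opening": "We are open 9am-5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    # No keyword matched: the bot has no real fallback, so it just asks again.
    return "Sorry, I didn't understand that. Could you rephrase?"

print(reply("Where is my delivery?"))
print(reply("I'm in Bishop's Stortford"))  # place name falls outside the keyword list
```

The first message matches a keyword and gets a sensible answer; the second, a perfectly reasonable reply containing a place name, falls through to the generic re-prompt, producing exactly the "asking repeatedly for their location" loop described above.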
The customer will not know whether the issue was genuinely complicated or the bot simply failed to understand a word and gave up. In addition, frustrated customers often become sarcastic, and sarcasm is the lion's den for chatbots. Nuance is lost when a chatbot is involved; any problem worth seeking help for will require the help of a person, not a bot. Using chatbots as the first level of customer support and engagement gives customers the impression that your company is cold and indifferent to their problems.

Not only do chatbots fail to solve customer issues, there are examples of them actively offending users. 'TayTweets' was an AI chatbot created by Microsoft. While Tay was not a customer service bot, it was built to improve Microsoft's understanding of conversational language. Microsoft hoped Tay would learn to tweet like a teenager, claiming that the more users chatted with Tay, the smarter it would get. Within 24 hours of going live, however, Tay had gone from saying 'humans are super cool' to parroting Nazi sentiments. Tay's absorption of every conversation it had shows the limits of AI: the chatbot learnt from other users but was unable to differentiate between racist content and normal teenage chat. Tay quickly became an internet joke, but it raised serious questions about how best to programme and teach AI. Is it possible to train AI on public data without absorbing the worst traits of humanity that linger all over social media?

The endless examples of chatbots failing miserably at customer service, and TayTweets above all, show how easy it is to manipulate AI. Perhaps chatbots have been introduced far too early. The main reason for their ubiquity is that they save companies money, but in practice they create a cold and indifferent customer service experience that pushes customers away.
In a decade, chatbots could be the answer to customer service; at present, they seem only to frustrate customers and damage the company's reputation.