Naughty robot: what if bank chatbots misbehave?

As the use of chatbots proliferates, banks will have to pay increasing attention to how they train their machines


Banks are starting to rely on chatbots to engage with customers, but they need to keep an eye on their little helpers. Lawyers warn that banks are liable for what robots say to people, and the best protection is to ensure the machine-learning process takes compliance into account.

Susan Kendall, partner at Baker McKenzie in Hong Kong, says it all boils down to how banks or their vendors train the artificial intelligence behind the chatbot.

Most chatbots are still just elaborate scripts. They are based on decision trees: if the customer asks for A, then the robot replies with B.
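A decision-tree chatbot of this kind can be sketched in a few lines of Python. The triggers and canned replies below are hypothetical illustrations, not any bank's actual script:

```python
# A scripted, decision-tree chatbot: if the customer asks for A,
# the robot replies with B. All triggers/replies are made up.

DECISION_TREE = {
    "check balance": "Your current balance is shown under Accounts > Summary.",
    "lost card": "We have blocked your card. A replacement arrives in 5-7 days.",
    "opening hours": "Branches are open 9am to 5pm, Monday to Friday.",
}

FALLBACK = "Sorry, I didn't understand. A human agent will contact you."

def reply(message: str) -> str:
    """Return the scripted answer for the first matching trigger phrase."""
    text = message.lower().strip()
    for trigger, answer in DECISION_TREE.items():
        if trigger in text:
            return answer
    return FALLBACK
```

Anything outside the script falls through to the fallback reply, which is exactly why these bots feel rigid in practice.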

Vendors – big ones like Microsoft and local specialists like Hong Kong’s Set Sail Software – are using machine learning to train the robot to understand how to respond, based on many iterations of conversation. This training process is still at an early stage. At some point, the A.I. may learn to train itself as it converses with more and more customers.
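The machine-learning approach infers the right response from examples of past conversations rather than from a fixed script. The following toy sketch uses made-up labeled conversations and simple word overlap; real vendor pipelines are far more sophisticated, but the principle — learn intents from iterations of conversation — is the same:

```python
from collections import Counter

# Hypothetical training data: past customer messages labeled with the
# intent a human agent assigned to them.
TRAINING = [
    ("what is my account balance", "balance"),
    ("how much money do i have", "balance"),
    ("i lost my credit card", "lost_card"),
    ("my card was stolen", "lost_card"),
]

def tokens(text: str) -> set:
    return set(text.lower().split())

def classify(message: str) -> str:
    """Pick the intent whose training examples share the most words."""
    words = tokens(message)
    scores = Counter()
    for example, intent in TRAINING:
        scores[intent] += len(words & tokens(example))
    intent, score = scores.most_common(1)[0]
    return intent if score > 0 else "unknown"
```

Because the bot generalizes from examples rather than matching exact phrases, a message it has never seen can still be routed to the right intent — and a badly curated training set can just as easily route it to the wrong one, which is the compliance risk lawyers flag.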

Whatever the level of sophistication, banks need to ensure the robots’ training doesn’t lead to mis-selling or bad information.

Some banks use a vendor’s software to power their robots. If the robot commits an error, the bank might be able to hold the vendor liable, Kendall says. But if the bank’s own employees train the robot and the chatbot makes a mistake, the bank is on the hook.

Everybody loves chatbots
This will become an issue as more banks use chatbots. DBS was an early adopter, launching a chatbot for its digibank offering in India, and is extending the service to developed markets such as Singapore. Citi has one that provides account summaries and rewards information, as well as answering FAQs.

“In the future, teams may hire their own A.I. chatbots”

In Hong Kong, banks have all fallen in love with the idea that robots provide better customer service – without needing to pay for staff in call centers (although as DBS’s experience shows, chatbots have a way to go). HSBC has launched a chatbot it calls Amy, and Hang Seng Bank will introduce its version later this year. Bank of China (Hong Kong) has literally gone with the all-singing, all-dancing version: its branches have a robot that dances while fielding customer inquiries.

It’s not just consumer banks, either: robo-advisors are eager adopters of chatbots for delivering wealth-management advice, credit card companies use them for their loyalty programs, and insurers are using them to process claims.

Set Sail’s founder, Sunny Wong, predicts chatbots will proliferate down to the departmental level in financial institutions. Within an insurance company, for example, different units are responsible for customer onboarding, KYC and paying claims. “In the future, teams may hire their own A.I. chatbots to automate tasks, respond to client requests and maybe do trades,” he told DigFin.

Naughty robot
The proliferation of robots means financial institutions must extend monitoring of their service offerings. “Whatever a chatbot does is still a service,” says Karen Man, principal at Baker McKenzie. In this regard, there’s no difference between chatbots, ATMs or employees.

“Whatever a chatbot does is still a service”

The good news is that so far chatbots haven’t gotten into trouble. But they’re also new, and regulators have yet to write rules specific to their use. Eric Thain, co-chairman of the Hong Kong A.I. Society, says the current versions are too crude to provide the kind of service people expect.

Banks have been prudent and have not given their robots access to customer information – imagine the uproar if a chatbot were hacked – but this could change as customers demand better service.