Many businesses are now adding AI like ChatGPT to their chatbots, in an attempt to provide a more personable service to their users. Unfortunately, some of these chatbots are being exploited. DPD’s chatbot swore at its user and criticised the company, and Chevrolet of Watsonville’s chatbot sold a car for $1. What does this mean for businesses using chatbots?
Are chatbots impossible to keep in check?
ChatGPT and similar AI Language models are being integrated into chatbots like these in order to provide a more human-like service, with instant results. However, as with all new technology, users are interested in testing its capabilities and its limits.
Whilst the users chatting to DPD and Chevrolet of Watsonville did manage to exploit the chatbots and push them outside of their predetermined responses, it is important to note that these are not typical users. The vast majority of people are more likely to use your chatbots to get their enquiries answered quickly and easily, rather than find loopholes in its programming.
Your chatbot needs to be prepared
Although these aren’t typical users, businesses do need to be prepared for these kinds of issues when implementing a chatbot or AI into their systems. There are a number of things that you can do to ensure that the AI you are using follows your instructions and operates in line with your requirements.
1. Set some ground rules
When you’re setting up your chatbot, you need to clearly define what it is permitted to do, and what it is not permitted to do within your business. Setting out the ground rules for your chatbot will mean that it is far less likely to go rogue, and start offering cars for $1.
Think about what this might look like for your business. You might be happy for the chatbot to offer your opening hours, a return policy or a summary of one of the services you offer, but would you feel confident in a chatbot registering a user for an event? How would you want it to respond if it was asked for legal or financial advice? What would happen if someone attempted to make a purchase through a chatbot?
To ensure that your chatbot works in the way you want it to, set some limits on how it manages sensitive data, how it responds when a user asks for personalised advice, and the extent to which it can facilitate a customer through a purchase. Remember, as a business, you are still responsible for GDPR and cybersecurity - whether your chatbot adheres to it or not.
2. Don’t dump your customer service team
These limitations only work alongside an informed, dedicated customer service team. Whilst it is likely that a chatbot can answer many of the lower level queries submitted by users, when something falls outside of its capabilities, it is essential that the user is directed towards a real person.
Make sure that you outline what happens when a query is too much for your chatbot; is it forwarded to the entire customer service team? Are they asked to contact the team directly, or does the chatbot do it automatically? If so, does it collect any data from the user, and is that data collected in line with your company policies, and GDPR?
3. Test, test, test
It is essential that you test your chatbot repeatedly and regularly. This is the only way to ensure that the limitations that you have put in place are adhered to, queries that the chatbot can’t answer are sent to a real person, and inappropriate requests or enquiries are not responded to.
Where possible, have a disclaimer on your website that the chatbot cannot offer any personalised advice, and outline appropriate use of your chatbot in your terms and conditions. Follow these steps, and keep your chatbot updated with the correct information, and you should be able to offer a better service to your end user - without your chatbot going rogue.
Thinking about adding AI to your business? Transcendit can help. Give us a call on 0191 482 0444 to find out how.