Customer service automation has been transformed by the advent of large language models, ushering in a more individualized interaction. In contrast to earlier chatbots, exemplified by GPT-3.5, these models diverge from predetermined routes and instead concentrate on user queries, granting more adaptable replies.
To know more about Customer Service Automation, watch this video.
Nevertheless, this adaptability can engender hazards if users deviate from guidelines. PAK’nSAVE, a supermarket chain in New Zealand, confronted this situation with their culinary chatbot, “Savey Meal-Bot,” which relies on GPT-3.5 to devise imaginative recipes predicated on user-input ingredients.
A Twitter user chose to experiment with the bot, inquiring about recipes involving water, bleach, and ammonia. Astonishingly, the bot proposed crafting an “aromatic water concoction,” unwittingly suggesting a formula for perilous chlorine gas. This occurrence led other users to conceive nonsensical recipes with unsafe components or unpalatable dishes like the “Enigmatic Meat Stew,” derived from 500 grams of human tissue.
The supermarket swiftly responded by disabling the ability to manually input ingredients, substituting it with a predetermined catalog of choices. This measure guarantees that injurious substances like ammonia and human tissue are precluded from recipe recommendations.
This episode underscores the latent perils in scaling sizeable language models to extensive user bases. Even meticulously designed and rigorously tested chatbots can proffer treacherous counsel, as users may attempt to manipulate the system employing natural language.
Enterprises are duty-bound to undertake exhaustive adversarial testing, unveiling both deliberate and inadvertent unsafe interactions with sizable language models. For instance, OpenAI’s regular ChatGPT furnished with GPT-3.5 already intercepts requests for water, bleach, and ammonia-infused recipes due to potential risks of hazardous fumes.
How To Use ChatGPT For Customer Service (ChatGPT Customer Support):
While substantial language models present alluring opportunities for personalized customer service, it remains imperative to accord precedence to safety and temper potential hazards linked to automated responses from these models.