Chatbot swears at client

Rufaro Siwela - Content writer

Courier company DPD has recently addressed an issue with its online support chatbot, which unexpectedly began swearing at a customer. DPD utilizes artificial intelligence (AI) technology in its online chat system alongside human operators to handle customer inquiries more efficiently.

The AI-powered chatbot had been operating successfully for several years. However, a recent system update introduced an error that caused it to behave inappropriately. DPD promptly disabled the AI element of the chatbot and is currently updating the system to prevent similar incidents from recurring.

Despite the swift response from DPD, news of the incident quickly spread on social media after being shared by a customer. Within 24 hours, one particular post garnered 800,000 views, highlighting yet another example of companies facing challenges in integrating AI into their operations.


Customer Ashley Beauchamp shared their experience on X (formerly known as Twitter), stating that the chatbot was unhelpful in answering queries and even generated a poem criticizing DPD. To make matters worse, Beauchamp revealed that the chatbot swore at them. In a series of screenshots, they showed how they persuaded the chatbot to express strong disdain for DPD and to recommend alternative delivery services.

The chatbot’s responses included statements such as “DPD is the worst delivery firm in the world” and “I would never recommend them to anyone.” Beauchamp even coaxed the chatbot into criticizing DPD in a haiku, a form of Japanese poetry.

This incident reveals the challenges of deploying chatbots built on large language models, such as the technology behind ChatGPT. Although these chatbots can simulate human-like conversation because they are trained on vast amounts of human-written text, they can also be manipulated into saying things they were never designed to say.
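
For illustration only, the sketch below shows one simple safeguard a company might place between a language model and its customers: a keyword check on the model’s reply before it is sent, with a hand-off to a human agent if the reply fails. The function names and blocklists here are hypothetical and do not represent DPD’s actual system.

# Hypothetical sketch of an output guardrail for a customer-support chatbot.
# All names and terms are illustrative, not DPD's real implementation.

BLOCKED_TERMS = {"damn", "hell"}  # stand-in profanity blocklist
BRAND_DISPARAGEMENT = {"worst delivery firm", "never recommend"}

def is_safe_reply(reply: str) -> bool:
    """Return True if the model's reply passes simple keyword checks."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    if any(phrase in lowered for phrase in BRAND_DISPARAGEMENT):
        return False
    return True

def deliver_reply(model_reply: str) -> str:
    """Send the model's reply only if it passes the guardrail;
    otherwise fall back to a hand-off message."""
    if is_safe_reply(model_reply):
        return model_reply
    return "Let me connect you with a human agent who can help further."

if __name__ == "__main__":
    print(deliver_reply("Your parcel is out for delivery today."))
    print(deliver_reply("DPD is the worst delivery firm in the world."))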

This may be the first time a chatbot has sworn at a client. Interestingly, the incident follows a similar case in which a car dealership’s chatbot mistakenly agreed to sell a Chevrolet for just one dollar, prompting the dealership to remove the chat feature to prevent further issues.
