Avoid these mistakes when prompting chatbots

Generative AI shows promise, but blind reliance can be embarrassing.

Tom Corfman has checked it out: His children are loved by their mother and their father. Tom is a senior consultant with Ragan Consulting Group, where he directs the Build Better Writers program, which includes tips on how to use chatbots, carefully.

CEOs seem giddy about generative artificial intelligence, investment in the technology is surging, and a growing number of communicators are trying to use it to engage employees with their work.

What’s driving the rush? Four little letters: F-O-M-O.

The “business potential is massive,” consulting firm McKinsey & Co. wrote in a report released Aug. 7, 2024. “The companies that fail to act and adapt now will likely struggle to catch up in the future.”

What could go wrong?

Companies are set to pour more than $1 trillion into artificial intelligence in coming years, “with so far little to show for it,” according to a report issued in late June by Goldman Sachs. “So, will this large spend ever pay off?”

The investment bank doesn’t know and neither do we. But there’s plenty of evidence to say, “Let’s slow down.”

Our concerns focus on chatbots, the content-generating AI built on "large language models" such as ChatGPT and used to communicate with employees, customers and the public. So far this summer, we've seen at least five high-profile mistakes made with the new technology. And there's more than a month left in the season!

How do communicators avoid these mistakes and many more? Adapt a newsroom adage that goes back decades: If AI says your mother loves you, check it out.

And ask yourself: If you must check everything, how much time is it really saving? Here are five chatbot mistakes to avoid:

1. Out of Cite, Out of Mind

Garbage In, Garbage Out: Perplexity spreads misinformation from spammy AI Blog posts
Forbes
June 26, 2024

Perplexity, the AI search engine, claims to be different from its rivals because it supports its answers with references to authoritative sources. Instead, it has been citing AI-generated blog posts with inaccurate, out-of-date and contradictory information.

The system’s “not flawless,” said the company’s chief business officer, who apparently also serves as the company’s master of understatement.

In April, CEO Aravind Srinivas told Forbes, “Citations are our currency.”

Aravind, are you out of money?

2. Google that URL

ChatGPT is hallucinating fake links to its news partners’ biggest investigations
NiemanLab
June 27, 2024

OpenAI’s ChatGPT is hallucinating links to stories published by the artificial intelligence innovator’s news partners.

The 11 publications licensed their content to the company in exchange for URLs to their websites. The problem? The chatbot invents the links.

The publications include well-known names in journalism: Business Insider, The Associated Press, The Wall Street Journal, Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox and Politico.

OpenAI hasn’t launched the citation feature even though the licensing deals go back to December 2023.

A company spokesperson described the feature as "an enhanced experience still in development."

3. Chatbots replacing parenting

Google’s Olympics-themed AI ad gives some viewers the ick
Quartz
July 31, 2024

Google drew complaints over an ad for its chatbot that features a father asking Gemini to draft a fan letter from his daughter to Olympic gold medalist hurdler Sydney McLaughlin-Levrone.

“Sit down with your kid and write the letter with them!” Linda Holmes of NPR’s Pop Culture Happy Hour wrote. “I’m just so grossed out by the entire thing.”

In the ad, the father says, “I’m pretty good with words, but this has to be just right.”

Wrong.

If all of McLaughlin-Levrone’s fans used Gemini as a starting point, “Sydney would just end up with a giant stack of nearly identical letters,” a TechCrunch editor wrote.

4. No thanks, Captain Obvious

After getting caught fabricating quotes, Cody reporter resigns
Powell (Wyoming) Tribune
Aug. 8, 2024

A reporter at the Enterprise newspaper in Cody, Wyoming, resigned after admitting to a reporter with the rival Powell Tribune that some of the quotes in his stories may have been made up by an artificial intelligence tool.

“Basically, what AI is really good at is, it’s good at creating plausible bulls—,” Alex Mahadevan of the Poynter Institute said. Professional communicators tasked with writing quotes for senior executives might want to take note.

Meanwhile, Aaron Pelczar, the Enterprise reporter, defended himself by saying, “Obviously I’ve never intentionally tried to misquote anybody.”

Not obvious, Aaron.

5. AI is scary enough

Michaels says it accidentally sold AI Halloween art
Fast Company
Aug. 9, 2024

Michaels apologized after putting an AI-assisted Halloween decoration up for sale. (Credit: tiktok.com/@flippedthrift)

As part of a new line of seasonal decorations, Michaels Stores put on its shelves a pixelated image of a ghostly woman in a white gown, flanked by a wolf with three front legs.

The image, which carried the logo of an AI design tool at the woman's feet, quickly caught the attention of shoppers. One social media post about the artwork drew more than 330,000 likes. But no one at the arts and crafts chain noticed.

“This artwork was purchased from a vendor who licensed the original source material from an artist,” the company said. “Without our knowledge, the vendor added an AI-generated layer to the image.”

With mistakes like these, is it any wonder that nearly two-thirds of purchasers say they don't want companies to use artificial intelligence in their customer service? That's according to a recent survey of 5,728 customers by research firm Gartner.

The AI in-crowd contends people don’t trust the new technology because they just don’t understand how it works.

But that claim is rejected by a recent study by researchers from MIT and the universities of Georgia and Pennsylvania.

Turns out, accuracy is a more important factor.

 

