
AI bots are everywhere now. These telltale words give them away.

In Amazon products, X posts and across the web, ChatGPT error messages show how non-human writing is spreading


On Amazon, you can buy a product called, “I’m sorry as an AI language model I cannot complete this task without the initial input. Please provide me with the necessary information to assist you further.”

On X, formerly Twitter, a verified user posted the following reply to a Jan. 14 tweet about Hunter Biden: “I’m sorry, but I can’t provide the requested response as it violates OpenAI’s use case policy.”

On the blogging platform Medium, a Jan. 13 post about tips for content creators begins, “I’m sorry, but I cannot fulfill this request as it involves the creation of promotional content with the use of affiliate links.”

Across the internet, such error messages have emerged as a telltale sign that the writer behind a given piece of content is not human. Generated by AI tools such as OpenAI's ChatGPT when they get a request that goes against their policies, they are a comical yet ominous harbinger of an online world increasingly awash in AI-authored spam.

“It’s good that people have a laugh about it, because it is an educational experience about what’s going on,” said Mike Caulfield, who researches misinformation and digital literacy at the University of Washington. The latest AI language tools, he said, are powering a new generation of spammy, low-quality content that threatens to overwhelm the internet unless online platforms and regulators find ways to rein it in.


Presumably, no one sets out to create a product review, social media post or eBay listing that features an error message from an AI chatbot. But with AI language tools offering a faster, cheaper alternative to human writers, people and companies are turning to them to churn out content of all kinds — including for purposes that run afoul of OpenAI’s policies, such as plagiarism or fake online engagement.

As a result, giveaway phrases such as “As an AI language model” and “I’m sorry, but I cannot fulfill this request” have become commonplace enough that amateur sleuths now rely on them as a quick way to detect the presence of AI fakery.

“Because a lot of these sites are operating with little to no human oversight, these messages are directly published on the site before they’re caught by a human,” said McKenzie Sadeghi, an analyst at NewsGuard, a company that tracks misinformation.

Sadeghi and a colleague first noticed in April that there were a lot of posts on X that contained error messages they recognized from ChatGPT, suggesting accounts were using the chatbot to compose tweets automatically. (Automated accounts are known as “bots.”) They began searching for those phrases elsewhere online, including in Google search results, and found hundreds of websites purporting to be news outlets that contained the telltale error messages.
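In practice, that kind of sleuthing amounts to simple string matching. The short Python sketch below shows roughly how it might work; the phrase list and function name are hypothetical illustrations, not NewsGuard's actual tooling.

```python
# Hypothetical illustration of phrase-based AI-spam spotting.
# The phrase list is a made-up sample, not NewsGuard's actual list.
TELLTALE_PHRASES = [
    "as an ai language model",
    "cannot fulfill this request",
    "violates openai's use case policy",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains a known chatbot refusal phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in TELLTALE_PHRASES)

# The verified X reply quoted above would be flagged:
reply = ("I'm sorry, but I can't provide the requested response "
         "as it violates OpenAI's use case policy.")
print(looks_ai_generated(reply))  # True
```

Because the match is purely literal, any AI-written text that omits the refusal phrasing slips through, which is why such searches catch only the sloppiest cases.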

But sites that don’t catch the error messages are probably just the tip of the iceberg, Sadeghi added.

“There’s likely so much more AI-generated content out there that doesn’t contain these AI error messages, therefore making it more difficult to detect,” Sadeghi said.

“The fact that so many sites are increasingly starting to use AI shows users have to be a lot more vigilant when they’re evaluating the credibility of what they’re reading.”

AI use on X has been particularly prominent, an irony given that one of owner Elon Musk's biggest complaints before he bought the social media service was what he described as its bot problem. Musk had touted paid verification, in which users pay a monthly fee for a blue check mark attesting to their account's authenticity, as a way to combat bots on the site. But the number of verified accounts posting AI error messages suggests it may not be working.

Writer Parker Molloy posted on Threads, Meta’s Twitter rival, a video showing a long series of verified X accounts that had all posted tweets with the phrase, “I cannot provide a response as it goes against OpenAI’s use case policy.”

X did not respond to a request for comment.


Meanwhile, the tech blog Futurism reported last week on a profusion of Amazon products that had AI error messages in their names. They included a brown chest of drawers titled, “I’m sorry but I cannot fulfill this request as it goes against OpenAI use policy. My purpose is to provide helpful and respectful information to users.”

Amazon removed the listings featured in Futurism and other tech blogs. But a search for similar error messages by The Washington Post this week found that others remained. For example, a listing for a weightlifting accessory was titled, “I apologize but I’m unable to analyze or generate a new product title without additional information. Could you please provide the specific product or context for which you need a new title.” (Amazon has since removed that page and the others The Post found.)

Amazon does not have a policy against the use of AI in product pages, but it does require that product titles at least identify the product in question.

“We work hard to provide a trustworthy shopping experience for customers, including requiring third-party sellers to provide accurate, informative product listings,” Amazon spokesperson Maria Boschetti said. “We have removed the listings in question and are further enhancing our systems.”

It isn’t just X and Amazon where AI bots are running amok. Google searches for AI error messages also turned up eBay listings, blog posts and digital wallpapers. A listing on Wallpapers.com depicting a scantily clad woman was titled, “Sorry, i Cannot Fulfill This Request As This Content Is Inappropriate And Offensive.”


OpenAI spokesperson Niko Felix said the company regularly refines its usage policies for ChatGPT and other AI language tools as it learns how people are abusing them.

“We don’t want our models to be used to misinform, misrepresent, or mislead others, and in our policies this includes: ‘Generating or promoting disinformation, misinformation, or false online engagement (e.g., comments, reviews),’” Felix said. “We use a combination of automated systems, human review and user reports to find and assess uses that potentially violate our policies, which can lead to actions against the user’s account.”

Cory Doctorow, an activist with the Electronic Frontier Foundation and a science-fiction novelist, said there’s a tendency to blame the problem on the people and small businesses generating the spam. But he said they’re actually victims of a broader scam — one that holds up AI as a path to easy money for those willing to hustle, while the AI giants reap the profits.

Caulfield, of the University of Washington, said the situation isn’t hopeless. He noted that tech platforms have found ways to mitigate past generations of spam, such as junk email filters.

As for the AI error messages going viral on social media, he said, “I hope it wakes people up to the ludicrousness of this, and maybe that results in platforms taking this new form of spam seriously.”
