4 Red Flags That ChatGPT Wrote What You’re Reading, According to Experts

published Jan 22, 2024

Artificial intelligence seems to be everywhere these days, from interior design to home staging. But one of the most well-known — and, possibly, one of the most controversial — uses of AI right now is ChatGPT.

If you’ve never played around with ChatGPT before, here’s a quick snapshot of how it works: A human user inputs a query, like “Write me a new recipe to make for dinner” or “How should I decorate my guest room,” and within a few seconds, ChatGPT spits out a response. 

But some people and businesses are using ChatGPT for far more than just meal prep. They’re leaning heavily on the chatbot to write everything from college admissions essays to product reviews — even though it sometimes leads to controversial outcomes. 

As a reader, this might make you wonder whether you’ve ever unwittingly read something you assumed was written by a human but was actually produced by a bot. Wondering how to tell if something was written by ChatGPT? I chatted with a few experts to get their tips and advice for navigating this murky new AI-powered world we live in. Here’s what they had to say.

What does ChatGPT say about how to tell if something was written by ChatGPT?

But first, I wondered what ChatGPT would say if I asked it this very question. Just for fun, I made an account, typed “how to tell if something was written by ChatGPT” into the message bar, and hit enter.

ChatGPT, of course, gave me an answer right away. 

“Identifying whether something was written by ChatGPT can be challenging, as the model aims to generate human-like text,” it responded. “However, there are some general characteristics that may indicate text was generated by a language model.”

Then, it proceeded to give me a list of traits that might indicate a piece of text was written by a bot, such as repeating the same phrases or ideas multiple times or seeming generic and missing specific details. ChatGPT also said its writing may lack personal anecdotes and emotion.

Another tell, according to ChatGPT? If the information seems outdated, because ChatGPT was only trained on data through January 2022.

At the end of its response, ChatGPT tried to reassure me, writing that if I have “specific concerns or doubts about a piece of text, feel free to ask, and I’ll do my best to assist you.”


How to tell if something was written by ChatGPT

Okay, that was a fun experiment, but now for what the real, human experts had to say.

The first thing to consider is that ChatGPT text is generated by a prompt entered by a human. Because of that, the human user can tweak the prompt again and again to produce the specific voice, style, length, and type of text they’re looking for. This can make it surprisingly difficult to tell if something was written by ChatGPT.

“In my research, I have found that even well-trained humans struggle to reliably detect generated text,” says Liam Dugan, who is researching a variety of chatbot-related questions while working on his PhD at the University of Pennsylvania. (He and his colleagues created an online game, called “Real or Fake Text,” where people can test their AI-generated text detection skills — try it for yourself.)

According to Dugan, there aren’t any particular words or phrases that would indicate a piece of text was written by a chatbot. So, unfortunately, you can’t just do a “Control+F” for specific verbiage.

Other experts echoed this sentiment. 

“Unfortunately, there doesn’t seem to be an easy way to detect text that was written by ChatGPT or AI in general,” says Xavier Harding, a content producer at Mozilla, the company behind the Firefox internet browser. “We’ve seen the makers of ChatGPT, OpenAI, try their hand at offering methods to detect AI-written text and they’ve had to recall them, saying that there isn’t an accurate way to detect AI-generated text.”

Look out for copy-and-paste errors. 

But just because it can be hard to tell doesn’t mean all hope is lost. Because humans are the ones prompting ChatGPT, you might occasionally see a glaring clue that something was written by a chatbot.

“I have seen people accidentally include side comments from the chatbot, such as, ‘Sure, here’s a news article about detection,’” says Dugan.

Know how to spot “hallucinations.” 

Setting aside these types of mistakes, experts also recommend carefully studying a piece of text for factual accuracy. The more niche the information, the more likely a chatbot will simply make up a fact or get something wrong, says Dugan.

People who work in the field of AI call these missteps “hallucinations.”

“Sometimes AI will say things that are, straight up, just not true,” says Harding. “This can be an easy tell that you’re reading text crafted by AI rather than a person, although it can be difficult to know if something is a hallucination if it’s on a subject matter you’re not familiar with.”

If you’re not very familiar with the topic you’re reading about, do a quick online search to verify the information in front of you, Harding adds.

Watch out for writing that feels too general. 

Another sign that something was written by a chatbot is excessive generality (which ChatGPT itself also mentioned). If a piece of text seems overly broad — like a movie review that doesn’t mention any of the cast members by name, for example — you might want to be suspicious.

“In order to avoid making mistakes, chatbots will often write generic text,” says Dugan. “They will give very safe and predictable responses and will try to avoid needing to make up information. However, this can often come off as odd to a human reader. Readers should pay attention to anything that seems like it’s avoiding giving specific information.”

Read more than the first few sentences. 

Another tip: Read as much of the text as possible, instead of just the first sentence or two. Gary Marcus, an emeritus professor of psychology and neural science at New York University, says it can be easier to ascertain whether something was written by a chatbot when the text is longer.

“It’s very hard to tell from any single sentence, because they are trained on mind-bogglingly large amounts of text to sound as humanlike as possible,” he says. “These systems often go off the rails in longer conversation, but not necessarily in short snippets.”

And what about chatbot detectors?

Poke around online and you’ll likely encounter tools that claim to be able to detect AI-generated text. Dugan is currently conducting a large-scale study to test the effectiveness of these detectors — but, so far, the early evidence isn’t promising.

“All of our results suggest that detectors are not reliable and that you should not be using them in any high-stakes scenarios,” he says. “Many companies claim to be able to detect generated text with 99 percent or higher accuracy, but these claims are false. Detectors frequently flag human-written text as being chatbot-generated and are easily fooled.”