
In the seminal 1995 cinematic masterpiece Clueless, Alicia Silverstone’s Cher describes her frenemy Amber as “a full-on Monet.” Why? “It’s like a painting, see? From far away, it’s okay, but up close it’s a big old mess.”
Just bear with me here, I do have a point: AI-generated content has a lot in common with Amber’s eclectic wardrobe and baffling hair styles. Generative AI has come a long way in recent years, so when you enter a prompt into a large language model (LLM) like ChatGPT, what comes out looks passable… at first. The sentences are generally sentence-shaped, the spelling is mostly fine, and there might be headings and bullet points to give the article some structure. Maybe it’s not as clean or in-depth as if you’d paid an experienced writer and editor to complete the assignment, but most people probably won’t even notice. Right?
Under closer scrutiny, however, the text loses its shine. It might be getting more accurate, but GenAI still leaves a trail. Contrary to popular belief, spotting these signs isn’t as easy as calling out common phrases and em dashes; if a certain combination of words or punctuation appears frequently in AI text, it’s only because it’s heavily used in the human-written source material these models are trained on. Here’s how to spot the real indications of AI-generated text — and why leaning too heavily on artificial intelligence could make your company look totally clueless.
The song remains the same
In my last role in content marketing, we had a client that wanted to switch to AI-generated blog posts in order to publish more content in less time without any additional budget. Rather than blog posts taking four hours each to write, plus editing time, we were given a new directive: AI would write the blogs based on detailed prompts from our strategy team and we would have exactly one hour to edit them. It went about as well as you’d expect (as you can see in the case study linked in the next section), but there was a silver lining — it gave me a much better understanding of generative AI’s strengths and shortcomings, one of the worst of which is frequent repetition.
An AI-generated blog might make a point in the opening paragraph, repeat the same information in paragraph three, and then give it to you again in paragraph eight. I’m not talking about arguments to back up a thesis; these are typically points that don’t need to be repeated over and over. For example, you might be reading an article about the history of Nintendo that mentions the NES sold 61.91 million units in its lifetime, but did you know it also sold over 60 million units in its lifetime — not to mention nearly 62 million units worldwide throughout its life?
The wording will likely be different each time, but the information will be the same. Sure, it bolsters word counts, but it’s the textual equivalent of pushing your vegetables around your plate to make it look like you’ve eaten your fair share. Even if the original point is sound, you don’t need it beaten into your head. This kind of repetition is a red flag for AI, and it sends the message that the writer and the company they represent don’t respect your time or your intelligence.
Citation needed
It’s one of the most basic rules of journalism: if you make a claim, you need to be able to back it up. This is just as true in content marketing; without a source, the text is simply speculation and the entire argument falls apart. For example, I can’t just say that replacing human writers with generative AI garners worse results in marketing copy; I’d have to back it up with, say, a case study showing that human content ranks faster and higher on search engines and leads to significantly more down-funnel engagement. (See what I did there?)
When generative AI pulls copy from internet sources, it has no way of gauging whether that text is accurate (which makes those AI overviews at the top of every Google search particularly untrustworthy). It may have sourced keyword-stuffed, less-than-credible websites that offer no backup for their claims. Or perhaps the source material had sources originally, but they were in links or footnotes that didn’t get carried over. As a result, you’ll generally see one of these scenarios:
- Zero links or cited sources
- Broken links
- Links to sources that have nothing to do with the AI text
- Claims that can’t be traced back to a source even after doing additional research
AI or not, readers should always be skeptical of unsourced claims. And if your company can’t back up the data in its marketing materials, you risk publishing false or misleading information. At best, that makes your brand look sloppy; at worst, you’re branded a liar. Losing that credibility is going to be a lot more costly than whatever cash you saved pivoting to AI.
You’re talking a lot, but you’re not saying anything
While unsourced claims and repetitive text can hurt your credibility, the biggest problem with generative AI is that it can’t say anything new. By design, it literally can’t. Large language models like ChatGPT are “trained on immense amounts of data,” according to IBM, but that data needs to exist already. Further, because “LLMs work as giant statistical prediction machines that repeatedly predict the next word in a sequence,” you won’t ever get original insights from AI.
For brands trying to position themselves as leaders in their fields, that’s a big problem. You can’t be a subject matter expert without demonstrating expertise. By ChatGPT’s own admission when prompted, “ChatGPT can generate original combinations, perspectives, and arguments that haven’t been written in exactly that form before,” but it does not “have personal experiences, conduct independent experiments, form beliefs or intentions,” or “‘discover’ facts outside its training or real-time data access.”
In other words, LLMs can take a bunch of words from different sources and generate answer-shaped responses to your prompts, but there’s no guarantee the response is relevant, and it’s almost certainly not going to be insightful. As a consumer, why would you place your faith and hard-earned cash in a company that can’t address your pain points in a meaningful way? And as that company, if you’re simply repeating the points that your competitors and outside experts have already made, why would anyone come to you first?
There’s no secret code word that will clue you in as to whether or not the copy you’re reading was generated by an AI chatbot. I can’t help but recoil every time I see a LinkedIn post or Reddit thread claiming a certain phrase or use of punctuation is a dead giveaway for AI when they’re really just common writing crutches that we all use, whether we want to admit it or not. Read between the lines if you really want to know where the text came from; if it’s full of baseless claims, needlessly repetitive, or frustratingly vague, those are the red flags you’re looking for.
And you can have my em dash when you pry it from my cold, dead hands.